
Why people do what they do (and don’t do what they don’t do)

How to understand better what motivates people’s behaviour — in organizations and elsewhere — and how this can help us manage behavioural change, in others and in ourselves

People don’t always do the right thing. It sounds like (and arguably is) a truism, but isn’t it a bit baffling? If employees know that they need to file critical paperwork on time, why are they not doing it? If a boss continually proclaims the importance of meeting 1:1 with her team members at least once per month, why is she barely managing three such meetings in a year? If we know that it would be good for us to snack less and exercise more, how come we still polish off a packet of biscuits every evening and laze on the couch watching Netflix, rather than have an apple after dinner and go for a jog?

People usually know very well what the right thing is, and yet they don’t do it. Why don’t people do the right thing — for their teams, for their company, for their families and for themselves?

Perhaps this is the wrong question — or at least the wrong question to start with. We should first understand why people do what they do, not why they don’t do what they ought to do. We should seek to understand why the world is the way it is, rather than why it is not the way we think it should be.

One way that I have found to be helpful in this quest for understanding is to see behaviour as the result of a decision, of a choice between two or more options, in which there are trade-offs between the positive and negative characteristics of each one. To understand why people do what they do, we must look at how they choose between the different things they could do. And I have found that, by and large, these decisions can be seen as converging into three clusters.

Cluster 1: Weighing up

Behaviour preceded the emergence of complex nervous systems, let alone consciousness, by a long time. As soon as there was life, there was behaviour, and our most distant ancestors already faced choices between what was good for them and what was not good for them. The members of a species that ‘chose’ the behaviour most likely to allow them to survive, prosper and procreate stood a better chance of doing just that.

On and on, the driving force was the balance between two opposite factors. If the ‘cost’ of evolving a particular feature in a species was outweighed by the ‘benefits’ it bestowed, it ended up fitter than its less well-featured counterparts, and prospered, procreated and survived. Evolution has, in essence, been an economic process, and this realization supports economist Robert Frank’s argument that Charles Darwin may have a stronger claim to the title of father of economics than Adam Smith.

This process has been repeated countless times and led to the emergence of our own species. And we have an ability that no other species has, at least not remotely to the same degree as us: the ability to reason. We can compare costs and benefits expressed in ways that are unequal in nature (like time and money, reputation and effort, or excitement and compassion). We can even handle uncertainty. And this allows us to handle a vast array of preferences, way more than all our ancestors, who were mostly concerned with having enough food, being able to shelter from predators and the elements, and mating.

The first cluster of decisions contains choices that are made by weighing up the costs and benefits involved. We use our reasoning power on the available facts, take into account our preferences, work out the consequences of the different possibilities ahead of us, and choose the one that is the most appropriate — the one that we think, on the whole, best serves our interests. This mode of decision-making would appear to align well with the traditional concept of economic decision-making — the homo economicus — although it easily extends beyond the narrow, materialist view. Cost-benefit analysis works just as well for deciding between projects that need funding as for deciding between driving to the station to pick up your spouse when it’s raining cats and dogs (you may imagine a plausible reward) and remaining indoors and starting the next chapter in the gripping book you’re reading (you may imagine a plausible castigation).

In our place of work, we may reason that the cost of inviting the six members of our team over to our home for a meal will be outweighed by the benefit of more team cohesion and loyalty, and a better chance that it delivers against its goals. We may reason that staying late to help out a colleague will strengthen our reputation as a dependable member of staff in the eyes of our boss; or that focusing relentlessly on pursuing a target that has come down from the top is more likely to safeguard the bonus we’re hoping for.

In our private life, we may reconsider the impulse to take the car for a day trip, and reason that the peace of mind (no worries about traffic jams or finding a parking spot) of taking the train is worth the extra cash cost*. We may also reason that it is worth spending a couple of hours on a Saturday afternoon searching for the best home insurance deal if we expect this will save us £80.

With this mode of decision-making, we consciously recognize that we have conflicting preferences, and we work out, through reasoning, what combination or compromise is serving us the best.
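To make this kind of trade-off concrete, here is a minimal sketch in Python of the home insurance example above. Only the £80 expected saving and the couple of hours come from the example; the £25-per-hour value placed on free time is an assumption purely for illustration.

    # A minimal sketch of the weighing-up mode of decision-making.
    # Only the £80 saving and the two hours come from the example above;
    # the value placed on an hour of leisure is an illustrative assumption.

    def weigh_up(benefit: float, cost: float) -> bool:
        """Return True if the expected benefit outweighs the expected cost."""
        return benefit > cost

    expected_saving = 80.0     # £ saved by finding a better insurance deal
    hours_spent = 2.0          # Saturday afternoon spent searching
    value_of_leisure = 25.0    # assumed £ value of an hour of free time

    cost_of_searching = hours_spent * value_of_leisure   # £50 of foregone leisure

    if weigh_up(expected_saving, cost_of_searching):
        print("Worth searching: the saving outweighs the time cost.")
    else:
        print("Not worth it: keep the Saturday afternoon free.")

Real decisions are rarely made this explicitly, of course, but the same logic of comparing costs and benefits, however roughly estimated, underlies the whole of this cluster.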

Cluster 2: Beliefs

Sometimes, however, we think there is no need to go to the effort of weighing up costs and benefits. Here we find the second cluster of decisions. We already know what the best thing to do is (or what we should definitely not do) — or at least we believe we do.

Such beliefs can be quite profound, and are often not rooted in evidence. Moral and ideological beliefs may guide us (almost) unconditionally to behave in a certain way, or indeed to not behave in that way. Even if we know we can do so unseen and with zero risk of punishment, most of us will still refrain from stealing money from a friend, despite the fact that it would provide a clear benefit. The values we hold dear, either personally (e.g. regarding our close family and friends) or professionally (e.g. ethical codes of practice, but also corporate values) can be strong influences on our behaviour. Likewise with membership of particular groups where behavioural conformity signals allegiance and secures belonging (think of the Girl Guides or Boy Scouts, but also of cultural groupings like goths or punks, or even small, ad hoc cliques of friends). We may act according to stereotypical beliefs about certain others, whether based on their citizenship, ethnic origin, or the department they belong to at work; out of the belief that we are entitled to something (or that others are not); or because we have beliefs about other people’s motives.

At work, we may have a colleague who is so strongly convinced that her project will be a success that she leaves little or no room for doubt, and puts all her energy into helping her team deal with the challenges it faces. We may have a boss who believes that a leader should be at work before his team members and should be the last of his team to go home, regardless of whether it serves a purpose or what family demands there are. Perhaps we have met a boss who is so strongly of the opinion that work and private life need to be separate that he finds sending work email after 6pm, before 8am or during the weekend unacceptable, no exceptions (and beware if you are caught doing so).

In their private lives, people also sometimes behave in accordance with beliefs, without much, if any, reasoned weighing up of the different options. One neighbour may choose to mow his lawn on a Sunday afternoon, convinced he is entitled to do so whenever he bloody well wants. The other decides to get into a fierce argument with him, firmly believing she is entitled to peace and quiet at that time. Some people may believe that cleanliness is next to godliness and keep their place neat and tidy (without considering how all that time could be spent differently); others may believe that the best way to fulfil one’s purpose is to read and accumulate books by the cartload, which then collect in piles around the house, in anticipation of the acquisition of yet another bookcase.

The principal characteristic of this decision cluster is the strength of the underlying preference: we believe we know what is most important, what must be done, what is right and just. There is no attempt at reasoning, no weighing up of the costs and benefits of the chosen option or any other options, and no compromise.

Cluster 3: Context

Being convinced that we already know the answer is not the only situation in which we don’t engage in deliberate reasoning to work out the best behaviour. Sometimes we don’t have any preference, or it is so weak that we barely discern a difference between the options available.

But even when the choices seem inconsequential and we don’t much care which is best, or when we are uncertain and unable or unwilling to resolve that uncertainty, we still face different options ahead of us. We may not have a strong preference for (or indeed against) either an apple or a doughnut, but we still somehow have to choose between the two.

This is when we are, often unconsciously, led by what appears the easiest, the quickest, the most obvious, the safest or least risky, the most enjoyable or gratifying of options. And in that choice, factors like the context, the situation, or our inherent tendencies and biases often play a decisive part.

There is an array of concepts in the behavioural sciences that are associated with this kind of unthinking behaviour. Choice architecture, a term coined by Richard Thaler and Cass Sunstein in their book Nudge, is a prominent influence on what we choose. Defaults, physical proximity, the order in which possibilities are presented on a list, even the number of options can all sway us one way or another. We may imitate other people’s behaviour without much thinking (known as social proof), be deterred by behaviour that appears risky or dangerous, or be attracted by choices that are appealing through the (short-term) benefits they promise or the instant gratification they offer. And there are many more such influences that are linked with the situation, the context, or our inherent behavioural tendencies.

Social proof is an important behavioural driver in workplaces — we pick up many unconscious signals from our colleagues and act accordingly, without giving it much thought: from bringing cakes when it is our birthday to staying late (or going home on the dot), simply because that is what others also do. Or we may be inclined to support an organizational change because we have contributed to it (or be sceptical because it was concocted by others!) — this would be a case of the IKEA effect.

At home too, quite a bit of our behaviour is guided in the same way. If we don’t have a strong preference for how to spend our evening, it is likely going to be the same as what we did yesterday and the day before — perhaps read a book, spend time catching up with our social media or watch TV. If it is the latter, would you be watching the same shows if the channels that occupy the first slots in the programme guide were swapped with others further down the list, or is your viewing to some degree determined by whatever those channels are?

This decision mode and the previous one share the lack of reasoned deliberation about the possible behavioural options ahead. But while decisions that originate in the Beliefs cluster are hard to flip because of the strength of the conviction behind them, those in the Context cluster are more open to being flipped. At a different time, or with a small alteration to the choice architecture, the decision could be nudged in the other direction.

Interactions

Our behaviour is not necessarily determined by a decision that falls entirely in one of these three clusters. We may visualize their positions on a kind of circular spectrum, mapped onto the two key dimensions of this behavioural analysis framework: the degree to which a decision follows from reasoning, and the strength of the preferences involved.

We may, for example, consider prior beliefs and convictions as one of the elements to trade off in a comparison between different behavioural options. How important do we believe it is to attend our daughter’s school concert — important enough to miss a crucial client presentation (that, if successful, might lead to a significant project)? Such a decision would lie between the Weighing up cluster and the Beliefs cluster. Another possibility is that the choice of the options that are being considered is guided by prior convictions — we may choose not to explore certain options, and to include others, based on belief and gut feel, rather than in a more objective way.

The choice of alternatives is also an area where the Context and Weighing up clusters can interact with each other. We may select the available options on the basis of what we are familiar with or what is prominent in our environment (known as the salience bias). Hyperbolic discounting (our tendency to give disproportionate weight to benefits that arrive sooner) may influence how we actually weigh up the different options.
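To illustrate, here is a minimal sketch in Python of how hyperbolic discounting can tilt such a weighing-up. It uses the commonly cited hyperbolic discount function V = A / (1 + kD); the amounts, the 30-day delay and the discount rate k are made-up numbers for illustration only.

    # A minimal sketch of hyperbolic discounting.
    # The amounts, delay and discount rate k are illustrative assumptions.

    def hyperbolic_value(amount: float, delay_days: float, k: float = 0.05) -> float:
        """Subjective present value under V = A / (1 + k * D)."""
        return amount / (1 + k * delay_days)

    now = hyperbolic_value(80, delay_days=0)      # £80 available immediately -> 80.0
    later = hyperbolic_value(100, delay_days=30)  # £100 available in a month -> 40.0

    print(f"£80 now feels worth {now:.0f}; £100 in 30 days feels worth only {later:.0f}")
    # Even though £100 is objectively more than £80, the delayed option is
    # discounted so heavily that the smaller, immediate reward wins.

A purely dispassionate weighing-up would compare the £80 and the £100 directly; the earlier benefit only wins because of how steeply we discount the future.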

In some situations where decisions are not made by reasoning, there may be some ambiguity as to whether the preference is strong or weak. This can be the case with heuristics — one that is associated with strong prior evidence (“it’s worked like this 100 times before”) or with a powerful influence (“my Mum always said I should not drink with my meal but afterwards”) may lose its potency over time, and become more of a default choice, in the absence of another, even only slightly more powerful, influence. Where does the persistent pursuit of a project at work, or of an activity in our private life, just because of the time, the money or the effort that has already been spent on it, belong? This situation is known as the sunk cost fallacy, but is it an instance of a strong belief decision (“whenever you’ve spent a lot, you must go on”), or of a situational decision (“throwing away all that investment feels like a loss and we don’t like it”)?

The boundaries between these clusters are not always clear-cut. Human behaviour is a bit messy, and we should not be surprised that actual behaviours are sometimes not the result of a pure, crystal clear instance of a particular mode of decision-making. So, when investigating a particular behaviour, it makes sense to view it from the perspective of all three clusters and explore how each of the corresponding decision modes is implicated: reasoned or not? Easy to sway or not? That will already tell us a lot.

Emotions

You may wonder why the term ‘emotion’ has not popped up even once so far. The main reason is that emotion is effectively a critical component in every decision. It is not a differentiator between the three decision-making modes.

This is perhaps the clearest in the Beliefs cluster. Strong beliefs and convictions correspond naturally with powerful emotions, positive or negative. If you have, or have had, children in adolescence, you may be familiar with their propensity to either love or hate people, activities, music, food, places and so on. Most of us grow out of that kind of binary thinking, but we still maintain the emotional connection to our personal values, ideologies, and group affiliations. And we often still feel strongly about them.

It is also apparent in the Context cluster: we experience a more positive emotion when a behaviour is easy than when it is difficult. Likewise, most people will choose to behave in a low-risk rather than a high-risk way (don’t sit in the first row, just in case you are called on stage!), and so on. It is thanks to our emotions that we choose the easiest, most obvious, safest, or most pleasant behaviour.

And when we analyse and weigh up options against each other? Ultimately, we want to choose the behaviour that is best (or least bad, if we’re in trouble). And to assess whether one behavioural option is more suitable than another, we make use of our emotions. If we buy a house with a garden, we may prefer a larger garden (and experience positive emotions with it), but not one that is too large (anticipating the negative emotion of the effort needed to maintain it).

What can go wrong?

So now that we have some idea what might motivate our own and others’ behaviour, we can begin to consider how come we may be doing the wrong thing. Let’s look at some examples.

Weighing up

Imagine an organization in which the procurement team has been given a challenging target to reduce the cost of the purchased materials. They negotiate hard with suppliers and secure a contract at a lower price. But what they don’t realize is that, in order to make the deal worthwhile, the suppliers will scale back the hitherto ‘free’ engineering support. This causes a lot of problems in production, with costly stoppages and inefficient phone calls with the supplier, where previously an engineer would come out within hours. The procurement team were sure they were doing the right thing, as they were meeting their target. But they were unaware of the consequences elsewhere, unaware of the organizational externality. Too narrow a view of the choices (tough negotiation vs a more collaborative arrangement and seeking a different compromise) and a lack of awareness of the consequences can give rise to the illusion of doing the right thing while, ultimately, doing the wrong thing.

Another possibility is that the coverage of possibilities and consequences is good, but the importance that is assigned to their characteristics is distorted by strong positive or negative preferences rooted in belief. Why might a team leader not adequately consider all the proposals put forward by his team to improve the performance of the production process or solve issues? Consider a team leader with a strong belief that, in order to understand the complexity of the production process, you need a lot of experience (and he has the most). When junior team members have ideas to improve performance or solve issues, he systematically undervalues their contributions, favouring the input of employees with longer tenure (and who tend to respect his authority). Despite considering all the proposals, he is potentially failing to spot the most useful ones.

Why might the bulk of the staff in a company actively avoid enrolling in a mandatory (and useful) training programme around the adoption of new processes and systems? Picture an organization that has been reshaped from an internally focused technical consultancy with a captive market, to an autonomous entity that competes in the outside market, from where it gets the majority of its business. This has been achieved through a major cultural change, in which all employees were given a clear message about the need to think commercially and incentives that encourage a relentless focus on the customer. Now, a significant upgrade of processes and systems is taking place, and everyone needs to set aside a substantial amount of time for training. Many employees see this as counter to their primary value: the customer comes first. Because of a nuance-less adherence to a core corporate value, they believe they are doing absolutely the right thing by resisting — and creatively finding ways to avoid — enrolling for the training.

Why might a senior team agree to different, more collaborative ways of working, but, when it comes to it, appear to pay only lip service to their good intentions? Here we have two organizations within the same company that do pretty much the same thing, but which, for historical reasons, have rarely if ever worked together in at least 25 years. More than half of the management team members in either organization have never spoken to their counterparts. A new boss brings them together in a joint workshop, which ends on a high with lots of topics agreed for specific discussion and mutual promises to talk frequently across the 1,000-mile distance between the two. But it doesn’t happen. Not because it is not seen as valuable and worth the time, or because they don’t believe it is a good thing to do. The preference is there, but the status quo bias cultivated over more than a generation turns out to be overpowering, and there is no social proof — everyone is looking at their colleagues, but nobody is actually taking the first step.

Conclusion

Is this the final word in behavioural analysis? No — the story is ongoing. But in the nearly ten years that I have been using and fine-tuning this approach, I have found that, in almost all cases, it helps diagnose sometimes puzzling organizational dysfunction and reveal motives that were otherwise not apparent. Understanding why people — who presumably know what the right behaviour is — are doing the wrong thing is a critical step in figuring out how to remedy the issue.

___

*: if we take into account the full cost of driving, it may save us money too

Originally published at http://koenfucius.wordpress.com on August 13, 2020.

Written by

Accidental behavioural economist in search of wisdom. Uses insights from (behavioural) economics in organization development. On Twitter as @koenfucius
