(credit: Thor Edvardsen)

Inescapable philosophy?

Do we need philosophy to understand and explain our behaviour?

I am going to make a quick assumption here about your honesty. How come you never engage in something like shoplifting? Surely doing so is to your economic advantage. Is it because it is against the law? Maybe because you are scared of being punished or humiliated? What if you were certain you wouldn’t get caught — would you then choose to do it? Do you actually consciously consider the possibility every time you are in a shop?

Behaviour and choice are two sides of the same coin. Almost all our behaviour is determined by choices we make. Neoclassical economics treats us largely as rational, self-interested, utility-maximizing beings — the so-called homo economicus — and seeks to explain how we choose, and hence behave, from that perspective. Not so fast, say psychologists and behavioural economists (the line between the two can be a bit fuzzy). We may like to think we are good at making rational decisions, but we have a few problems with willpower and self-control, quite a bit of our behaviour is a matter of habit or unconscious choice, and on top of that we are riddled with cognitive biases and fall prey to fallacies. We need those insights to explain our behaviour too.

Do these two approaches together give us all we need to understand people’s behaviour, and improve our decision-making? Not quite. Our choices are sometimes also determined by what we think is right and wrong — our moral intuition. Morality may not play a large part in choosing between a dark red or a beige jumper when we get dressed in the morning, but it creeps in when we are shopping (fair trade sugar or own brand?), want to get out of a visit to the in-laws (tell a white lie, or go anyway?), or indeed between sneaking an item past the attention of the shop assistant… or not.

Questions relating to ethics belong in the domain of philosophy. In the same way that a behavioural economist can get excited by situations where people’s intuitions are not necessarily a good guide to what is best for them, a philosopher’s eyes can light up at the thought of moral dilemmas, where moral intuition is likewise not necessarily a good guide. A classic thought experiment that explores such a dilemma is the trolley problem. British moral philosopher Philippa Foot first formulated it (almost as a by-the-by) in a 1967 paper in the Oxford Review.

Over the last 50 years, it has gained an unusual amount of fame for a philosophical device. In case you’ve never heard of it, or have forgotten, it is about the following question. Should you divert a runaway tram (as in Foot’s original) or trolley (in current parlance) headed for a track where five workers will be killed onto another track where just one worker will be killed, or instead do nothing?

A difficult choice. (image: Minnesota Historical Society)

Not everyone is convinced of the value of such thought experiments, though. A couple of weeks ago, an article in Current Affairs entitled “The trolley problem will tell you nothing useful about morality” left little doubt as to the view of the authors. They believe it is so far removed from any ordinary moral choices that it’s ‘close to nonsensical’.

Yet this very problem is being wheeled out to explore and debate the behaviour of autonomous cars. What if a self-driving car is confronted with a situation in which it can either mow down five pedestrians and save the two people it carries, or avoid the pedestrians by crashing into a concrete wall, killing the two passengers? MIT’s Media Lab turned questions like these into scenarios in its Moral Machine. Kill three elderly men and two elderly women who are legitimately crossing the road on a green signal, or a couple with two boys and a baby who are crossing on the red signal? Try it out for yourself: there are 13 standard scenarios and hundreds of user-made ones (you can make your own too).

The trolley problem is not a good guide to help design the algorithms that (literally) drive an autonomous car. For the foreseeable future, cars will not be able to determine whether a person on the crossing is a devout church-goer or an evil killer. They will not know whether the café to the left is empty or hosting a kids’ birthday party, nor whether there is a mother with a baby in a pram about to emerge from behind a parked van.

Self-driving cars cannot help us make better moral choices in emergencies, based on better or more information. What they can do is help us avoid being distracted, or reacting impulsively but unwisely when something unexpected comes into the vehicle’s path. At best the artificial intelligence in self-driving cars can overcome our own weaknesses — from the temptation to check our phone when it beeps and our susceptibility to road rage, to the tendency to tailgate the car in front and then slam on the brakes when it slows down. But it is not some kind of superpower that can solve moral dilemmas.

On the other hand, the authors of the Current Affairs article are too quick to dismiss the trolley problem as guidance for our human moral choices. Some people, for example those working in healthcare, face tough choices much like it. A paramedic confronted with two critical victims of a car crash must decide on the spot who to treat first. When the UK’s National Institute for Health and Care Excellence (and equivalent bodies in other countries) recommend or reject a new medicine, they effectively make decisions about who will live (longer) and who will not. Funds are limited, and every pound spent on a new anti-cancer drug cannot be spent on diagnostic equipment for ambulances. These choices are barely less horrific than those in the trolley problem.

Or maybe an anti-cancer drug would be better? (image: Biotechnose)

And even in our more mundane lives, the fact that our moral dilemmas rarely involve choosing between killing one person or five does not make the choices we do face trivial. A recent paper by Amitai Shenhav and colleagues describes how banal choices can cause as much anxiety as having to choose between options that are of great importance. Maybe a hypothetical life-and-death choice is not such a bad thought experiment to help us understand the nature of decisions that involve a moral dimension.

Should we let down our colleague who’s asking us to go for a drink and discuss a serious problem at work, or instead our partner who is cooking a meal this evening? If you’re deciding the fate of an underperforming member of your team, should you give them one more last chance, or sack them (knowing that they won’t easily find a new job)? We frequently face situations not unlike these two examples: forced choices leading to a no-win outcome… just like in the trolley problem. We can try to avoid them or look away, but that is not always possible.

Unlike irrational decision-making, which can at least be evaluated against a rational benchmark, such ethical dilemmas never have an obvious correct answer. There is no nudging, no behavioural economics trickery (and no artificial intelligence) that can help us make the right choice.

The best we can do is be aware of such quandaries. And the trolley problem, fifty years old this year, can still be a pretty good guide for us to learn to understand their complexity. If it makes undergraduate students, and the rest of us, realize the limitations of our simple moral intuitions, maybe we should wish it well for the next fifty years.

Originally published at koenfucius.wordpress.com on November 17, 2017.


Accidental behavioural economist in search of wisdom. Uses insights from (behavioural) economics in organization development. On Twitter as @koenfucius
