Colourful dice (credit: HeungSoon via Pixabay)

A likely trade-off

Probabilities are hard to understand, but some mistakes may not be what they seem

Chance is a bit of a slippery concept for many of us. We can grasp that, if we toss a fair coin, the likelihood of heads (or tails) as a result is about 50%. If we roll a die, we also understand that each number between 1 and 6 is equally likely to turn up. We may even be able to work out that, when a colleague informs us that her new neighbours have three children, the chance that these all have the same gender is 1 in 4.
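If you want to check that last figure rather than take it on trust, a quick way is to enumerate the equally likely possibilities. Here is a minimal Python sketch (assuming boys and girls are equally likely and the three births are independent):

```python
from itertools import product

# All 2^3 equally likely boy/girl combinations for three children.
combos = list(product("BG", repeat=3))

# The combinations in which all three children have the same gender: BBB or GGG.
all_same = [c for c in combos if len(set(c)) == 1]

print(f"{len(all_same)} out of {len(combos)}")  # 2 out of 8, i.e. 1 in 4
```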

Yet even only slightly more complicated questions tend to faze us. If there are three of you in a room, the chance that at least two people have the same birthday is quite small — we get that intuitively. (It is in fact less than 1%.) But how many people would there need to be in a room for it to be almost certain that at least two of them share a birthday — say more than 99%? Our intuition might suggest something like 99% of 365. That feels nicely symmetrical: with three it’s less than 1%, so with about 365–3 it should be more than 99%. But unless you know the answer or are a probability calculus wizard, you might be surprised to learn that the correct answer is 57. (See the Wikipedia entry on the Birthday Problem to find out more.)
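If you would rather verify that surprising answer than look it up, the standard approach is to compute the chance that everyone's birthday is different and subtract it from one. A short Python sketch (ignoring leap years and assuming all 365 birthdays are equally likely):

```python
def p_shared_birthday(n):
    """Probability that at least two of n people share a birthday."""
    p_all_different = 1.0
    for i in range(n):
        p_all_different *= (365 - i) / 365
    return 1 - p_all_different

print(round(p_shared_birthday(3), 4))  # 0.0082 -- less than 1% for three people

# Find the smallest group size for which the probability exceeds 99%.
n = 2
while p_shared_birthday(n) <= 0.99:
    n += 1
print(n)  # 57
```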

We are also not always good at reasoning about relative probabilities, or at least that is how it seems. One of the challenging, if not outright controversial, results from the work of Amos Tversky and Daniel Kahneman (the latter of whom received the 2002 Nobel Memorial Prize in Economics) is known as the conjunction fallacy. This is a formal fallacy (reasoning that is logically flawed), in which a more specific proposition (e.g. two conditions apply in conjunction: my friend has a sister named Anne) is deemed more likely than a more general one (e.g. just one condition is met: my friend has a sister).

The canonical example which Tversky and Kahneman gave us is that of Linda, a hypothetical young woman, about whom we are told some facts. Then we are asked what is more likely: (a) she is a bank teller, or (b) she is a bank teller and active as a feminist.

Probability theory is clear: (b) is more specific, and can therefore at best be as likely as (a), and that only in the case where all female bank tellers are feminists. Yet Tversky and Kahneman found that 85% of the subjects in their study concluded that (b) was more likely than (a).

Pub car park in the rain with a few cars, among which a white Mini. Careful with that white car, Eugene! (photo: Ian CC BY)

Critics of behavioural economics, like Gerd Gigerenzer, say this is not a mistake, because people don't necessarily use probabilities in a mathematical way. Given the information we received about Linda, it is plausible that she is a feminist, and we often use plausibility as a heuristic for likelihood. We can easily make up other examples. Say Eugene is just leaving the pub, about to drive home. He has had one pint of lager with his workmates, and the alcohol level in his blood is just above the legal limit. What is more likely: (a) he will drive home in a white car, or (b) he will drive home in a white car and scrape the wall as he parks up on the drive at his house? Even if you have been prompted by what you've just read to give the correct answer, I suspect you may feel the pull of your intuition towards the wrong but plausible answer that just feels right.

But the fallacy does not just occur in contexts where colloquial language may blur the rigid rules of probabilistic reasoning. In another experiment, Tversky and Kahneman asked subjects to consider a regular die with four green faces and two red ones, which would be rolled 20 times, with the sequence of green (G) and red (R) outcomes recorded. The subjects could choose one of three short sequences, and if their choice occurred somewhere in the succession of 20 rolls, they would win $25, so they had an incentive to get it right.

The three options were: (a) RGRRR, (b) GRGRRR, and (c) GRRRRR. Sharp-eyed readers will notice that sequence (a) is contained in sequence (b), and hence has a higher probability of occurring. Yet 65% of their subjects chose the less likely option (b), even though this noticeably reduced their chance of winning the $25 compared to option (a).
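A quick way to see the gap between the options is to simulate the game. The following Monte Carlo sketch is my own illustration, not the researchers' method; it estimates, for each of the three sequences, how often it shows up somewhere in 20 rolls of a die with four green and two red faces:

```python
import random

def p_appears(target, n_rolls=20, trials=100_000):
    """Estimate the probability that `target` occurs as a contiguous run
    somewhere in n_rolls rolls of a die with four green (G) and two red (R) faces."""
    hits = 0
    for _ in range(trials):
        rolls = "".join(random.choice("GGGGRR") for _ in range(n_rolls))
        if target in rolls:
            hits += 1
    return hits / trials

for seq in ("RGRRR", "GRGRRR", "GRRRRR"):
    print(seq, round(p_appears(seq), 3))
# RGRRR, which is contained in GRGRRR, comes out as the most likely of the three.
```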

Still, the fact that money was at stake may explain why fewer people opted for the less likely option with the die roll than the 85% who gave the 'wrong' answer about Linda. Gary Charness, Edi Karni and Dan Levin, three economists, investigated whether incentives could also mitigate the original Linda problem. The control group in their experiment was not offered a performance incentive: participants simply received $2 for answering the question about Linda given above. The treatment group was told there was a correct answer, and that anyone who chose it would receive a $4 bonus. They ran the experiment in three modes: with participants working singly, in pairs, and in trios.

The researchers did not manage to replicate Tversky and Kahneman's 85% 'error' rate, but still found that 58% of the 'singles' control group gave the wrong answer. With incentives, however, that proportion dropped to just 33%. And that was not all. When working together, only 48% (pairs) and 26% (trios) chose the 'wrong' answer in the control group. With incentives, those proportions fell to 13% and 10% respectively.

So, the violation of the conjunction rule did not disappear entirely — one third of the participants in the singles mode left $4 on the table by maintaining it was more probable that Linda was a feminist bank teller than that she was a bank teller. The effect of collaboration, on the other hand, is striking.

The problem continues to exercise scientists, though: it remains somewhat mysterious that we seem to violate a relatively simple, basic rule, and thus may make poor decisions. Recently, two philosophers at the University of Oxford, Kevin Dorst and Matt Mandelkern, explored the more general phenomenon of guessing (responding to a question when we are not certain of the answer) and came up with a novel explanation for the conjunction fallacy.

Their hypothetical character is Latif, a young man who has been accepted at all four of the top law schools in the US. We know nothing about his preferences, but earlier similar applicants have chosen according to a given distribution: Yale (38%), Stanford (30%), Harvard (20%) and NYU (12%). What answer would people give to the question of where Latif will end up studying law? Alongside, obviously, accuracy, the researchers identify a second feature of a possible answer: “informativity” (how specific it is). For example, answering “Latif will go to Yale, to Stanford, to Harvard, or to NYU” is 100% accurate, but it is not informative, as we already know he will choose one of these four.

Diagram showing scales with a pie chart on each side, indicating ‘informative’ and ‘accurate’. Accuracy is not everything (image: your correspondent)

Dorst and Mandelkern argue that people make a trade-off between likelihood and informativity. These two facets are in direct competition with each other: a more specific answer is more likely to be wrong. It would, for example, be more informative (but less likely) to say “Latif will go to Yale”, and more likely (but less informative) to say “Latif will go to Yale or to Stanford”. When making a guess, people will seek to optimize the overall value of the answer, which combines accuracy and informativity. It is this that can give rise to a preferred answer that is not the most accurate one, but that makes up for that by being much more informative. (Do read the whole paper to see how they develop the idea further, or check out the authors’ blogpost on this aspect of the paper.)
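To get a feel for how such a trade-off could favour a guess that is not the most accurate one, here is a deliberately crude toy model in Python, applied to the Latif example. The additive score and the weight on informativity are my own illustrative assumptions, not Dorst and Mandelkern's actual formalism (their paper develops the idea much more rigorously):

```python
from itertools import combinations

# Prior distribution over Latif's choice, taken from the article.
prior = {"Yale": 0.38, "Stanford": 0.30, "Harvard": 0.20, "NYU": 0.12}

def value(answer, weight=1.0):
    """Toy score for a guess: its probability of being right (accuracy)
    plus a weighted bonus for how many options it rules out (informativity)."""
    accuracy = sum(prior[school] for school in answer)
    informativity = (len(prior) - len(answer)) / (len(prior) - 1)
    return accuracy + weight * informativity

# Score every non-empty set of schools as a candidate guess.
schools = list(prior)
guesses = [set(c) for r in range(1, 5) for c in combinations(schools, r)]
best = max(guesses, key=value)

print(best, round(value(best), 2))    # {'Yale'}: specific, scores 1.38
print(round(value(set(schools)), 2))  # the 100%-accurate 'one of the four' scores only 1.0
```

In this toy version, a high weight on informativity makes the single guess beat the fully accurate but empty answer; dial the weight down and the safe answer wins instead. The interesting behaviour sits in between, which is the territory the authors explore.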

This insight would seem to be a plausible explanation for what hitherto appeared to be a fallacy: if accuracy is not the only thing that matters, then judging a decision on accuracy alone is missing part of the picture. It is also a beautiful illustration of the immense richness of human thought.

But has the last word on the conjunction fallacy been said with this contribution? I doubt it. What do you think?

Is it more likely (a) that the debate around this controversy will run on, or (b) that the debate will run on and that Dorst and Mandelkern’s trade-off insight is confirmed as a robust explanation?

Originally published at http://koenfucius.wordpress.com on July 31, 2020.

Thanks for reading this article; I hope you enjoyed it. Please do share it far and wide. See all my other articles featuring observations of human behaviour (I publish one every Friday). Thank you!

Written by

Accidental behavioural economist in search of wisdom. Uses insights from (behavioural) economics in organization development. On Twitter as @koenfucius
