You don’t need to be a superforecaster (nor even be making any forecasts) to benefit from two of their key skills
There are many possible reasons why we might engage in thinking. At this precise moment, you may be thinking, "What on earth is his point with a sentence like this?", for example. Or perhaps you are thinking about what you might be doing instead of reading this piece, and evaluating which is preferable. At other times, you may be thinking about more momentous matters, like changing jobs or a new romantic relationship, or about how best to save for your retirement.
A common characteristic across much of our thinking is that we are trying to resolve uncertainty. This is definitely the case when it concerns speculating about what might happen in the future, an activity also known as forecasting. Some people do this for a living: they tell us what the weather will be, how the economy will perform, or who will win the next election. Others do it as part of another day job in which they claim some expertise. Over the past several months, we have had many opportunities to hear forecasts from a wide range of such experts and commentators about the expected number of COVID-19 deaths, the number of hospital admissions, the utilisation rate of intensive care units, or the effects of lockdown measures on the economy or education.
And of course, we do so ourselves. Aside from, privately, sometimes also venturing into making predictions about the weather, the economy, politics or pandemic-related stuff, we may proclaim a view on which football team will end up topping the league at the end of the season, or who will be victorious in Strictly Come Dancing (or its equivalent in your country); we may guess when we will get that promotion at work, how long a newlywed celebrity’s marriage will last, and so on.
Most of us — including experts and commentators — are remarkably poor at it, and we rarely get better over time. But there is one category of people who excel at systematically producing relatively accurate predictions: the so-called superforecasters. The term is associated with the Good Judgment Project, a programme established in 2011 at the University of Pennsylvania by psychologist (and co-author of Superforecasting) Philip Tetlock and colleagues, to research the personality and behavioural traits that predict a person’s aptitude at forecasting. As a participant in a Good Judgment Open tournament earlier this year, I had the chance to experience first-hand how superforecasting works, making my own forecasts regarding the COVID-19 pandemic as it started gaining ground worldwide. (I was very pleased, and also a little proud, to finish in the top 10.)
One way in which superforecasters differ from most other people is that they think in quantified probabilities — a concept Annie Duke describes in detail in Thinking in Bets. Rather than answering the question “How many people will have died of COVID-19 in Japan by 15 July 2020?”, they answer the question “What is the probability that the number of COVID-19 deaths in Japan on 15 July 2020 will be between 800 and 1000?” There are several such ‘bins’ (e.g. less than 500, 500–799, and more than 1000), and for each bin a probability is estimated, so that the total adds up to 100%.
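A binned forecast like this is easy to sketch in code. The sketch below is purely illustrative (the bin labels match the example above, but the probabilities are invented for the purpose); the one hard constraint it demonstrates is that the probabilities across all bins must sum to 100%.

```python
# A hypothetical binned probabilistic forecast for the question
# "How many COVID-19 deaths in Japan by 15 July 2020?"
# The probabilities here are invented for illustration only.
forecast = {
    "fewer than 500": 0.10,
    "500-799": 0.25,
    "800-1000": 0.40,
    "more than 1000": 0.25,
}

# The defining constraint: the bins cover all outcomes,
# so their probabilities must add up to 100%.
total = sum(forecast.values())
assert abs(total - 1.0) < 1e-9

# The forecaster's single most likely bin, for reference.
most_likely = max(forecast, key=forecast.get)
print(most_likely)  # -> 800-1000
```

Note that the forecaster never commits to a single number: the "answer" is the whole distribution, and the most likely bin is just one summary of it.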
This is a very different approach to forecasting from "the number of deaths could be as much as X", or "I expect the number of deaths to be between Y and Z", often with no trace of a due date. Having to quantify the confidence in your estimate compels you, as an aspiring superforecaster, to really consider your answers, and to make your assumptions more explicit.
No commitment to an estimate
But for me, the fundamental significance of forecasting in this manner is that you do not feel a commitment to a particular outcome. When commentators and experts make forecasts, citing just one specific outcome creates such an attachment (even if they specify a range, or a maximum or minimum). Making such a forecast feels much like making an investment in it. And that makes it hard to revise the forecast without experiencing this as a loss, and losing face in others’ (and indeed your own) eyes. It is akin to the sunk cost fallacy: you stick with your forecast because you made that investment. That is a first pitfall.
Superforecasters, who feel no attachment to any one outcome, have no problem taking into account new evidence and updating their estimates accordingly. They are confident that their forecast becomes more accurate precisely by doing so. The temptation to gain attention with a sensationalist forecast, a temptation that lures politicians, commentators and experts alongside the rest of us (I am no stranger to it either, I have to confess), is completely absent. Revising your forecast does not produce the slightest sense that you’re wrong and need to backtrack, let alone that you’re losing face. All you are concerned with is making your new estimate more accurate.
Using hindsight to learn
When the due date arrives, superforecasters are not just interested in how close their latest forecast is to the actual outcome, but also how their estimate evolved towards it over time. Was there a sudden jump? Were they late in taking new evidence into account? Did they overestimate a particular factor? Analysing this allows them to sharpen their skills in collecting, interpreting, and weighing evidence, and in managing the biases they share with all of us.
And here we have a second pitfall ordinary mortals may overlook: hindsight bias. When they see their forecast is likely to be off — something that often becomes apparent well before the due date (if there is one) — they tend to develop explanations (if not excuses) why this is so. If they could have known this or that, if something unexpected had not happened or some unanticipated intervention had not taken place or whatever, they’d still be spot on. This enables them, with the benefit of this new hindsight, to retrospectively tweak their original forecast without having to concede it was wrong, and maintain the conviction they were actually pretty close all along.
If they are lucky and their original forecast turns out close enough, then that result is naturally a vindication of their superior intuition, rather than of the process they followed — “resulting”, Annie Duke calls it in her book. Thus amateur, expert and commentator alike, regardless of whether their forecast was accurate or not, can all claim their insight and perspicacity were spot on, and reassert their credibility. And, unlike the superforecasters, they learn nothing.
These two pitfalls are not limited to forecasting, but apply in any situation where we seek to resolve uncertainty and make a decision. Should we replace our car or keep it for another year? Should we hire the decorator our brother-in-law recommends?
Here too, we have a choice. We can go with a premature conclusion based on gut feel and stick to it, then either give ourselves a pat on the back when we happen to be right, or, with the benefit of hindsight, construct a backstory that explains why we would have made the right choice, were it not for things we could not have known or expected.
Or we could follow the superforecasters’ process. We could systematically consider the options, assign probabilities where appropriate, seek out evidence and update our judgement accordingly. And we could review the process every time we apply it, and learn both from what we got right and what we got wrong.
One of those two approaches gives us thinking superpowers. You know which one.
Originally published at http://koenfucius.wordpress.com on December 11, 2020.