The meaning(lessness) of a number
Numbers can misinform as well as inform, even when they are correct, because they do not (and cannot) carry context that is sometimes crucial
For many years, the product evaluations of consumer magazines like Which? in the UK, and countless equivalents in other countries, have been identifying “Best Buys”: the products that performed best in the tests. But such a binary distinction (a product either is, or is not, a Best Buy) is not necessarily very helpful.
One issue with it is that it doesn’t tell us whether a product that failed to receive the coveted badge only just failed to make the grade, or whether it was mediocre across the board. Another concern is that the boundary between Best and Not-Best is arbitrary, and a further one is that the rating inevitably has to combine many criteria. For, say, a dishwasher, perhaps the energy consumption, the duration of a cycle, the noise level, the ease of loading and unloading, and the capacity would be meaningful yardsticks along which to compare different models. But what relative weight does each of these criteria receive in the overall evaluation? And does this reflect how we personally would weigh up the performance of the appliance?
The Best Buy concept is as popular as ever, but it is now complemented by tables in which the relative performance against the chosen criteria is shown, and by an aggregate score (out of 100). This improvement helps us see whether a product is just below the (still arbitrary) Best Buy threshold or far beneath it. But we still cannot tell whether the relative scores of two products reflect what we find important. If we must have a very quiet dishwasher because we spend a lot of time in the kitchen, but because we’d never fill it up the capacity is of little importance to us, a lower-ranked appliance may well be the Best Buy for us.
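The weighting problem is easy to see with some arithmetic. The sketch below uses entirely made-up scores for two hypothetical dishwashers and two weighting schemes; the point is only that the same per-criterion scores can produce opposite rankings depending on the weights chosen:

```python
# Hypothetical per-criterion scores (out of 100) for two imaginary dishwashers.
scores = {
    "A": {"energy": 90, "cycle_time": 85, "noise": 60, "loading": 80, "capacity": 95},
    "B": {"energy": 80, "cycle_time": 70, "noise": 95, "loading": 75, "capacity": 55},
}

def overall(model, weights):
    """Weighted average of the per-criterion scores (weights sum to 1)."""
    return sum(weights[c] * scores[model][c] for c in weights)

# Equal weights: one possible implicit choice by the reviewer.
equal = {c: 0.2 for c in ["energy", "cycle_time", "noise", "loading", "capacity"]}

# A quiet-kitchen household: noise dominates, capacity barely matters.
quiet = {"energy": 0.15, "cycle_time": 0.15, "noise": 0.5, "loading": 0.15, "capacity": 0.05}

print(overall("A", equal), overall("B", equal))  # A scores higher with equal weights
print(overall("A", quiet), overall("B", quiet))  # B scores higher for the quiet household
```

With equal weights, model A comes out ahead; shift most of the weight to noise and B wins. The aggregate score hides exactly this choice.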
A score out of 100 looks suitable as a measure, but appearances may be deceptive. Its quantitative nature suggests it is a good guide for decision-making, but that is illusory.
Index pointing the wrong way
Such idiosyncratic, opaque numbers are not just a feature of product evaluations. In a recent blogpost, Branko Milanovic, a prominent economist, took a look at a report entitled “Global Health Security Index”, which aims to establish individual countries’ readiness for, and capability of handling, infectious disease outbreaks. Much like a Which? report on lawnmowers or mayonnaise, it establishes categories against which a country’s preparedness is measured. There are six in total: prevention, detection and reporting, rapid response, health system, compliance with international norms, and risk environment. These in turn are constructed from 34 indicators and 85 subindicators. All of this was combined into one GHS index — a score (you guessed it) out of 100.
The report was published in October 2019, just a few months before the COVID-19 pandemic rapidly took hold of the world. Curious about the accuracy of the report, Dr Milanovic undertook to compare the index with the actual performance in handling the pandemic of selected countries. Not unreasonably, he chose to look at the number of COVID-19 deaths per million population.
His observations are quite remarkable. The GHS index appears to end up predicting the inverse of what it set out to do: the top three countries in the report — the US, the UK and the Netherlands, with scores of respectively 83.5, 77.9 and 75.6 — are among the worst when it comes to COVID deaths relative to population (10th, 4th and 38th from bottom, with nearly 1400, more than 1600, and more than 800 deaths per million). Conversely, countries with a low GHS index did much better in practice. Vietnam, for example, is 4th best in the list of COVID-19 deaths with 0.36 deaths per million, but ranked only 50th in the GHS index chart. Thailand and Sweden appear next to each other in 6th and 7th place in the GHS index ranking, but the latter recorded more than 1000 times more deaths per million than the former. Belgium, 19th in the GHS list with a score of 61.0, is second worst (behind San Marino) in the COVID deaths ranking with more than 1800 deaths per million.
The GHS index clearly fails to live up to its stated purpose. And the discrepancy between prediction and reality not only illustrates how such a single number, aggregated from six categories, 34 indicators and 85 subindicators, can be meaningless. It also shows that, when such a number is patently wrong, we are unable to tell why this is the case.
Efficacy is not everything (and is hardly anything)
As we are talking about COVID-19, there is another example of a single number — again presented as a score out of 100 — that is worth mentioning: the efficacy of a vaccine. The media happily report and discuss this headline number, complete with decimal points (where available). Never mind that few people actually understand what efficacy means (it is the percentage reduction in disease incidence in a vaccinated group, compared to an unvaccinated group under optimal conditions, e.g., a randomized controlled trial — and not, for example, the percentage of people that did not get ill after receiving the vaccine).
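That definition is simple arithmetic: efficacy is the relative reduction in the attack rate between the two trial arms. A minimal sketch, using entirely hypothetical trial numbers:

```python
def efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """Vaccine efficacy: the percentage reduction in disease incidence
    in the vaccinated group relative to the unvaccinated group."""
    arv = cases_vax / n_vax          # attack rate, vaccinated arm
    aru = cases_placebo / n_placebo  # attack rate, placebo arm
    return (1 - arv / aru) * 100

# Hypothetical trial: 10 cases among 20,000 vaccinated participants,
# 100 cases among 20,000 in the placebo arm.
print(round(efficacy(10, 20_000, 100, 20_000), 1))  # 90.0
```

Note what 90% efficacy does not mean: it is not “90% of vaccinated people are protected”, nor “10% of vaccinated people got ill” — in this made-up trial, only 0.05% of the vaccinated group fell ill at all.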
The issue is that this number says nothing about the effectiveness of a vaccine — its ability to influence outcomes in the real world. Say vaccine A has an efficacy of 70%, while vaccine B has an efficacy of 90%. Uninformed intuition would suggest that, given a choice, we should opt for vaccine B, even to the point of rejecting vaccine A. But we don’t know what happened to the people who, despite being inoculated with either vaccine, still develop the disease. How ill did they get? Did they need to be taken to hospital? Did they need intensive care? And most importantly, did they live or die? Everyone who got ill after receiving vaccine A may well have had very mild symptoms, while some of those who got the disease after receiving vaccine B may have required hospitalization or have died.
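The same relative-reduction arithmetic can be applied to severe cases rather than all cases, and doing so with entirely hypothetical numbers shows how the headline figure can invert the real picture: a vaccine with lower overall efficacy can outperform a higher-efficacy rival on the outcome that matters most.

```python
def relative_reduction(cases_vax, cases_placebo):
    """Percentage reduction versus placebo, assuming equal-sized trial arms."""
    return (1 - cases_vax / cases_placebo) * 100

# Entirely hypothetical trial outcomes, equal-sized arms.
placebo   = {"cases": 200, "severe": 20}
vaccine_a = {"cases": 60,  "severe": 0}   # lower headline efficacy, all cases mild
vaccine_b = {"cases": 20,  "severe": 3}   # higher headline efficacy, some severe cases

print(round(relative_reduction(vaccine_a["cases"], placebo["cases"]), 1))    # 70.0
print(round(relative_reduction(vaccine_b["cases"], placebo["cases"]), 1))    # 90.0
print(round(relative_reduction(vaccine_a["severe"], placebo["severe"]), 1))  # 100.0
print(round(relative_reduction(vaccine_b["severe"], placebo["severe"]), 1))  # 85.0
```

In this invented scenario, the “inferior” 70%-efficacy vaccine prevents every severe case, while the 90%-efficacy vaccine does not. The single headline number cannot reveal which situation we are in.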
The efficacy figure says exactly nothing about this. The good news is that the vaccines that have been widely approved so far all perform excellently with very few or no severe COVID-19 cases in the treatment group, and no deaths from either the virus or the vaccine. This means that, no matter which vaccine you receive, it is highly unlikely you will get seriously ill and will have to go to hospital, and even less likely that you will die.
The bad news is that the inappropriate focus on this one headline figure — the efficacy — by the media, by politicians and their advisers, and by the general public, is influencing key decisions, and not for the better. The suggestion that efficacy marks a big difference in what the vaccines really mean in practice, and that a vaccine with a lower efficacy is therefore a lot worse than one with a higher efficacy, is feeding vaccine hesitancy: who wants to receive an ‘inferior’ vaccine? And that is influencing political leaders and policy makers who don’t want to be seen to be pushing ‘inferior’ vaccines. The inevitable result is that it will take longer to get the COVID-19 pandemic under control, and that more people will die.
Sometimes, numbers are not just meaningless. They can misdirect us and guide us towards poor decisions. It is up to us to question how significant a number truly is, especially when it is bandied around as an authoritative indication of something important.
Originally published at http://koenfucius.wordpress.com on February 5, 2021.