Dr Rajiv Chandegra
07.03.2026 · Systems, Policy

Part of series: Incerto · Part 2 of 6


The Average is a Lie

In medical school, they teach you to think in averages. Normal ranges. Reference intervals. The "average patient." You learn to interpret blood results against population means, to spot deviations from the norm, and to feel reassured when everything falls "within normal limits."

I carried this training into my early clinical rotations like a talisman. Numbers in the normal range meant safety. Deviation meant danger. The framework was clean, logical, and deeply comforting.

Then I met Mrs Kaur.

She was seventy-three, brought in by her daughter who said she "just wasn't right." No chest pain. No shortness of breath. Blood pressure: normal. Heart rate: normal. Bloods came back textbook. Lactate: fine. White cells: fine. CRP: barely raised. Every metric we had said this woman was well. I wrote up my notes, satisfied that there was nothing to find. The registrar walked past, paused at the bedside, watched Mrs Kaur for perhaps thirty seconds, and said: "Get a CT angio. Now." I remember thinking it was a waste of scanner time. The CT revealed a dissecting thoracic aortic aneurysm. She was in theatre within the hour.

The average told me she was fine. She was dying. Every single metric was "within normal limits," and every single metric was irrelevant to the thing that was actually killing her. From that day, I learned to distrust the average and pay attention to the tails. The extremes are where the consequential events live, and our obsession with the centre of the distribution blinds us to exactly the outcomes that matter most.


Two worlds

Nassim Nicholas Taleb draws a line through reality and divides it into two fundamentally different domains.[1] He calls them Mediocristan and Extremistan.

In Mediocristan, the average is meaningful. Extremes are bounded. No single observation can dramatically change the aggregate. If you measure the height of a thousand people and then add the tallest person who ever lived, the average barely moves. The bell curve describes this world accurately because the underlying processes are constrained by physical or biological limits.

In Extremistan, the average is a fiction. A single observation can dominate the total. If you measure the wealth of a thousand people and then add Jeff Bezos to the room, he alone accounts for more than everyone else combined. The bell curve does not apply here because the underlying processes have no natural ceiling.
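
The difference is easy to see in a few lines of simulation. The sketch below uses made-up parameters rather than real data: a thousand Gaussian "heights", a thousand Pareto-distributed "fortunes", and one extreme addition to each.

```python
import random

random.seed(42)

# Mediocristan: heights in cm, roughly Gaussian and physically bounded.
heights = [random.gauss(170, 10) for _ in range(1000)]
before = sum(heights) / len(heights)
heights.append(272)  # the tallest person ever reliably recorded (~272 cm)
after = sum(heights) / len(heights)
print(f"Mean height: {before:.1f} cm -> {after:.1f} cm (barely moves)")

# Extremistan: wealth drawn from a fat-tailed Pareto distribution
# (illustrative parameters), then one extreme individual added.
wealth = [50_000 * random.paretovariate(1.2) for _ in range(1000)]
before = sum(wealth) / len(wealth)
wealth.append(200_000_000_000)  # one centibillionaire joins the room
after = sum(wealth) / len(wealth)
print(f"Mean wealth: {before:,.0f} -> {after:,.0f} (one observation dominates)")
print(f"Share of the total held by that one person: {wealth[-1] / sum(wealth):.0%}")
```

On a typical run, the tallest human who ever lived shifts the mean height by a fraction of a centimetre, while the single added fortune accounts for well over 99% of the total wealth in the room.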

The distinction matters because it determines whether your models, your plans, and your institutions will work or catastrophically fail.

Feature | Mediocristan | Extremistan
Example | Height, weight, blood pressure | Wealth, pandemic deaths, book sales
Distribution | Gaussian (bell curve) | Power law (fat-tailed)
Effect of extremes | Negligible | Dominant
Average is | Meaningful and stable | Misleading and unstable
Predictability | High within ranges | Low for individual events
Planning approach | Optimise around the mean | Prepare for the extremes
Healthcare example | Daily calorie intake | Hospital bed demand during a pandemic

The tragedy is that most of our institutions were designed for Mediocristan. They assume the future will resemble the average of the past. When it does not, and in consequential domains it reliably does not, they break.


The turkey problem

Taleb's most vivid illustration of this blindness is the turkey problem.[2]

A turkey is fed every day by a farmer. Each day, the turkey's confidence in the farmer's benevolence grows. Day after day, the evidence accumulates: this farmer cares for me, feeds me, shelters me. By day 999, the turkey has never been more certain of anything. On day 1,000, just before Thanksgiving, the turkey's model of reality is revised. Permanently.

The turkey's error was not stupidity. The turkey was doing exactly what most of our risk models do: extrapolating from past data to predict the future. Each day of feeding was a data point. The trend was clear, consistent, and statistically robust. The confidence interval was tight.

The problem is that the very data that built the turkey's confidence was also building its vulnerability. The absence of a catastrophe was not evidence that catastrophe was impossible. It was merely evidence that it had not happened yet.
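
You can write the turkey's reasoning down in a few lines. The sketch below is purely illustrative: 999 identical observations, a naive forecast, and no variable anywhere that represents the farmer's intentions.

```python
# The turkey's dataset: 1 = fed today, observed every day for 999 days.
feedings = [1] * 999

# Naive extrapolation: the observed frequency of being fed.
naive = sum(feedings) / len(feedings)
print(f"Naive estimate of being fed on day 1,000: {naive:.0%}")

# Even a cautious turkey using Laplace's rule of succession,
# (successes + 1) / (trials + 2), is still ~99.9% confident.
laplace = (sum(feedings) + 1) / (len(feedings) + 2)
print(f"Laplace-smoothed estimate: {laplace:.2%}")

# Nothing in either number represents the one thing that matters:
# the process generating the data can switch regime without warning.
```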

Healthcare planning fell into precisely this trap. For a decade before COVID-19, the NHS experienced manageable winters. Demand was predictable within narrow bands. Bed occupancy ran at 90-95%, leaving minimal surge capacity. Models showed this was efficient. And it was, right up until the moment it was not.

The pandemic was not a surprise in the sense that no one imagined it. Epidemiologists had been warning about exactly this scenario for years. It was a surprise in the sense that the systems we built behaved as though it could not happen. We planned for the average winter. We got an extreme one. The turkey got Thanksgiving.

"The central argument of this book concerns our blindness with respect to randomness, particularly the large deviations."[3]


Why healthcare lives in Extremistan

If you spend time in clinical medicine, you begin to notice a pattern. The things that matter most follow fat-tailed distributions, not bell curves.

Pandemic deaths follow power laws. COVID-19 did not distribute its mortality evenly. A small number of regions, age groups, and comorbidity profiles accounted for a vastly disproportionate share of deaths. Planning based on average mortality rates across populations was worse than useless; it actively misallocated resources.

Hospital bed demand is fat-tailed. Most days, demand is predictable. Then a heatwave, a norovirus outbreak, or a respiratory virus season arrives and demand spikes far beyond anything the historical average would suggest. The NHS operates on average-based capacity planning, which is why "winter crises" are not anomalies but predictable consequences of building a system around the mean.

Drug side effects are fat-tailed. Most patients tolerate a medication well. A small number experience severe adverse reactions. The average side-effect profile, the one reported in clinical trials, conceals the extreme tail where the serious harm concentrates. Rare adverse events, which trials powered to detect average effects are seldom large enough to capture, collectively cause enormous suffering.

Rare diseases collectively affect millions. Each individual rare disease is, well, rare. But there are over 7,000 known rare diseases, and collectively they affect roughly 300 million people worldwide.[4] The "average" patient does not have a rare disease. But the tail of the distribution, the thousands of conditions each affecting a small number of people, is where a vast amount of unmet medical need resides.

Healthcare is not a Mediocristan domain with occasional Extremistan surprises. It is fundamentally an Extremistan domain that we insist on treating as Mediocristan.


The problem with models

"The problem is not that we cannot compute, it is that we do not know what to compute."[5]

Standard risk models assume Gaussian distributions because the mathematics is tractable. Value at Risk models in finance, capacity planning models in healthcare, actuarial tables in insurance: all of them rely on the bell curve assumption because it makes the calculations manageable.

The problem is not computational. We have the tools to model fat-tailed distributions. The problem is institutional. Organisations are structured around the assumption that the future will resemble the recent past, that extreme events are so rare they can be ignored, and that optimising for the average is the same as preparing for reality.

NHS bed management provides a stark example. Trusts plan capacity based on historical average occupancy. A target of 85% occupancy sounds prudent; it leaves a 15% buffer. But when demand follows a fat-tailed distribution, the spikes exceed that buffer regularly. The "once in a generation" surge happens every few years. The model says it should not. Reality does not consult the model.
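
A rough simulation shows how badly the buffer logic fails. The numbers below are invented for illustration, not NHS figures: both models share the same average demand, but in one the planner's Gaussian assumption holds, while in the other a small fraction of days carry a multiplicative surge.

```python
import random

random.seed(1)
DAYS = 10 * 365        # ten years of daily bed demand
MEAN_DEMAND = 850      # beds needed on an average day (illustrative)
CAPACITY = 1000        # i.e. ~85% average occupancy, a 15% buffer

def breach_rate(demands):
    """Fraction of days on which demand exceeds capacity."""
    return sum(d > CAPACITY for d in demands) / len(demands)

# The planner's model: Gaussian noise around the mean.
gaussian = [random.gauss(MEAN_DEMAND, 40) for _ in range(DAYS)]

# Stylised reality: the same average day, but roughly 2% of days carry a
# multiplicative surge (an outbreak, a heatwave, a bad flu season).
fat_tailed = [
    random.gauss(MEAN_DEMAND, 40)
    * (random.paretovariate(3) if random.random() < 0.02 else 1.0)
    for _ in range(DAYS)
]

print(f"Days over capacity, Gaussian model:   {breach_rate(gaussian):.2%}")
print(f"Days over capacity, fat-tailed model: {breach_rate(fat_tailed):.2%}")
```

The two series have nearly identical averages, which is precisely why the average is no guide to how often the buffer is breached: the Gaussian model predicts essentially zero days over capacity, while the fat-tailed version produces dozens across the decade.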

Staffing models make the same error. Workforce planning based on average patient-to-nurse ratios works on a typical Tuesday. It collapses on a Friday night in winter when three ambulances arrive simultaneously with major trauma cases. The average ratio is irrelevant at the moment it matters most.

The deeper issue is that optimising for the average actively degrades your ability to handle extremes. Every efficiency gain that removes "unnecessary" capacity, every cost reduction that eliminates "redundant" staff, every streamlining that removes "excess" inventory is a bet that the future will stay within normal limits. It is a bet against the tails. And in Extremistan, the tails always win eventually.


What to do about it

You cannot predict Black Swans. That is the whole point. If you could predict them, they would not be Black Swans. But you can build systems that survive them, and this connects directly to the concept of antifragility from the first essay in this series.

Prepare for extremes, not averages. Design systems with surge capacity built in. Yes, this means accepting apparent "waste" during normal operations. That spare bed, that extra nurse on the rota, that stockpile of PPE gathering dust: these are not inefficiencies. They are insurance against fat tails. The cost of maintaining them is trivial compared to the cost of not having them when you need them.

Over-provision rather than optimise. Optimisation is a Mediocristan strategy. In Extremistan, the optimal system is fragile by definition, because it has removed every buffer, every redundancy, every margin that might absorb a shock. The systems paradigm teaches us that slack in a system is not waste; it is what allows the system to adapt.

Stress-test against the "impossible." Run your scenarios not against historical averages but against outcomes your models say cannot happen. What if demand doubles overnight? What if your supply chain fails simultaneously across three inputs? What if the "hundred-year event" happens twice in a decade? If your system cannot survive these scenarios, it is not robust. It is merely lucky.
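
In practice this can start as something as simple as a toy harness that runs the plan against scenarios the historical averages say cannot happen. Everything in the sketch below, the names, the numbers, the pass/fail rule, is hypothetical.

```python
# Hypothetical baseline: a system sized for the average day.
BASELINE_DEMAND = 850          # beds needed on a typical day
CAPACITY = 1000                # beds available with full staff and supplies

# Scenarios the historical average says "cannot happen".
SCENARIOS = {
    "typical Tuesday":            {},
    "demand doubles overnight":   {"demand": 2.0},
    "a third of staff off sick":  {"staff": 0.66},
    "two key suppliers fail":     {"supply": 0.5},
    "surge plus staff sickness":  {"demand": 1.5, "staff": 0.75},
}

def survives(shock):
    """Does effective capacity still cover demand under this shock?"""
    demand = BASELINE_DEMAND * shock.get("demand", 1.0)
    capacity = CAPACITY * shock.get("staff", 1.0) * shock.get("supply", 1.0)
    return capacity >= demand

for name, shock in SCENARIOS.items():
    print(f"{name:28s} -> {'survives' if survives(shock) else 'FAILS'}")
```

A system sized for the average survives only the first row. That is the tell: it is not robust, merely lucky.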

Use via negativa. Rather than trying to predict what will go wrong, remove the things that make you fragile. Eliminate single points of failure. Reduce concentration risk. Build in optionality so that when the unexpected arrives, you have room to manoeuvre. Subtraction is more reliable than prediction.

Accept that you will be wrong. The goal is not to be right about what will happen. The goal is to be positioned so that being wrong does not destroy you. This is the fundamental shift from prediction-based planning to trade-offs thinking: acknowledging that the future is uncertain and building accordingly.

"In Extremistan, inequalities are such that one single observation can disproportionately impact the aggregate."[6]


The lie we keep telling

The average is not merely inaccurate. It is actively dangerous when applied to fat-tailed domains. It gives us false confidence. It encourages us to optimise away the very buffers that protect us. It makes the catastrophe, when it arrives, feel like a surprise rather than a consequence.

Mrs Kaur's blood results were average. She was dying. The NHS's winter planning was average. COVID broke it. The turkey's 999 days of data were average. Day 1,000 was not.

The world we built, our institutions, our models, our plans, was designed for the centre of the distribution. But we live in the tails. The sooner we accept that, the sooner we can stop being the turkey.

