How to Be Wrong and Still Win
It was half past two in the morning during an acute medical take in my second year of training. A woman in her seventies had arrived by ambulance: fever of 39.4, rigors, blood pressure dropping, and a confusion that her daughter said was nothing like her. She had been well that morning. Now she was barely responsive.
I stood at the foot of the bed running through the differential in my head. It could be a urinary tract infection. Elderly women get them all the time. Confusion from a UTI in an older patient is practically a cliché of acute medicine. Or it could be full-blown sepsis, the kind that kills within hours if you hesitate. The blood results were not back yet. The urine dip was equivocal. I did not know which it was.
The medical registrar, a woman called Priya who had the calm authority of someone who had seen hundreds of nights like this, walked over and looked at the obs chart for about three seconds. "Start IV co-amoxiclav and gentamicin now," she said. "Two litres of saline. Blood cultures before the antibiotics go up. Catheterise her and send a CSU. We're treating for sepsis until we know it isn't."
I hesitated. "But it might just be a UTI. Should we wait for the bloods?"
She looked at me properly then. "Think about it this way. If it's a UTI and we give her broad-spectrum antibiotics for twenty-four hours, what's the cost? She gets antibiotics she didn't strictly need. Minimal harm. Now, if it's sepsis and we wait two hours for bloods to confirm it, what's the cost?" She paused. "She could be dead by the time you get the result."
I didn't have a name for what she was teaching me that night. I learned it years later from Taleb. It was the barbell strategy: simultaneously assume the worst and take protective action, while investigating the benign explanation. The asymmetry of the payoff tells you what to do, even when you don't know the answer. Especially when you don't know the answer.
The prediction trap
We are obsessed with prediction. Five-year business plans. Economic forecasts. Strategic roadmaps with quarterly milestones. Revenue projections. Market sizing exercises with three decimal places of false precision.
The problem is not that prediction is difficult. The problem is that prediction is systematically unreliable in complex systems. Weather forecasts degrade rapidly beyond five days. Economic models failed to anticipate 2008. Pandemic preparedness plans assumed a flu, not a coronavirus. The record of expert prediction in complex domains is, to put it charitably, poor.[1]
Taleb's insight is deceptively simple: stop trying to predict and start positioning. The question is not "what will happen?" It is "how do I benefit regardless of what happens?" This is a fundamentally different orientation. It shifts the focus from knowledge (which is unreliable) to structure (which you can control).
Priya was not predicting whether the patient had sepsis. She was positioning for asymmetric payoffs. The cost of treating for sepsis unnecessarily was trivial. The cost of wrongly assuming a UTI was catastrophic. The asymmetry told her everything she needed to know.
Optionality explained
An option is the right, but not the obligation, to do something. In finance, options have a specific technical meaning. But Taleb extends the concept far beyond finance into a general principle for navigating uncertainty.
Options are valuable precisely because they have capped downside and open-ended upside. You know the maximum you can lose (the cost of acquiring the option). You do not know the maximum you can gain.
Consider the simplest example. Someone gives you a free ticket to a concert by a band you have never heard of. If they are brilliant, you have discovered something you love. If they are awful, you leave after twenty minutes. The downside is capped at twenty minutes of your evening. The upside is open-ended. You do not need to predict whether the concert will be good. The structure of the payoff does the work for you.
"If you 'have optionality,' you don't have much need for what is commonly called intelligence, knowledge, insight, skills, and these complicated things that take place in our brain cells."
This sounds almost offensive to anyone who values expertise. But Taleb is not arguing against intelligence. He is arguing that in environments of deep uncertainty, positioning matters more than prediction. The person with many cheap options will outperform the person with one brilliant forecast, because the forecaster needs to be right, and the option-holder does not.[2]
Convexity: why being wrong can still work
The deeper principle beneath optionality is convexity. If your payoff function is convex, you gain more from being right than you lose from being wrong. This means you can be wrong most of the time and still come out ahead.
Tinkering and experimentation work because each experiment is a cheap option. Most experiments fail. That is fine. The ones that succeed pay for all the failures many times over. The mathematics are simple: if you run ten experiments, each costing one unit, and nine fail completely, but the tenth returns fifty units, you have a net gain of forty. You were wrong 90% of the time. You still won.
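The arithmetic above can be sketched as a short simulation. All the numbers (one unit per experiment, a 10% success rate, a 50-unit payoff) come from the example in the text; the function name and structure are my own illustration.

```python
import random

def run_portfolio(n_experiments=10, cost=1.0, payoff=50.0, p_success=0.1, seed=0):
    """Net result of a portfolio of cheap experiments: capped downside, open upside."""
    rng = random.Random(seed)
    net = 0.0
    for _ in range(n_experiments):
        net -= cost                    # the most any one experiment can lose you
        if rng.random() < p_success:
            net += payoff              # a single hit repays many failures
    return net

# The deterministic case from the text: ten units spent, one 50-unit success.
print(10 * -1.0 + 50.0)  # → 40.0

# Averaged over many random runs, the portfolio is still well ahead,
# even though roughly 90% of individual experiments fail.
avg = sum(run_portfolio(seed=s) for s in range(5_000)) / 5_000
print(round(avg, 1))
```

The point the simulation makes is structural, not numerical: the loss per experiment is bounded by its cost, so the rare success dominates the average.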
The concave payoff is the mirror image, and it is lethal. Small, steady gains punctuated by catastrophic loss. This is the profile of selling insurance without reserves, of running a hospital at 99% bed occupancy, of building a business model that works perfectly until one assumption breaks. You look clever right up until the moment you are ruined.
The practical question for any decision is: what shape is my payoff? If the shape is convex, act. If concave, stop. You do not need to predict the outcome. The geometry tells you enough.[3]
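The "shape of the payoff" can be made concrete with a toy comparison, entirely my construction rather than Taleb's notation: a convex exposure caps losses and leaves gains open, a concave exposure does the reverse, and the same volatile environment treats the two very differently.

```python
import random

rng = random.Random(42)
# A volatile environment: symmetric shocks with mean roughly zero.
shocks = [rng.gauss(0, 3) for _ in range(100_000)]

# Convex exposure: losses capped at -1 (the cost of the option), gains uncapped.
convex = sum(max(x, -1.0) for x in shocks) / len(shocks)

# Concave exposure: gains capped at +1, losses uncapped
# (the profile of selling insurance without reserves).
concave = sum(min(x, 1.0) for x in shocks) / len(shocks)

print(f"mean shock     {sum(shocks) / len(shocks):+.3f}")  # roughly zero
print(f"convex payoff  {convex:+.3f}")   # positive: volatility helps
print(f"concave payoff {concave:+.3f}")  # negative: volatility hurts
```

The environment itself has no drift; only the geometry of the exposure differs, and that alone determines who gains from the turbulence.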
The barbell strategy
The barbell strategy is Taleb's prescription for operationalising optionality. The name comes from its shape: weight at both extremes, nothing in the middle.
On one end: extreme safety. Protect yourself absolutely against catastrophic downside. On the other end: extreme experimentation. Take many small, aggressive bets where the upside is uncapped. In the middle: nothing. The middle is where you get moderate returns for moderate risk. It sounds sensible. It is the most dangerous place to be.
"The strategy is, then, to tinker as much as possible and try to collect as many Black Swan opportunities as you can."
The moderate middle feels comfortable because it avoids extremes. But it also avoids the protective floor that the safe end provides and the explosive upside that the experimental end offers. You end up with neither safety nor opportunity. You get the illusion of balance.[4]
The barbell is not reckless. It is the opposite of reckless. The safe end guarantees survival. The experimental end provides exposure to large gains. Together, they create a structure where you cannot be ruined, but you can be transformed.
Optionality in healthcare innovation
This is where it gets personal. I have spent years watching healthcare innovation follow the moderate middle path: moderate investment, moderate ambition, moderate risk. This is precisely wrong.
The balanced, cautious approach to healthcare innovation produces a predictable result: products that are too timid to change anything, launched too slowly to matter, priced too conservatively to sustain the company. The moderate middle in healthcare innovation is a graveyard of reasonable ideas that never reached anyone.
The barbell applied to healthcare innovation looks different.
| Safe end (non-negotiable) | Experimental end (actively encouraged) |
|---|---|
| Clinical evidence must be rigorous | Business models can be wild |
| Patient safety is absolute | Market entry approaches should be diverse |
| Data security is uncompromising | Product iterations should be rapid and cheap |
| Regulatory compliance is complete | Partnerships can be unconventional |
| Clinical governance is robust | Delivery mechanisms can be experimental |
Be fanatically rigorous about clinical safety. This is your floor. It is not up for debate, not open to "moving fast and breaking things," not subject to the pressures of a funding round. This is the safe end of the barbell, and it must be inviolate. As I wrote in Move Fast and Break Things? Not in Healthcare, the startup instinct to iterate recklessly has no place in clinical safety.
Then be wildly experimental about everything else. Try ten business models. Enter three markets simultaneously with different approaches. Build partnerships that would make a conventional strategist uncomfortable. Run cheap experiments constantly. Most will fail. The ones that work will pay for everything.
This is the translation problem in practice. The companies that successfully cross the India-UK corridor are not the ones with the most careful plans. They are the ones with the most options: multiple market approaches, multiple partnership models, multiple regulatory strategies. They are fanatically safe on the clinical end and fanatically experimental on the commercial end.
Tinkering over planning
Taleb argues that most great discoveries came from tinkering, not from grand plans. Penicillin was discovered because Fleming noticed mould killing bacteria on a petri dish he had left out by accident. X-rays were discovered during experiments with cathode rays. The internet began as a military communications project with no commercial intent. Aspirin, one of the most widely used drugs in history, was derived from folk medicine: willow bark tea.
"The key is that your assessment doesn't need to be correct, only your payoff."
The lesson is not that planning is useless. The lesson is that planning is overvalued in environments of deep uncertainty. When you cannot predict what will work, the rational strategy is to create many cheap experiments and let reality tell you. This is not the absence of strategy. It is a strategy specifically designed for uncertainty.[5]
In healthcare innovation, the implication is clear: create many cheap experiments, not one expensive plan. Build a prototype in a week and test it with five clinicians, rather than spending six months on a business case. Launch a pilot in one clinic before designing a national rollout. Try three pricing models simultaneously rather than agonising over the "right" one.
Each experiment is an option. Each option has capped downside (the cost of running it) and open-ended upside (the possibility that it reveals something transformative). You do not need to predict which experiment will succeed. You need to ensure that the structure of your portfolio is convex: many small bets, any one of which could change everything.
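To put a number on the portfolio framing, keep the hypothetical odds from the ten-experiments example earlier (a 10% success rate per experiment, independent trials): the chance that a portfolio of ten contains at least one success is already about 65%.

```python
# Hypothetical odds, matching the earlier example: each experiment
# succeeds independently with probability 0.1.
p_success = 0.1
n_experiments = 10

# Probability that the portfolio contains at least one success.
p_any = 1 - (1 - p_success) ** n_experiments
print(f"P(at least one success in {n_experiments}) = {p_any:.2f}")  # → 0.65
```

No single experiment is likely to work; the portfolio as a whole very plausibly will. That is the convexity doing the forecasting for you.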
The asymmetry that matters
I think about Priya often. Not because she was dramatic or inspiring in the way that medical television demands. She was quiet, efficient, and slightly impatient with my hesitation. But she understood something that took me years to articulate.
She was not smarter than me about whether that patient had sepsis. She might have been wrong. She probably was wrong about plenty of patients over her career. But she had arranged her decision-making so that being wrong was cheap and being right was lifesaving. The shape of her payoff made her accuracy almost irrelevant.
That is the deepest lesson of optionality. It is not about being right. It is about being positioned so that rightness pays vastly more than wrongness costs. It is about building a life, a career, an organisation, a system where the geometry of your decisions works in your favour, regardless of what the future holds.
You do not need to predict the future. You need to structure your choices so the future works for you either way.
Related reading:
- Move Fast and Break Things? Not in Healthcare - Why startup velocity and healthcare caution create productive tension
- The Gateway: The Translation Problem - Why healthcare innovation rarely survives the journey between contexts
- Incerto: The Average is a Lie - Why our institutions were built for a bell curve world that does not exist