Rodin's The Thinker thinking about various alternative realities in which he theoretically could have existed.

How Counterfactual Thinking Has Shaped Our World

The concept of causality depends on counterfactual thinking. Counterfactuals, which are essentially alternative realities, don’t exist outside our imaginations, and arguably cause us more trouble than they’re worth. But the more humanity moulds the world to its needs and standardises behaviour, the more traction causal thinking may gain as we get better at verifying counterfactual statements by emulating alternative realities.

Judea Pearl’s thought-provoking The Book of Why posits that imagined alternative realities – counterfactuals – are central to our understanding of causality. If you ask “Why did Y happen?” your answer might be along the lines of “If X hadn’t happened, then Y wouldn’t have either”. You can’t make this statement without reference to an imagined world – a world that branches off from reality at the moment you make the imaginary change, then continues on an unknown, and unknowable, trajectory.
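To make that branching concrete, here is a toy structural model in Python – my own illustration, not Pearl’s notation – in which the counterfactual is simply the same mechanism re-run with one input forced to a different value:

```python
# Toy causal mechanism, invented purely for illustration.
def grass_is_wet(rained, sprinkler_on):
    """The 'mechanism' linking causes to the outcome."""
    return rained or sprinkler_on

# The world as it actually unfolded.
factual = grass_is_wet(rained=True, sprinkler_on=False)           # True

# The imagined world that branches off at one point: everything else
# stays the same, but we force 'rained' to False.
counterfactual = grass_is_wet(rained=False, sprinkler_on=False)   # False

print(f"Grass wet in the actual world: {factual}")
print(f"Grass wet had it not rained:   {counterfactual}")
# "If it hadn't rained, the grass wouldn't be wet" - a claim that only makes
# sense by running the same mechanism in an imagined, branched-off world.
```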

Can we really predict alternative futures?

If you asked most people whether it’s possible to predict how alternative worlds would have unfolded, I suspect they’d agree that it’s not. We can’t predict our own future, so why should we believe we can predict an alternative future? Yet every day, we intuitively rely on counterfactuals to make more or less plausible statements: “If you had taken the rubbish out, the kitchen wouldn’t smell so bad,” or “If Christianity hadn’t existed, then liberal values wouldn’t have dominated the world.”

We have lots of experience of situations where someone has left the rubbish in a kitchen for too long, and it doesn’t take a particularly high level of abstraction to compare the current situation with many others that are similar (but not the same). We can put together a pretty good collage from scraps of similar experience, then compare our counterfactual statement to this. But can we really compare our other counterfactual – in which Christianity didn’t emerge – with anything? We’d need to predict two millennia of alternative future.

Every day, we make perhaps hundreds or thousands of gratuitous causal statements – blame, post-rationalisation, speculation, conjecture, bald assertions, claims of credit, denials of guilt. Add to that the tricks our minds play on us in the form of anxieties, unnecessary regret, jealousy and so forth. Are even 0.1% of these verified or verifiable? There’s probably a social element in determining which counterfactuals are deemed plausible and which are beyond the fringe, which helps us uphold our pretence of being logical, fact-based thinkers.

Somehow, we hold onto the idea that causation is a law of nature. Paradoxically, we can’t imagine a world in which our imagined alternative worlds are illusions, even though that is obviously the world we live in.

We’re changing our world to make it more amenable to causal thinking

Yet despite the exceedingly high proportion of duff counterfactuals, in recent centuries we seem to be getting just a little better at applying counterfactual, and thus causal, thinking. This isn’t because our brains are getting better at constructing worlds that accurately predict real or imagined futures; it’s because today’s world is better suited to causal thinking.

If we return to the example above, it seemed patently obvious that not taking the rubbish out would cause the kitchen to smell. But categories such as “kitchen” and “rubbish” would have meant nothing to our ancestors thousands of years ago: kitchens and refuse collection didn’t exist. Not merely in the sense that they hadn’t been invented, but that people’s habits weren’t standardised enough to warrant the categories “kitchen” and “rubbish”. Likewise, the basic categories “inside” and “outside” might not always have been clear cut, but instead formed a multidimensional space along axes of sheltered/not sheltered, warm/not warm, dry/not dry and safe/unsafe. Even if you were to go back in time and conduct randomised controlled trials among early Homo sapiens, how would you make comparisons between large groups? People’s lifestyles would have been much more varied and unpredictable, and much less categorisable, than they are today. They would have moved more often, existed in smaller groups with a wider range of cultures and traditions, had less predictable diets, suffered from different illnesses from one another, spoken a wider variety of different languages, and so on.

Today, despite cultural differences, billions of people across the world have first-hand experience of something that falls within the category of “kitchen” and something else that warrants the designation “rubbish”, and can testify that leaving one inside the other for too long makes an almighty smell that we almost unanimously consider unpleasant.

This principle applies throughout our lives. With the growth of cities, roads and transport networks, we gain many freedoms, but our behaviour becomes more easily categorisable and thus lends itself better to causal inference. I can travel much further than ever before, but it’s easier to gather data on, and describe, traffic flows between a limited number of transport hubs than to draw conclusions about scattered handfuls of meandering hunter-gatherers. Indeed, if you pay attention, you’ll notice that the best examples of causality are those involving human-made devices or phenomena: turning on a light switch or tap, a firing squad, the effect of weather on ice cream sales.

The first randomised controlled trials

A great example of counterfactual thinking, which Pearl cites, is Dr John Snow’s hunch that cholera was spread through dirty water, not “miasma”. Snow painstakingly collected data to show that his imagined reality – a world in which people wouldn’t have contracted cholera if they hadn’t drunk sullied water – was closer to reality than the conventional wisdom, which assumed people wouldn’t have got cholera if they hadn’t breathed in miasma. He gathered data on two groups of London residents who were identical except that they drew their water from two different water companies. One company sourced its water from a point in the River Thames downstream of a sewage outlet, the other did so upstream of the effluent.

At first sight, this looks like an advance in methodology only: in effect, Dr Snow had stumbled by sheer luck upon an ideal setting for a randomised controlled trial. However, the chance of him coming across such a situation – if you’ll indulge me in a counterfactual – was much higher than in, say, the Middle Ages or prior to the agricultural revolution. At the time of Snow’s breakthrough, London’s population had more than doubled in 50 years to 2.3 million, and the water supply was serving ever greater numbers of people. This enabled Snow to behave as though he were observing two parallel realities playing out side by side, identical for all intents and purposes except for one factor: where their water came from.
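To show the shape of the comparison this lucky setting allowed, here is a minimal sketch with invented numbers – not Snow’s actual figures:

```python
# Hypothetical counts, NOT Snow's real data - purely to show the shape of a
# comparison between two otherwise-similar groups of households.
suppliers = {
    "downstream_of_sewage": {"customers": 40_000, "cholera_deaths": 1_200},
    "upstream_of_sewage":   {"customers": 26_000, "cholera_deaths": 100},
}

for name, counts in suppliers.items():
    rate = counts["cholera_deaths"] / counts["customers"] * 10_000
    print(f"{name}: {rate:.0f} deaths per 10,000 customers")

# If the two groups really are alike in every other respect, the gap in death
# rates stands in for the counterfactual: "had these households drawn their
# water upstream, far fewer of them would have died."
```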

So by modifying our world to make it more standardised and predictable, we might be gradually moving towards a more robust setup for emulating alternative worlds. We’re not really mining alternative realities, but we are getting better at turning our world into a stage on which a limited number of similar but different-enough realities can run their course. We can compare them to our counterfactuals and see whether real-life measurements support, or discount, our hunches.

Our emulations of alternative realities are becoming more vivid

Despite the exceedingly high proportion of duff counterfactuals, then, in recent centuries we seem to be getting just a little better at leveraging our innate ability to fantasise while keeping our feet on the ground, by referring to information we gather in the real world.

This could be due to three factors:

  • Improved techniques for data gathering and manipulation, such as the cholera example above
  • A more standardised world, which gives us a greater stock of comparable data
  • More widely applicable conclusions, again thanks to standardisation, which make research more worthwhile

So we haven’t learnt to imagine alternative worlds more vividly: we’ve simply made our world emulate one in which we can compare different realities.

Think of how single-core computer processors emulate multithreading: they don’t really do several things at once, but by switching quickly between different tasks they allow an operating system to act as though this were happening. Or consider software running in a virtual machine: the user doesn’t really have the necessary hardware or operating system to run their software natively, but can behave as though they did.
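As a rough sketch of the first analogy – a toy round-robin scheduler of my own, not how any real operating system works – a single “core” can interleave tasks like this:

```python
from collections import deque

def task(name, steps):
    """Each task yields control back to the scheduler after every small step."""
    for i in range(steps):
        print(f"{name}: step {i + 1}")
        yield

# One "core" interleaving two tasks by switching rapidly between them.
ready = deque([task("download", 3), task("render", 3)])
while ready:
    current = ready.popleft()
    try:
        next(current)           # run one slice of the task
        ready.append(current)   # then put it back in the queue
    except StopIteration:
        pass                    # task finished

# The output interleaves the two tasks, so from the outside it looks as
# though both were running at once.
```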

The same applies to counterfactual thinking: we can’t really mine alternative worlds for insights, but we can, tentatively, behave as though we could. We do this by aggregating experience from comparable situations, whether events with broadly similar characteristics that recur throughout history or take place concurrently in different locations, or both. It’s by no means perfect, but it’s better than our inherently flawed imagination.

The information revolution – from the undo function to virtualisation

With the emergence of IT and communication networks, more and more of our lives take place inside computers. This is bringing us a step closer to our ideal of perfectly verifiable counterfactuals.

The humble “undo” function, for example, gives us the opportunity to experiment a little without ruining our work. Instead of pre-imagining what every single keypress will look like – a micro-future we’re incapable of foreseeing ourselves – we take a peek into a possible future with the computer’s help. “If I had put this paragraph at the beginning, it would read better,” I might theorise, before pressing “undo” when I see I was mistaken.
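One crude way to picture what “undo” gives us – a toy example, not how any real editor is implemented – is a document that keeps a stack of its earlier states:

```python
class Document:
    """Toy document whose undo history is a stack of previous states."""

    def __init__(self, text=""):
        self.text = text
        self._history = []

    def edit(self, new_text):
        self._history.append(self.text)   # snapshot the present...
        self.text = new_text              # ...then step into a possible future

    def undo(self):
        if self._history:
            self.text = self._history.pop()   # retreat to the saved reality


doc = Document("Introduction. Argument.")
doc.edit("Argument. Introduction.")   # "what if the paragraph came first?"
print(doc.text)
doc.undo()                            # turns out it read worse - go back
print(doc.text)
```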

The same goes for many types of preview – “what you see is what you get” word processors, precise layout designs for clients to sign off before commissioning a website or app, 3D printing, rapid prototyping, the startup scene’s “move fast and break things” mentality. All of these acknowledge our inability to imagine how small changes impact our world, while refusing to give up on the ideal of exploring alternative worlds isolated from a wider reality.

Software developers take this to a whole new level. In their line of work, a tiny change in hardware configuration can make the difference between a slick user experience and a blue screen of death or, in the case of anything delivered via the internet, “500 – internal server error”.

If big sites crash when code is changed, businesses can lose lots of money, which is why developers want to know: “What would happen if I deployed this code?” To answer this, they develop, test and stage their code in environments that exactly match the live server. Solutions such as Docker, which make it possible to configure and spin up precisely specified environments as containers, allow coders to answer these questions reliably.
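As a small illustration – assuming the Docker SDK for Python and a local Docker daemon, with an image tag chosen purely for the example – the same command can be run inside a precisely pinned environment:

```python
# Assumes the Docker SDK for Python ('pip install docker') and a running
# Docker daemon; the image tag below is illustrative.
import docker

client = docker.from_env()

# Run a command in a precisely specified environment. Anyone with this
# snippet and the same image gets, as near as possible, the same "world".
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "import sys; print(sys.version)"],
    remove=True,
)
print(output.decode())
```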

This is very close to our ideal of alternative worlds that we can mine for insights. Each container is pretty much a clone of reality that allows coders to predict alternative futures. Granted, it only works for things that can be run within these containers, i.e. computer applications. But it bears an uncanny resemblance to the ideal of causal thinking: alternative realities that are 100% configurable and reproducible.

Where reality meets simulation

Our lives are informed by epidemiological models at the moment and will be for some time. These aspire to run a copy of reality within a computer as though it were in a Docker container. They enable us to experiment with possible interventions and preview the results. Of course they aren’t anywhere near perfect, but they are at the very least a useful complement to our conjecture.
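A drastically simplified sketch of the idea – a textbook SIR model with invented parameters, nothing like the models actually informing policy – shows how such a “copy of reality” lets us preview an intervention before making it:

```python
# Minimal SIR epidemic model with made-up parameters - a toy "copy of
# reality" in which we can preview the effect of an intervention.
def peak_infections(beta, gamma=0.1, population=1_000_000, infected=100, days=180):
    s, i, r = population - infected, infected, 0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        recoveries = gamma * i
        s, i, r = s - new_infections, i + new_infections - recoveries, r + recoveries
        peak = max(peak, i)
    return peak

# Two alternative futures, differing only in the contact rate 'beta'.
print(f"Peak infections, no intervention: {peak_infections(beta=0.30):>10,.0f}")
print(f"Peak infections, contacts halved: {peak_infections(beta=0.15):>10,.0f}")
```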

Although the models aspire to be as “dirty” as the world outside, by taking account of as many variables as possible, the standardisation outlined above probably makes our world easier to model by reducing entropy.

Where we might go next

Given what I’ve said, I wouldn’t be so brash as to claim I have any idea of what the future will look like. I presume we will continue on the trajectory towards standardisation. Our lives may lend themselves ever more to categorisation, while opening up qualitative freedoms that help us feel unique.

One science-fiction scenario is that more and more of our lives take place inside computers, while our lives outside computers become less important. It becomes possible to spin up alternative realities, at least partially, and compare versions of the future, starting from various points.

William Gibson’s science-fiction novel Agency is a good model for this. Agents in a post-apocalyptic London have the ability to intervene in their past, but as soon as they do, a new “stub” diverges from that point onwards. People cannot move between the branches, but if the technology at the time allows, they can create various conduits to be present vicariously or, in underdeveloped times like our own, videoconference with the people within that stub. Are these alternative realities played out in software? I’ll have to finish the book to find out – but the similarities with technologies such as Git (version control software which usually has a “master” branch and various alternative branches that depart from different points in the project history) are striking.

So depending on how the future unfolds – information to which none of us is party – our irrational, presumably innate, unwillingness to accept we cannot predict alternative futures may shape the world around us for many years to come.