Every aspiring IR student knows that there are known knowns, known unknowns and unknown unknowns. Or that there are low probability/high impact scenarios. Or that there are Black Swans, unexpected and pattern-breaking events, made popular by Nassim Nicholas Taleb.
Just google it and you’ll find angry debates on social media about whether COVID-19 was a Black Swan event or not, and about the extent to which politicians and experts should have been prepared for such a thing to happen. Especially since it was a known known that globalization and travel increase the risk of an actual global pandemic, even if the when, where and how are very uncertain.
And yes, we had some assessments on that (fertile ground for conspiracy theorists, who can claim that it was in fact Bill Gates who started the pandemic, since he spoke about it a few months before), and there had been previous pandemics originating in China in the last few decades. So, is COVID-19 a Black Swan, or should we just topple every government in the world for not being prepared? The same goes for terrorist attacks, be they 9/11-style large-scale operations or a single man in France wielding his knife.
Obviously, one just can’t have a contingency plan for every possible scenario. It was impossible even back in the ancient world (think of King Hammurabi, who tried to cover every imaginable scenario in his criminal code and assign a specific sentence to each and every one of them). And it is definitely impossible now.
In their book packed with real-world stories (like President Barack Obama and the raid on Osama bin Laden’s hideout) and funny similes (like the one about whom one should entrust with flying and landing a plane: the experts or, following the “wisdom of the crowds” notion, the passengers), Messrs. Kay and King go even further. They claim that most of the tools analysts use to predict the future are flawed and give a false sense of security and certainty.
In short, they claim that “perfect information” doesn’t exist and that the governing principle of the real world is uncertainty. Instead of being the exception, “unknown unknowns” are in fact the rule, the majority. Our understanding of the present is so fragmentary that we cannot even assess the situation correctly; mostly, we guess and rely on our intuition.
And the future? Well, forget it. Or, as Dennis Gabor said: “The future cannot be predicted” (though he continued that “it can be designed”). Given that the present is chaotic, the future is inherently unpredictable.
Every narrative we create, be it a theory of IR or a theory of economics, might be a powerful explanatory tool, but one with built-in limits. And, as the 2008 subprime crisis has shown us, with the benefit of hindsight we can all point out their flaws after the crisis hits. But not before.
So, in the authors’ view, facing the “radical uncertainty” (hence the title of the book) of the world, politicians and individuals need to make decisions while being aware of their limitations. This way, they would be more resilient than if they based their decisions on complex models. Simple but robust principles would be preferable to complicated calculations, proportions, ratios and risk-assessment tools. Contrary to Bayesian thinking (and the Monty Hall paradox), it is impossible to assign a probability to everything. Of course, models/narratives are necessary, but they shouldn’t become the final arbiters: they are tools to help us make sense of the world, not to be used as if they were “the word of God”.
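The Monty Hall paradox is worth a short digression: it is exactly the kind of “small world” problem where, unlike in radical uncertainty, a probability really can be assigned and checked. A minimal simulation (my own sketch, not from the book) shows why switching doors wins about two thirds of the time:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round of the Monty Hall game; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)   # the prize is behind a random door
    pick = random.choice(doors)  # the player picks a door at random
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

def win_rate(switch: bool, trials: int = 100_000) -> float:
    """Estimate the probability of winning over many independent rounds."""
    return sum(monty_hall_trial(switch) for _ in range(trials)) / trials
```

Running `win_rate(True)` hovers around 2/3 and `win_rate(False)` around 1/3, which is the Bayesian answer. The authors’ point is that almost no real-world policy question has this closed, well-defined structure.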
Some uncertainty is measurable, or at least can be expressed probabilistically, so it can be treated as a puzzle, like the probability of a hurricane hitting the US in any given year. Other situations are mysteries: since they have never happened before, it would be futile to try to determine their outcomes mathematically. Like the impact of COVID-19 on the world economy. Or the effects of Brexit, for that matter.
The economy and society, basically the subjects of the “social sciences”, are so complex and non-deterministic that the disciplines studying them cannot and should not aim for the same “precision and consistency” that the natural sciences can achieve. While 2+2 is 4 in every possible universe (though a straight line might eventually return to itself in Bolyai’s geometry, which is impossible in the Euclidean one), putting 2+2 people together in a room can lead to many possible outcomes. In the social sciences, there is always a need for subjective judgements. There will never be a comprehensive list of possible outcomes, especially not with well-defined numerical probabilities attached.
Cognitive biases are another favorite punching bag for the authors. Every political analyst has learned that analysis can be distorted by our personal biases, so during the analytical process we should identify and avoid them. Clear and simple, right? Well, the authors claim that right and wrong answers can be clearly identified only in small worlds, like the model scenarios used in classrooms. “Biases” in the real world are only rarely the result of errors in beliefs or logic. Rather, they are the products of reality itself: a reality in which we make decisions in the absence of a precise and complete description of the world.
This won’t change with “big data”. Having more data won’t necessarily help us, because every question we ask, every answer we search for, is based on one model or another. We might find data that clearly show us a pattern, but had we asked a different question, we might have arrived at a different answer. As the authors put it, “probability analysis can turn evidence-based policy into policy-based evidence”. I really like a quote from Jeff Bezos, who claimed that “the thing I have noticed is that when the anecdotes and the data disagree, the anecdotes are usually right. There’s something wrong with the way you’re measuring”.
The authors offer automated “robo-adviser” software as another example. When it is applied without understanding its built-in limitations, it can lead to false assessments. According to them, big data carries other risks as well. As it becomes ubiquitous, so much can be known about any individual that conventional insurance becomes impossible, simply because there won’t be enough uncertainty left to pool risks.
Unfortunately, Messrs. Kay and King don’t go further. They don’t actually offer a model or a method for how, then, one should make decisions. You can glean some ideas, like the need to clearly distinguish between what we know and what we merely think. But based on their thoughts, how do we know that we know, and not just think that we know?
IN POLITICS AND BUSINESS, UNCERTAINTY IS A SOURCE OF OPPORTUNITY FOR THE ENTERPRISING
It can also be deduced that groups can still be valuable in decision making (though definitely not in every scenario, like landing an airplane). Because “humans actually can thrive even in radical uncertainty”, if and when “creative individuals can draw on collective intelligence and … operate in an environment which permits a stable reference narrative. Within the context of a secure reference narrative, uncertainty is to be welcomed rather than feared.” (…) “In politics and business, uncertainty is a source of opportunity for the enterprising, though also associated with paralysis of decision-making in bureaucracies staffed by risk-averse individuals, determined to protect their personal reference narratives.”
It might be just me, but, despite this book being a very good read, some questions are left open. We, both as human beings and as political analysts, still need to make some sort of prediction now and then; we need to identify risky situations and assumptions, try to sort out why a certain government did a certain thing, and provide early warning. I bet any analyst would be fired if he/she answered the question “how should we make policy in the face of COVID-19/radical Islamists threatening our country?” by saying “sorry, I don’t have enough information”. The authors might be right to dismiss the notion of an objective probability distribution, but there must still be a way (because we need one) to rank outcomes and the likelihood of how they’ll change under different policies.
If we still have to, or can, use the models (acknowledging their limitations), then was this all just about introducing some new definitions? So now we have known knowns, known unknowns and unknown unknowns; plus low probability/high risk events; plus Black Swans AND puzzles and mysteries?