August 21, 2023
Imagine a two-by-two matrix containing all major issues regarding the future.
(That’s what most people think we consultants do, anyway. Two-by-two matrices.)
On the vertical axis, you have predictability, ranging from low to high as you move up the page.
On the horizontal axis, you have the level of strategic importance of the issue in question, once again ranging from low to high, as you move from left to right on the page.
If you had not spent your career dealing with future uncertainty for clients, you might suppose that the four quadrants of this diagram (Low predictability/Low strategic importance, High predictability/Low strategic importance, High predictability/High strategic importance, Low predictability/High strategic importance) would be populated about evenly. That’s a seemingly rational “null hypothesis.”
But it turns out not to be true.
You see, it turns out that the vast majority of issues having to do with the future are in only two of the four quadrants.
One of them is the “High predictability/Low strategic importance” quadrant. If you think about this for a second, you’ll understand why.
Things that are predictable generally cannot be of high strategic importance. To the extent that something is predictable, (a) you probably know exactly what to do about it; and (b) everyone else in the competitive sphere in which you are operating probably knows what to do about it too.
When something is pretty much cut and dried, well-understood, it probably cannot be of strategic importance.
The same logic runs in reverse. If something is of high strategic importance, it almost certainly is unpredictable. So the “Low predictability/High strategic importance” quadrant is going to be highly populated as well.
The other two quadrants, therefore – “High predictability/High strategic importance” and “Low predictability/Low strategic importance” – will be sparser, at least if we only count future eventualities that are relevant to our world of work.
Now, it is possible that there is an infinity of “Low/Low” items – things we don’t need to know for planning purposes that also are hard to predict. You probably don’t need to know whether the universe is ultimately going to collapse in on itself again in a trillion years, or whether it will simply expand until all the stars wink out one by one. At least not for Fiscal ’25 budgeting purposes. So, nice as it would be to know, it is not “strategic” for our purposes.
But it’s the last quadrant I am most interested in today. It is the “High predictability/High strategic importance” quadrant. And this is the arid savannah upon which the beasts of the Cult of Prediction roam.
If what I have said up to now is convincing – that the strategically important issues for planning purposes live in the quadrant where predictability is low – then a lot of what passes for serious thought about how to deal with future uncertainty is, in fact, highly unserious.
Dozens of tomes on forecasting (even “superforecasting”), Bayesian probabilities, game theory, “noise,” and, generally, data-driven, past-experience-based algorithmic planning for the future have hit the bestseller lists over the first two decades of the twenty-first century. Superforecasting, The Signal and the Noise, (to perhaps a lesser extent) Thinking, Fast and Slow, and Noise: A Flaw in Human Judgment, among many others, have caused planners worldwide to believe that if they can only take what is predictable and predict it, everything will be all right, and they will get their promotions, bonuses, and so on.
But the predictable is not the strategic. And the strategic is not predictable.
A cult of prediction has arisen to assuage the uncertainties of managers and leaders and their staffs, and has caused a generation of them to use the tools of prediction on things that are simply not predictable.
These people are not dumb. Some of them are perhaps the smartest of the smart. But their mathematical rigor is wasted on the targets upon which it is trained. When Goldman Sachs’ models of the mortgage market in 2007 stopped accurately reflecting what was happening, one of Goldman’s top people said, “We were seeing things that were 25-standard deviation moves, several days in a row.” As some Irish statisticians noted a year or two later, the probability this implied was roughly equivalent to the chance of winning the U.K. lottery 21 days running. It was the equivalent of this very, very intelligent man shouting to the portion of the world that understood statistics, “Only fools would trust our modeling!”
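For readers who want to see the arithmetic behind the statisticians’ derision, here is a minimal back-of-the-envelope sketch in Python. It assumes only the normal distribution that underlies all “N-sigma” talk and a conventional 252 trading days per year – both assumptions are mine for illustration, not anything from Goldman’s actual models.

```python
import math
from scipy.stats import norm

SIGMA = 25           # "25-standard deviation moves, several days in a row"
TRADING_DAYS = 252   # assumed number of trading days per year

# One-sided tail probability P(Z > 25) under a normal distribution,
# computed in log space because the number is absurdly small.
log10_p = norm.logsf(SIGMA) / math.log(10)
p = 10 ** log10_p

# Expected wait, in years, to see even one such day if the model were true.
years_between = 1 / (p * TRADING_DAYS)

print(f"P(a single 25-sigma day) is roughly 10^{log10_p:.0f}")
print(f"Expected wait for one such day: about {years_between:.0e} years")
```

Run as written, the sketch puts the chance of a single 25-sigma day somewhere around one in 10^138, which works out to an expected wait on the order of 10^135 years – vastly longer than the age of the universe, and that is for one such day, not several in a row. The point is not that Goldman was astronomically unlucky; it is that the model’s assumptions had parted company with reality.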
And that is because the truly strategic – in this case, the infernally complicated and covariant and unknowable actions and interactions (and inactions) of tens of millions of American borrowers and lenders and regulators and analysts and investors – is not predictable.
It is a testament to the power of the Cult of Prediction that, far from being fired, demoted, or otherwise shamed or shunned, the executive in question remained in his position for the next six-odd years and retired a very rich man. Because despite some smirks and japes from the investment community at his statement, the reaction in general was, to put it nicely, “Poop occurreth.” Bayesian algorithmic modeling of reality was the only game in town. Occasionally it would not work. But there was no alternative to it. It was prediction or nothing.
Essentially, what the worshippers of the Cult of Prediction are doing is pretending that things that are properly classified in the “Low predictability/High strategic importance” quadrant are actually in the adjacent “High predictability/High strategic importance” quadrant. And the Cult is so strong that Very Serious People are almost unanimous in assuming that there is no alternative to this pretense.
But there is an alternative to blindly pretending the strategically unpredictable is predictable, and we had all better get better at that alternative.
That alternative is rigorous imagination, as my book Fatal Certainty illustrates.