May 22, 2024
I suspect we all doubt ourselves from time to time. Unless we are sociopaths, we have to believe, at certain points, that maybe we’ve gotten something big wrong. I also suspect that’s how we have evolved to learn and therefore survive.
But if we are lucky, sometimes we get something big right. And then we see examples of that thing all around us. It’s been that way for me with “the Cult of Prediction.” I’ve fallen into doubt at times. Is this real? Is it really a problem? Or are people catching on, and it’s going away?
Then someone like Tom Chivers writes a book such as Everything Is Predictable, and the doubts are at least momentarily dissolved. Chivers writes, and I quote:
- “When we make decisions about things that are uncertain – which we do all the time – the extent to which we are doing that well is described by Bayes’ theorem. Any decision-making process, anything that, however imperfectly, tries to manipulate the world in order to achieve some goal, whether that’s a bacterium seeking higher glucose concentrations, genes trying to pass copies of themselves through generations, or governments trying to achieve economic growth: if it’s doing a good job, it’s being Bayesian.”[1]
In the same Introduction, he says, “All that we do, all the time, is predict the future. …We’re not basing [decisions] on mystical visions, but on information we have gathered in the past.”
He’s wrong. We do things other than predicting the future based on past information. But let’s stipulate that at the granular level he’s talking about, like assuming our next breath will not kill us and that the corner shop will contain granola, it’s a lot of what we do in daily life.
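For concreteness, the theorem Chivers invokes can be run in a few lines. A minimal sketch, with hypothetical numbers of my own choosing (the event, probabilities, and evidence are all illustrative, not drawn from any real forecast):

```python
# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
# Hypothetical example: a forecaster's prior that a recession starts
# this year, updated on the evidence of a weak jobs report.

def bayes_update(prior, likelihood, likelihood_if_not):
    """Return posterior P(H | E) from P(H), P(E | H), and P(E | not H)."""
    evidence = likelihood * prior + likelihood_if_not * (1 - prior)
    return likelihood * prior / evidence

prior = 0.10            # P(recession) before the report
p_weak_if_rec = 0.70    # P(weak report | recession)
p_weak_if_not = 0.20    # P(weak report | no recession)

posterior = bayes_update(prior, p_weak_if_rec, p_weak_if_not)
print(round(posterior, 3))  # 0.28
```

One weak report nearly triples the estimated probability, yet the answer is only as good as the three numbers fed in, and for intersubjective quantities those numbers are themselves guesses.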
Later on, after describing the sort of Bayesian prediction he advocates for all decisions taken under uncertainty, he says the following:
- “Good forecasters also use the wisdom of the crowds. That is, they update their forecasts on the basis of what others say. The average of several forecasters’ predictions is likely to be more accurate than any particular forecaster’s, for the same reason as the Fermi estimates—because the forecasters’ errors tend to cancel one another out.”
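On its own mathematical terms, the error-cancellation mechanism is real and easy to demonstrate. A quick simulation, assuming what the argument quietly requires: forecasters whose errors are independent and unbiased (the true value, noise level, and panel size here are illustrative):

```python
import random

random.seed(42)
TRUTH = 2.5  # the quantity being forecast, e.g. GDP growth (%)

def forecast():
    # Each forecaster is unbiased but noisy (std. dev. 1.0).
    return random.gauss(TRUTH, 1.0)

trials = 10_000
solo_err = 0.0
crowd_err = 0.0
for _ in range(trials):
    panel = [forecast() for _ in range(48)]  # a 48-member panel
    solo_err += abs(panel[0] - TRUTH)
    crowd_err += abs(sum(panel) / len(panel) - TRUTH)

print(f"lone forecaster MAE: {solo_err / trials:.2f}")
print(f"crowd average MAE:   {crowd_err / trials:.2f}")
```

The crowd average beats any single forecaster by a wide margin. But everything hinges on the independence assumption; when the panel shares the same model of the world, the errors point the same way and nothing cancels.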
This turns out to be objectively terrible advice.
Oh, it works on purely scientific things, like astronomy, particle physics, chemical reactions, and the like, where the phenomenon in question has a concrete reality that has to abide by the laws of physics, and further estimates tend to contribute to greater certainty.
But objectively measurable scientific experiments are almost never the subject of strategically important decisions under conditions of uncertainty. At least, not for the generations of organizational leaders with whom I have had the privilege of working. The realm of what the economist Frank Knight called “higher uncertainty” – where there can be no obvious basis for a percentage estimate, nor can such percentage estimates guide leaders in decision-making – is often the realm of intersubjective realities.
Intersubjective realities are agreed-upon fictions that allow human beings to act en masse in a coordinated way. Prices are agreed-upon social fictions. Currencies are intersubjective realities. “The market” and interest rates and GDP growth and even unemployment rates are all agreed-upon fictions that coordinate action across societies.
And directly contrary to Chivers’ sunny title, in the world of intersubjective realities, Nothing Is Predictable. More precisely, prediction of intersubjective realities is both unreliable and profoundly dangerous.
Let me give you what I think is an irrefutable example.
When would you most have wanted an accurate Bayesian estimate of future economic quantities? I would submit to you that you would have really, really wanted accurate estimates for economic guidance just prior to either an economic boom, or a global financial crisis. Specifically, what you would require from forecasters would be the precise timing of when the bottom was likely to drop out (or the markets zoom to the stratosphere).
Chivers/Bayes would presumably prescribe Bayesian estimates of future numbers, and also would say that one should use the “wisdom of crowds” to get the best possible estimate of future outcomes.
Fortunately, there are many examples of how this approach has actually worked out in practice (spoiler alert: not well). I’ll take one from my forthcoming book, Fatal Certainty: How a Cult of Prediction Made the 21st Century an Era of Strategic Shock – and How Rigorous Imagination Could Bring Us Back.
* * *
The Philadelphia Federal Reserve Bank publishes a regular survey of forecasters who estimate what GDP growth, inflation, and the like will be over the coming year or so. It is instructive to look at their forecasts for the fourth quarter of 2007 and the full year 2008. Under the headline “Another Round of Cuts to the Outlook for Short-Term Growth,” they published the following estimates for the United States for that period – both their previous estimates and their latest revisions. They are very reminiscent of Tetlock’s “Superforecasters,” in that with each survey they adjust their estimates, presumably on Bayesian principles. You might also detect a bit of intersubjective groupthink, as the estimates do not show much variation in response to perceived events.[2]
| Period | Real GDP (%), Prev. | Real GDP (%), New | Unemployment (%), Prev. | Unemployment (%), New | Payrolls (000s/mo), Prev. | Payrolls (000s/mo), New |
|---|---|---|---|---|---|---|
| 2007: Q4 | 2.7 | 1.5 | 4.7 | 4.7 | 114.5 | 114.7 |
| 2008: Q1 | 2.7 | 2.2 | 4.7 | 4.8 | 113.8 | 100.6 |
| 2008: Q2 | 2.9 | 2.3 | 4.7 | 4.9 | 114.6 | 75.7 |
| 2008: Q3 | 2.7 | 2.8 | 4.7 | 5.0 | 121.1 | 119.2 |
| 2008: Q4 | N.A. | 2.8 | N.A. | 5.0 | N.A. | 134.4 |
| 2007 (annual avg.) | 1.9 | 2.1 | 4.6 | 4.6 | 156.0 | 151.6 |
| 2008 (annual avg.) | 2.8 | 2.5 | 4.7 | 4.9 | 118.0 | 103.5 |
As you can see, the panel of forecasters foresaw lower growth than had previously been forecast. Their estimate of GDP growth for the fourth quarter of 2007 was cut from 2.7% to 1.5%. They sliced their growth estimate for the second quarter of 2008 by six tenths of a percentage point, from 2.9% to 2.3%. But they saw growth bouncing back in the third and fourth quarters of 2008 to a healthier 2.8% annual rate. Average unemployment, they thought, would inch upward a bit faster than they had previously estimated, but only from 4.6% to 5.0% over the course of 2008. They still expected 2008 GDP growth to exceed 2007’s. Job creation, they conceded, would fall off: from about 151,600 jobs a month in 2007 to a mere 103,500 a month in 2008 – itself a cut from their earlier 2008 estimate of 118,000 jobs a month.
Obviously, the forecasters foresaw some economic bumps in the road for 2008, but felt that they would be surmounted by the second half of the year.
And here are the actual numbers, for exactly the sort of time period in which forecasters’ consumers would presumably have been most grateful for an accurate prediction:
| Period | GDP (%), Prev. | GDP (%), New | GDP (%), Actual | Unemp. (%), Prev. | Unemp. (%), New | Unemp. (%), Actual | Payrolls (000s/mo), Prev. | Payrolls (000s/mo), New | Payrolls (000s/mo), Actual |
|---|---|---|---|---|---|---|---|---|---|
| 2007: Q4 | 2.7 | 1.5 | -0.2 | 4.7 | 4.7 | 4.8 | 114.5 | 114.7 | 91 |
| 2008: Q1 | 2.7 | 2.2 | -1.6 | 4.7 | 4.8 | 5.0 | 113.8 | 100.6 | -38 |
| 2008: Q2 | 2.9 | 2.3 | 2.3 | 4.7 | 4.9 | 5.4 | 114.6 | 75.7 | -72.7 |
| 2008: Q3 | 2.7 | 2.8 | -2.1 | 4.7 | 5.0 | 5.9 | 121.1 | 119.2 | -231.7 |
| 2008: Q4 | N.A. | 2.8 | -8.5 | N.A. | 5.0 | 6.9 | N.A. | 134.4 | -425.7 |
| 2007 (annual avg.) | 1.9 | 2.1 | 2.0 | 4.6 | 4.6 | 4.6 | 156.0 | 151.6 | 67.3 |
| 2008 (annual avg.) | 2.8 | 2.5 | 0.1 | 4.7 | 4.9 | 5.8 | 118.0 | 103.5 | -192 |
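The size of the miss can be made concrete. A short calculation over the quarterly real GDP rows above, comparing the panel’s revised (“New”) estimates against the realized figures:

```python
# Quarterly real GDP growth, 2007:Q4 through 2008:Q4 (annualized %),
# taken from the forecast table above.
quarters = ["2007Q4", "2008Q1", "2008Q2", "2008Q3", "2008Q4"]
forecast = [1.5, 2.2, 2.3, 2.8, 2.8]    # the panel's "New" estimates
actual   = [-0.2, -1.6, 2.3, -2.1, -8.5]

errors = [f - a for f, a in zip(forecast, actual)]
mae = sum(abs(e) for e in errors) / len(errors)

for q, e in zip(quarters, errors):
    print(f"{q}: forecast error {e:+.1f} points")
print(f"mean absolute error: {mae:.2f} points")  # 4.34
```

An average miss of more than four percentage points on a quantity that usually moves within a band a fraction of that size – and the worst single miss, 11.3 points in the fourth quarter, came in the very period when an accurate number mattered most.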
As you can see, the estimates for 2007 were actually quite good. This should be unsurprising: by the survey’s release on November 13, 2007, most of the year’s statistics were already in the books.
And there’s more good news: their revised estimate of second quarter 2008 GDP growth was right on the money!
But otherwise… not so much. Things took a sharp downward turn in 2008. Payrolls had in fact begun to shrivel in December 2007 – the month in which, unbeknownst to these forecasters, the recession had already begun – and they continued to shrink at an ever-increasing pace throughout 2008. Surprisingly, 2008 did register positive growth for the year as a whole, but only the barest amount possible: 0.1%. The true cliff emerged in 2009, when GDP fell at an annualized rate of 9% in January. Unemployment ultimately peaked at 10.0% in 2009.
So, which would you rather have had in late 2007: the consensus Bayesian forecast of these forty-eight recognized prognosticators;[3] or an appreciation for how far off the mark such economics groupthink had been in the runup to the Great Depression, and a deep anticipatory understanding of what it might mean for the worst economy in almost 80 years to be upon us?
The level of fundamental uncertainty in our world is far, far higher than most smart people – and most experts – would like to think about, much less admit.
And their refusal to do so is a problem.
* * *
Lest I be unfair to Chivers, he does say in his book that the “wisdom of crowds” approach can lead one astray.
- “But you can be more sophisticated than that. ‘The simplest thing would just be to average them,’ says Mike Story, the previously mentioned superforecaster. ‘Assume random noise is the reason why the experts disagree. But we also know that people differ in their ability to make accurate forecasts, and that can give you a clue for who to listen to. If they’ve predicted something terrible happening every six months for the last twenty years, maybe you pay a little less attention to them. But someone who’s well calibrated and has a good track record, you would pay a lot more attention to.’ In Bayesian terms, you treat forecasts from reliable forecasters as having more information-they’re like a likelihood function with a sharper peak, which moves your prior further.”
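The scheme Story describes – trust the well-calibrated forecaster more – has a standard Bayesian form: weight each forecast by its precision, the inverse of its error variance. A sketch with hypothetical numbers (the forecasts and track records are invented for illustration):

```python
# Combine point forecasts by inverse-variance (precision) weighting.
# A forecaster with a tight historical error distribution acts like
# "a likelihood function with a sharper peak" and so gets more weight.

def combine(forecasts, error_stdevs):
    """Precision-weighted average of forecasts, given each
    forecaster's historical error standard deviation."""
    weights = [1.0 / s**2 for s in error_stdevs]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

# Hypothetical panel: GDP growth forecasts (%) and track records.
forecasts = [2.8, 2.5, 0.5]   # the third forecaster is the pessimist
stdevs    = [0.5, 0.5, 2.0]   # ...who has been wildly off before

print(round(combine(forecasts, stdevs), 2))  # 2.58
```

Note what the weighting does: the pessimist’s 0.5% forecast is almost entirely discounted, and the combined estimate lands near the consensus. Which is exactly the problem when, as in 2007, the consensus is what is wrong.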
How could this possibly have worked in 2007, one might ask? I would bet you a ton of cash (using my own Bayesian priors) that it was largely the economists that “predicted something terrible happening every six months for the last twenty years” who turned out to be most “accurate” in 2007. But were they accurate, or were they simply lucky? It’s virtually impossible to tell in real time. We all would have been far better off if these economists had taken their undoubted intelligence and applied it not to prediction, but to imagining a variety of outcomes, and thinking of ways policymakers and market actors might have dealt with those outcomes.
Chivers, toward the end of his book, says the following: “As we said at the beginning: you can predict the future. You do it every single second. You’re doing it at a micro-level, and have to, if you are to successfully navigate the world and not trip every time you try to walk. You’re doing it at a very high level when you book a holiday for next year and predict that Lanzarote will still exist and that Jet2’s Airbus will fly you there.”
None of these examples are strategic, high-stakes decisions real organizational leaders might have to make dealing with fundamentally uncertain intersubjective realities. For these kinds of truly strategic decisions under “higher uncertainty,” the only approach that can help is anticipation via rigorous imagination – the development of multiple high-level scenarios of the future, to provoke us systematically to anticipate an arbitrary number of plausible and consequential eventualities.
Like, the various points at which the bottom might suddenly drop out, and all Bayesian predictions suddenly fail.
[1] Excerpts From: Chivers, Tom. “Everything Is Predictable.” Atria, 2024-05-07. Apple Books.
[2] https://www.philadelphiafed.org/-/media/frbp/assets/surveys-and-data/survey-of-professional-forecasters/2007/spfq407.pdf?la=en
[3] https://www.philadelphiafed.org/-/media/frbp/assets/surveys-and-data/survey-of-professional-forecasters/2007/spfq407.pdf?la=en