The Economist’s annual preview of the coming year once again includes predictions. And that is a problem. To show you why it was a problem for 2023, let me share some slides I presented recently to the Intelligence Leadership Forum. The tl;dr version: Things that can be predicted with certainty are almost never strategic; things that cannot be predicted with certainty MUST be handled using rigorous imagination to generate the fullest practicable range of plausible outcomes. Predictions of the “Superforecasting” type simply cannot be used to make strategic decisions. I’d say they are “nice to know,” but they do not even qualify as knowledge.
My critique of last year’s Economist Superforecasting:
Rise of the “Superforecasters”
- Philip Tetlock wrote Expert Political Judgment: How Good Is It? How Can We Know? in 2005
- Tested the prognostication skills of 284 political experts, collecting forecasts over two decades (1984–2003)
- Shocking results: Pundits were bad at prediction
- They would have done better by simply assigning an equal probability to each question's three possible outcomes
- Nor were they better at prediction within their area of expertise
- Headline result: “Dart-throwing monkeys would have done a better job at prediction than political pundits”
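The "dart-throwing monkey" baseline can be made concrete with the Brier score, the standard accuracy measure in forecasting tournaments. The numbers below are my own illustrative assumptions, not Tetlock's data; they show how a confident pundit who bets on the wrong outcome scores far worse than the uniform one-third baseline:

```python
# Multi-category Brier score: sum of squared errors between the
# forecast probabilities and the realized outcome vector
# (1 for what actually happened, 0 for the rest).
# 0 is a perfect forecast; 2 is maximally wrong.
def brier(forecast, outcome_index):
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(forecast))

# Hypothetical pundit: 80% on outcome 0, but outcome 2 happens.
confident_wrong = brier([0.8, 0.1, 0.1], outcome_index=2)  # 1.46

# The "dart-throwing monkey": 1/3 on each of three outcomes.
uniform = brier([1/3, 1/3, 1/3], outcome_index=2)          # ~0.667
```

Whatever actually happens, the uniform forecaster's score never exceeds 0.667, while a confidently wrong pundit can approach the maximum of 2 — which is the arithmetic behind the headline result.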
The followup: Superforecasting (2015)
- Tetlock’s next project: Is it possible to train (or identify) better predictors?
- “The Good Judgment Project”
- “IARPA posed 100-150 questions each year to research teams participating in its ACE forecasting tournament on topics such as the Syrian civil war, the stability of the Eurozone and Sino-Japanese relations. Each research team [had] to generate daily collective forecasts that assign realistic probabilities to possible outcomes.” – Washington Post
- “GJP won both seasons of the contest, and were 35% to 72% more accurate than any other research team. Starting with the summer of 2013, GJP were the only research team IARPA-ACE was still funding, and GJP participants had access to the Integrated Conflict Early Warning System.” – Wikipedia
Superforecasting 2023
The "Superforecasters" are still in use today
The Economist used them to predict 2023 events in November 2022
- Global growth (between 1.5% and 3%; actual so far appears to be 3%)
- Outcome of Turkish election (71% re-election of Erdogan; occurred)
- Will there be a general election in Britain in 2023 (90% no; appears accurate)
- China GDP growth (3.5% to 5%; latest estimate is 5%)
- Will China or Taiwan accuse other side of using weapons against its side? (83% no; so far, so good)
- Will Russia detonate nuclear weapons in Ukraine? (95% no)
- Will Putin cease to lead Russia by October 1, 2023? (91% no; "correct")
- When will Russia and Ukraine announce/sign a peace deal? (55% after April 2024)
- Who will win Nigeria’s election (53% All Progressives Congress; correct)
Back to Nate Silver…
“We need to stop, and admit it: we have a prediction problem. We love to predict things—and we aren’t very good at it. … If prediction is the central problem…, it is also its solution.”
WRONG
Prediction itself is the problem!
Specifically… prediction of strictly unpredictable things
Back to those predictions the Superforecasters made for the Economist…
Question: What decisions could you make as a result of these predictions?
My answer: NONE.
For every one of these questions, if its outcome mattered for decisions leaders would have to make, even a 90+% "probability" would not eliminate the need for contingency planning. Each falls into the category of the fundamentally uncertain.
Leadership Is About Decisions.
- When it comes to mission-critical decisions, “53%” or even “91%” is not good enough.
- If the impact of the remaining 47% or even 9% chance is high enough, you must plan for that eventuality as well.
- Probabilities are extremely misleading because human minds tend to collapse them into "gonna happen" or "not gonna happen."
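The second bullet is an expected-cost argument, and it can be sketched in a few lines. All of the numbers here are hypothetical, chosen only to illustrate the logic; they are not drawn from any of the forecasts above:

```python
# Why a 91% "no" still forces contingency planning when the
# 9% tail is catastrophic. Illustrative, assumed numbers only.
p_event = 0.09            # residual chance the confident forecast is wrong
cost_if_unprepared = 500  # loss if the event hits with no plan in place
cost_of_contingency = 20  # cost of preparing for the event anyway

expected_loss_no_plan = p_event * cost_if_unprepared   # 0.09 * 500 = 45.0
expected_loss_with_plan = cost_of_contingency          # 20.0, paid either way

# Planning dominates: 20.0 < 45.0, despite the 91% forecast.
plan_for_it = expected_loss_with_plan < expected_loss_no_plan
```

The point is not the specific numbers but the shape of the comparison: once the tail impact is large enough relative to the cost of preparing, the "91%" does no decision-relevant work at all.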
[In short: prediction of the type that the Superforecasters engage in can never be more than a parlor trick. If what they are predicting is precise enough to matter for strategic purposes, it cannot be certain enough to be the basis of any decision. Strategy is not about prediction. It is about imagining the fullest variety of conditions under which strategic decisions might have to be made. It can never be a “this or that” choice, because “this” and “that” far too rarely turn out to be the actual alternatives of strategic interest. I would say the “Superforecasting” output is merely “nice to know,” but it does not even qualify as “knowledge.” IMAGINATION, people. Not prediction.]