August 16, 2023
Πόλλ’ οἶδ’ ἀλώπηξ, ἀλλ’ ἐχῖνος ἓν μέγα
“The fox knows many things, but the hedgehog one great thing.” – Fragment from the Greek poet Archilochus, c. 680-645 B.C.E.
I am a hedgehog.
And I’m supposed to be a fox. Nate Silver, in his book The Signal and the Noise, says we should all be foxes.
“Hedgehogs are type A personalities who believe in Big Ideas—in governing principles about the world that behave as though they were physical laws and undergird virtually every interaction in society…. But foxes happen to make much better predictions. They are quicker to recognize how noisy the data can be, and they are less inclined to chase false signals. They know more about what they don’t know.”
Excerpt from Nate Silver, The Signal and the Noise
In this, Silver echoes Philip Tetlock, whose book Superforecasting lauds presumed foxes and bashes hedgehogs:
“Animated by a Big Idea, hedgehogs tell tight, simple, clear stories that grab and hold audiences. … Foxes don’t fare so well in the media. They’re less confident, less likely to say something is “certain” or “impossible,” and are likelier to settle on shades of “maybe.” And their stories are complex, full of “howevers” and “on the other hands,” because they look at problems one way, then another, and another. This aggregation of many perspectives is bad TV. But it’s good forecasting. Indeed, it’s essential.”
Excerpt from Philip E. Tetlock & Dan Gardner, Superforecasting
So Nate Silver and Philip Tetlock advocate using “foxy” approaches to forecasting: constantly getting new information, letting it alter their predictions, using percentage probabilities, and being straightforward about their uncertainty levels. This “foxier” approach to prediction, they say, is superior to that of the pundit class, which is rewarded handsomely for absolute certainty no matter how many times its members are wrong. Silver says, in his book, “We love to predict things—and we aren’t very good at it. … If prediction is the central problem…, it is also its solution.”
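(To make that concrete: the updating Silver and Tetlock describe is, at bottom, Bayes’ rule. Here is a minimal sketch, with purely hypothetical numbers, of how a forecaster’s percentage probability shifts as new evidence arrives.)

```python
# A minimal, purely illustrative Bayesian update: a forecaster starts with a
# prior probability for an event and revises it as each piece of evidence
# arrives. The numbers below are hypothetical, chosen only for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the event after seeing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

p = 0.40  # initial forecast: a 40% chance the event happens
evidence = [(0.7, 0.3), (0.6, 0.5)]  # (likelihood if true, likelihood if false)
for like_true, like_false in evidence:
    p = bayes_update(p, like_true, like_false)
    print(f"updated probability: {p:.2f}")  # prints 0.61, then 0.65
```

That, numerically, is all that “letting new information alter their predictions” amounts to.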
And that’s why I call Nate Silver and Philip Tetlock “hedgehogs.”
Sure, they are “foxy” in their approach to prediction.
But prediction itself is their Big Idea. And prediction applies to an extremely limited, and often nonstrategic, sphere of problems. You can predict in realms of hard science. You can predict in poker, chess, and other such closed games. You can use prediction in things like baseball and football and basketball, which aren’t entirely closed games but whose rules do not change suddenly, and in which past data, especially about individual athletes and teams, can generally be relied upon to predict future performance.
Finance is a sphere in which prediction can be used. Not because it does a reliable job of telling us what is going to happen (anyone remember 2008?), but because widely shared mental models of the future can be used to hedge one’s position: other people can be found who will accept the risk you fear, for a certain price. But as soon as you hedge your position – exchanging a certain risk for money – that hedged risk has, in an important sense, ceased to be of strategic interest.
Nate Silver and Philip Tetlock are not really foxes. They are hedgehogs. I do not doubt their sincerity at all. But if they really want to be foxes, they need to stop seeing Bayesian numerical prediction as the only legitimate way to deal with future uncertainty. They need to broaden their portfolio of approaches to future uncertainty, especially in areas of what the economist Frank Knight called “The higher uncertainty.” Just over a century ago, Knight wrote, “In general the future situation in relation to which we act depends upon the behavior of an indefinitely large number of objects…. It is only in very special and crucial cases that anything like a mathematical (exhaustive and quantitative) study can be made.”
So Nate and Philip are not really “foxes” at all.
And neither am I. I’m not a “Type A personality,” I don’t think. But I do know one big thing. The “Big Idea” I proudly flog, at least until I am convinced it is being over-relied upon, is rigorous imagination: imagining the full range of plausible futures that could face the organization in question, without respect to their probability (above a threshold of plausibility), and planning for each of them as if it were going to happen.
Until the Big Idea of the Cult of Prediction has been revealed as the very limited and often misleading and dangerous game that it has proved to be in this young but aging century, I am content to be a hedgehog for rigorous imagination. I hope my book Fatal Certainty hastens that process.