October 22, 2023
The Washington Post has an article today about Sam Bankman-Fried, the cryptocurrency mogul now on trial on charges of fraud. It starts with the following paragraph:
‘As Sam Bankman-Fried (popularly known as SBF) gets ready to take the stand in his own trial, some fundamental questions remain unanswered. Did SBF think he was only trying to do good, and should that matter? What is “effective altruism,” the philosophy that supposedly guided his decisions? And why did we all fervently want to believe that a flashy tech entrepreneur could change the world for the better?’
Two issues seem important to me here. One, the question of whether “SBF” thought he was only trying to do good. The other: even if he was sincere, should he have relied on Effective Altruism to guide his actions? In other words, is “Effective Altruism” a serious approach to planning for the future?
In my distant (some 8,000 miles southwest of the trial site) opinion, the first question gets at a very important point about the world we live in today, a sort of cognitive black hole that is sucking more and more prominent (and less than prominent) human beings into its maw.
The answer to the second question, in my opinion, is an emphatic “no,” despite my respect for the people who invented the concept of “effective altruism.” I believe they were very sincere, if mistaken; but their philosophy falls prey to the Cult of Prediction that is the central thesis of my forthcoming book, Fatal Certainty: How a Cult of Prediction Made the Twenty-First Century an Era of Strategic Shock – and How Rigorous Imagination Could Bring Us Back.
1. Did SBF believe he was simply trying to do good?
I cannot look into the mind of the person in question, so this is an impossible matter to judge. But it LOOKS like an example of a phenomenon I, at least, have seen more and more of since the turn of the century: people placing themselves in a sort of cognitive/emotional state in which the issue of the truth or falsehood of things in which they purport to believe does not even come up. Usually this Bermuda Triangle of cognition arises when people are expressing belief in things that make them feel better about either themselves, their pre-existing prejudices, or simply the naked pursuit of their own self-interest.
In this particular case, Sam Bankman-Fried had pledged to donate all the money he made from his crypto-based empire to charity. By one of the moral principles of Effective Altruism, it can often be better to make a lot of money now and pile it up, rather than donating it immediately, if piling it up enables you to make much more money, and do far more good, for future beings, whose moral worth is stipulated to be equal to ours in the present.
The potential for moral hazard here is obvious. “EA” has had huge appeal to tech billionaires, and is widely supported by them (monetarily, in some cases). Anyone telling you that your pursuit of filthy lucre is not only morally okay but in fact morally SUPERIOR to giving that money away now is going to seem very appealing if you are already predisposed to piling up your wealth and keeping it… for now. So separating true belief in “EA” from either cynical use of the philosophy or simple self-delusion becomes a very hairy proposition. It’s so easy to get sucked into a very profitable zone of cognitive and moral uncertainty, a sort of reverse Bermuda Triangle for wealth.
But more broadly, there is a New Reductionism out there that seems to be shared by many, many human beings these days, in which “Do they believe it?” becomes an unanswerable question. I cannot say for certain that its rise has been a result of the World Wide Web, with its dizzying multiplication of “facts” and data. But I sense that there’s a link. When you have a strong set of pre-existing opinions, and there exists a mechanism designed to cater to any strong opinion you happen to have, reinforcing that opinion with more “evidence” of its truth and more “evidence” of the scurrilous falsity of any contrary opinion (and the evil nature of those who hold it), well, it is possible that human beings can be made to “believe” just about anything.
The “Reductionism” part comes after the believer has been tenderized by repeated doses of “evidence” that they are right and anyone who disagrees is evil. Once you have read or watched or heard hundreds of reports about your opponents to the effect that, e.g., they not only differ with your side (which is “normal,” “rational,” “traditional,” “patriotic,” “intelligent,” etc.), but they want to destroy your country and bring about a revolution in which all “good” people’s way of life and freedoms will be sacrificed to an extremist new regime, then, even if you do not believe all these extreme theories about your opponents, you are almost certainly now more likely to say to yourself, “Sure, some of these theories are nuts, but you know what? I think these people really ARE dangerous, want to take away my freedoms, and want to destroy our country and pervert our children’s minds.” (The fact that it is also the opponents’ country, with their own kids in it, seems to disappear into the Bermuda Triangle of Cognition.)
I’ll give an example. When the key findings of a top secret National Intelligence Estimate, presented by the Intelligence Community to President George W. Bush in 2007, became public and indicated that Iran had halted its nuclear weapons program, liberal outsiders crowed that Bush must have been crushed that the report had leaked, because clearly he was 100% convinced Iran was pursuing nuclear weapons. For several years, liberals believed that some person of conscience within the administration had decided that the rush to war with Iran on the part of Bush administration hardliners (and Bush himself) had to be forestalled, and that we had been saved from another disastrous war by this secret liberal.
At the beginning of 2009, David Sanger of the New York Times published his book The Inheritance: The World Obama Confronts and the Challenges to American Power. It confirmed that a single person had been behind the release of the report’s findings. The name of this unknown liberal? George W. Bush. The President had told his senior officials, including an incredulous Vice President Dick Cheney, that he had been accused of covering up intelligence evidence that Iraq had no WMD programs, and he was not about to repeat the mistake. He himself had ordered the headline findings of the report to be released.
This data point should have served to complexify the reductionist view that many people had previously held of that administration. But I would submit that it almost certainly did not succeed in doing so for most. I think humans are quite capable of achieving a warped version of F. Scott Fitzgerald’s phrase, “The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.” In our new 21st century version, “The test of a first-rate ideologue is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to ignore the implications of the inconvenient one.” Many of us have found a liminal space in which we can both believe and not-believe the exact same propositions, without, to mangle Fitzgerald’s predecessor Keats’ formulation, “any irritable reaching after fact and reason.”
I pick on liberals here, but I of course have my own opinions about who the real worst offenders are today on this score. Of course we know what the man says about opinions.
So we beat on, boats against the current, borne back ceaselessly to our pre-existing views.
2. Should SBF have relied upon Effective Altruism to guide his actions?
The second question is more straightforward, in my view. No, Effective Altruism is not a reliable moral guide to current actions intended to affect the future. This despite the real respect I have for its founders, Will MacAskill of Oxford in particular. At least, unlike so many these days, he is thinking seriously about the distant future, and imagining conditions far different from current ones.
The remainder of this post is excerpted from Fatal Certainty:
…In the year 2022, the Oxford professor of philosophy William MacAskill published his second major book, What We Owe the Future. It became a bestseller, then was overtaken by headlines about financial (and other) supporters of the movement it was attempting to promote: “longtermism,” “the idea that positively influencing the longterm future is a key moral priority of our time.”[1]
…“Longtermism” says that the interests of future generations should be accorded some weight, as they will be as human as we are. (Yuval Noah Harari might dispute that; he thinks that technology (IT, genetic alteration) will make human beings of the future far different from the humans of 2023.)
Unfortunately, the basic approach partakes of many of the ills the book you are reading is being written to fight against. It uses prediction – in this case, “expected value” calculations – to decide what to do. And those predictions are about things that are themselves utterly unpredictable. In addition, the terms of the models used for these predictions are less than mathematically precise, and liable to lead those deploying them to unwarrantedly strong conclusions.
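To make that objection concrete, here is a toy expected-value calculation. Every number in it is invented purely for illustration; the point is only that when the payoff is astronomically large and the probability is a guess, tiny shifts in the guess swing the "expected value" by orders of magnitude:

```python
# Toy illustration (all numbers invented) of how expected-value estimates
# about the far future hinge on probabilities nobody can actually measure.

def expected_value(prob_success, value_if_success, value_if_failure=0.0):
    """Standard expected value of a two-outcome gamble."""
    return prob_success * value_if_success + (1 - prob_success) * value_if_failure

# Suppose an intervention is claimed to secure 10^15 units of future value.
huge_payoff = 1e15

# Moving the guessed probability of success from one-in-a-billion to
# one-in-a-million -- both guesses far beyond anything we can verify --
# multiplies the intervention's "expected value" a thousandfold.
low = expected_value(1e-9, huge_payoff)
high = expected_value(1e-6, huge_payoff)
print(high / low)  # roughly 1000
```

Nothing in the arithmetic is wrong; the trouble is that the conclusion is entirely driven by the unverifiable probability estimate.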
In an appendix, MacAskill gives his model for assessing potential actions to influence the state of the future. The model uses three measures to evaluate potential future states of affairs:
Significance is the average value of that state of affairs over time.
Persistence is how long that state of affairs lasts.
Contingency is the proportion of that time that the world would not have been in this state of affairs anyway.
…Significance =def [Vs(p) − Vs(q)] / [Ts(p) − Ts(q)]
Persistence =def Ts(p)
Contingency =def [Ts(p) − Ts(q)] / Ts(p)
These three terms multiply together to give Vs(p) − Vs(q), or the total value contributed from being in a state of affairs s, given p rather than q. That is: significance × persistence × contingency = longterm value.
Because these multiply, we can intuitively compare different longterm effects: between two alternatives, if one is ten times as persistent as another, that will outweigh the alternative being eight times as significant. [2]
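The algebra of the model above can be sketched in a few lines. The input numbers below are hypothetical, invented only to show that the three terms multiply back to Vs(p) − Vs(q), as the text states:

```python
# Sketch of the significance / persistence / contingency model quoted above.
# The input numbers are invented; only the algebra follows MacAskill's appendix.

def longterm_value(V_p, V_q, T_p, T_q):
    """Given total values V and durations T of state s under p and q,
    return (significance, persistence, contingency, their product)."""
    significance = (V_p - V_q) / (T_p - T_q)  # average extra value per unit time
    persistence = T_p                         # how long the state lasts under p
    contingency = (T_p - T_q) / T_p           # share of that time not reached anyway
    return significance, persistence, contingency, significance * persistence * contingency

# Hypothetical example: state s lasts 1,000 years given p but only 200 given q,
# contributing total value 5,000 given p versus 600 given q.
sig, per, con, value = longterm_value(5000.0, 600.0, 1000.0, 200.0)
# sig = 5.5, per = 1000, con = 0.8; their product is (approximately) 4,400,
# recovering Vs(p) - Vs(q) as claimed.
assert abs(value - (5000.0 - 600.0)) < 1e-6
```

The multiplicative form is what licenses the comparison in the quote: a tenfold gain in persistence outweighs an eightfold gain in significance because the factors simply multiply.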
Prediction of the completely unpredictable, unfortunately, is implicitly at the very heart of this approach.
The assumption is that we can be sure that the action in question will have the effect “p” that MacAskill feeds into the equations. In addition, he assumes that he can predict the amount of time “T” for which the effect “p” will keep the world in state “s.” But the world is a complicated, infernally interdependent system. Changing one aspect of it will inevitably change many others, in unpredictable ways.
Another key assumption is that the value “V” of a certain state of affairs can be measured with at least some certainty. But there is no uncontested standard of value for future states of affairs. In fact, there is no uncontested standard for measuring the value of present states of affairs. Much of politics is a fight about exactly this: what is good, what is bad, and who decides?
Much of our economy is also based on disagreements as to the economic value of certain goods and services. Trade would presumably slump to terrible lows if people did not place different values on the same goods or services.
This gets to another objection to this approach: it is also obvious that different people like different things (and hate different things). Taste is individual. MacAskill makes a number of sweeping statements that presume that most human beings think as he does: that a life substantially less pleasant than the one he is living now (as a relatively affluent white male European) would be a life not worth living. A sample:
‘Imagine that you personally had the option of dying peacefully or a fifty-fifty chance of living in either eutopia, with the highest heights of flourishing, or anti-eutopia, with the deepest trenches of misery. I would certainly choose to die peacefully rather than to take the gamble, and I suspect that most people are the same.’[3]
But we don’t have to look far, either in the present world or the past, to find that MacAskill appears to be wrong about this. Some labor camps in North Korea approach MacAskill’s “anti-eutopia” pretty closely, if refugees from them can be trusted to be telling the truth about such things[4] as being raised from birth within the camps, being tortured, being pushed to inform on family members, and being forced to watch them be executed.
Yet the rate of suicide in such camps appears to be fairly low – as it was in the extermination camps of World War II. If the actual people living in “anti-eutopias,” who presumably could find the means to “make their quietus,” as Hamlet puts it, choose to remain alive, it appears that “most people” may not be “the same.”
…It is a shame that Sam Bankman-Fried, the CEO of the cryptocurrency exchange FTX, had, by 2022, become one of the most visible backers of the “longtermist” movement (as well as the single largest donor to Democratic party candidates and PACs). When FTX blew up and “SBF” was arrested, many people were quick to engage in schadenfreude about MacAskill, “Effective Altruism,” the movement that spawned “longtermism,” and “longtermism” in general. MacAskill immediately and loudly denounced the wrongdoing, and, as is his wont, was ready to “walk the talk” and take responsibility.
“Sam and FTX had a lot of good will — and some of that good will was the result of association with ideas I have spent my career promoting,” the philosopher William MacAskill, a founder of the effective altruism movement who has known Mr. Bankman-Fried since the FTX founder was an undergraduate at M.I.T., wrote on Twitter on Friday. “If that good will laundered fraud, I am ashamed.”[5]
Schadenfreude from a lot of cynics who certainly have not, like MacAskill, donated a huge share of their annual incomes to the most effective causes they could find for aiding humanity, is not a particularly attractive reaction. MacAskill is sincerely trying to help humanity. His work has caused many people to study neglected threats and opportunities. As I say, he is “on our team.” But now maybe he can try to incorporate some rigorous imagination into his process.
[1] Excerpt from William MacAskill, What We Owe the Future, © 2022 https://books.apple.com/us/book/what-we-owe-the-future/id1598860905
[2] Excerpt from William MacAskill, What We Owe the Future, © 2022 https://books.apple.com/us/book/what-we-owe-the-future/id1598860905
[3] Excerpt from William MacAskill, What We Owe the Future © 2022 https://books.apple.com/us/book/what-we-owe-the-future/id1598860905
[4] https://www.cbsnews.com/news/prominent-north-korean-defector-shin-dong-hyuks-story-questioned/
[5] https://www.nytimes.com/2022/11/13/business/ftx-effective-altruism.html
P.S. I overlooked the following obviously fallacious sentence – “And why did we all fervently want to believe that a flashy tech entrepreneur could change the world for the better?”
Whenever you read anything that purports to speak as “we all,” you are 100% guaranteed that it is trying to get away with something.