The New York Times’ Peter Coy is “a veteran business and economics columnist.” Today he sent out a newsletter headlined “Effective Altruism Is Flawed. But What’s the Alternative?” I have thoughts.
Coy writes,
- “You don’t hear so much about effective altruism now that one of its most famous exponents, Sam Bankman-Fried, was found guilty of stealing $8 billion from customers of his cryptocurrency exchange. …But if you read this newsletter, you might be the kind of person who can’t help but be intrigued by effective altruism. (I am!) Its stated goal is wonderfully rational in a way that appeals to the economist in each of us: To ‘help others as much as we can with the resources available to us’ based on three criteria: ‘importance, neglectedness and tractability.’ …[Alexander] Berger, the Open Philanthropy chief executive, said E.A. is based on ‘rigorous academic evidence,’ and that choosing how to give based on personal interest or experience ‘leaves way too much on the table’ — namely causes that are important but remote from donors’ lives. I get that. But I also see the logic of [MacKenzie] Scott and [Melinda French] Gates [two large donors to charitable endeavors]. I think the right approach to giving combines the rigor of an economist with the humility to realize that science will never provide all the answers to moral questions.”
Of course, if you’ve read anything I’ve written here, you may sense the problem: “rigorous academic evidence” is data about the past, and there is no data about the future. Especially in the realms in which Effective Altruists operate, which are dominated by what Yuval Noah Harari calls “intersubjective realities” (not physical objects or systems governed by the laws of physics, chemistry, or mechanics, but ineffable webs of meaning shared by large numbers of human minds), nothing can be predicted. Yet the entire basis of EA is prediction of the unpredictable! So I felt compelled to write the following response. It only scratches the surface of my many objections to the EA approach (though, as I say, at least these guys are thinking seriously about the future, so god bless).
I wrote,
- My business is helping organizations prepare for the future under conditions of fundamental uncertainty. I applaud the founding ethic of Effective Altruism, but everything in my 40+ years of work tells me that they hugely overestimate their ability to predict the effectiveness of their efforts. William MacAskill is a very sincere guy, but he lives by a set of complicated equations, each element of which is highly questionable. For example, he deems a life with less than a certain amount of “utility” not worth living (or saving); any life above that minimum level must be preserved. But he is defining “utility” for everyone else, though others have very different senses of what makes a life worth living (or not). Worse, he extrapolates these equations into the future to a preposterous degree, and the entire justification of his project relies upon his being able to predict fundamentally uncertain future outcomes – which is completely impossible. As I say, at least he is trying to take future humans seriously. But his pseudo-scientific, absurdly mathematical approach gives the appearance of rigor where none can ever exist. If we are to have a future as a species, we need to IMAGINE it, rigorously – to imagine how each of MacAskill’s equations (and each term in them) could be hugely wrong, and come up with a more realistic, less precise, more expansive “future space” of plausible human outcomes. I have a book forthcoming on all this – “Fatal Certainty.”
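The point about questionable factors compounding can be made concrete with a toy simulation. This is a sketch, not anyone’s actual model: I assume a hypothetical EA-style estimate formed by multiplying three factors (importance × neglectedness × tractability), and I assume, generously, that each factor is known only to within a factor of 10 of its point estimate. The product then spans several orders of magnitude, which is the sense in which the apparent precision of such equations evaporates.

```python
import random

random.seed(0)

def spread(samples):
    """Ratio between the 95th and 5th percentile of a list of samples."""
    s = sorted(samples)
    return s[int(0.95 * len(s))] / s[int(0.05 * len(s))]

def one_factor():
    # Hypothetical uncertain factor: log-uniform between 0.1x and 10x
    # of a point estimate of 1.0 (i.e., "known to within a factor of 10").
    return 10 ** random.uniform(-1, 1)

# Toy cost-effectiveness estimate: importance x neglectedness x tractability.
estimates = [one_factor() * one_factor() * one_factor() for _ in range(100_000)]

print(f"5th-95th percentile spread of the product: {spread(estimates):,.0f}x")
```

Even with each input constrained to a single order of magnitude of uncertainty, the product's plausible range runs to thousands-fold, before any extrapolation into the future is attempted at all.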
EA is yet another example of the seductiveness of the quantitative, especially to high-achieving minds that have been gigantically rewarded for seeing the world as perfectly measurable and predictable. Spoiler alert: It is neither. The Cult of Prediction strikes again.