FTX: Effective altruism can't run from its Frankenstein's monster
Sam Bankman-Fried embodies the hardcore ideals of EA
“‘Oh, Frankenstein, be not equitable to every other and trample upon me alone, to whom thy justice, and even thy clemency and affection, is most due. Remember that I am thy creature.’”—Frankenstein by Mary Shelley
The man once tipped to be the world’s “first trillionaire” is, in many ways, a creation. Sam Bankman-Fried, affectionately known as SBF, was until recently effective altruism’s biggest funder, to the tune of not just a promised 24 billion dollars, but all his future trillions. And then, as has been discussed nearly everywhere, the FTX cryptocurrency exchange, of which he was CEO, stopped allowing withdrawals, and all those billions and potential trillions went up in smoke, along with all his customers’ money. If in a decade barely anyone uses the term “effective altruism” anymore, it will be because of him. Imagine if Elizabeth Holmes had been the primary donor to the effective altruism (EA) movement, and that Theranos had been created specifically by the EA movement, and you can get a sense of what just happened.
It may at first seem overblown to say SBF was a Shelley-esque creation of EA, but FTX itself was essentially made in a lab in order to donate the most money to charity, to save the most lives. Here’s an excerpt from a Forbes profile:
[SBF] read deeply in utilitarian philosophy, finding himself especially attracted to effective altruism. . . An effective altruist looks to data to decide where and when to donate to a cause, basing the decision on impersonal goals like saving the most lives, or creating the most income, per dollar donated. One of the most important variables, obviously, is having a lot of money to give away to begin with. So Bankman-Fried shelved the notion of becoming a professor and got to work trying to amass a world-class fortune.
Even the personnel of FTX were a direct outgrowth of the movement. Here’s Nishad Singh, an inner-circle member of FTX:
This thing couldn’t have taken off without EA. All the employees, all the funding—everything was EA to start with.
And the philosophy of the company’s approach was also based in EA:
To be fully rational about maximizing his income on behalf of the poor. . . [SBF] felt he needed to take on a lot more risk in the hopes of becoming part of the global elite. . . To do the most good for the world, SBF needed to find a path on which he’d be a coin toss away from going totally bust.
Where can you always be a coin toss away from going totally bust? Cryptocurrency! So FTX being a crypto exchange was a logical choice that SBF made based on EA. Heck, they even said so on their ads.
Personally, I’ve always had a soft spot for EA, as they’ve brought a lot of attention to causes I think are important. Particularly AI safety. And, like everyone, I think charitable giving is, you know, good. So members of EA have my sympathy; the individuals who trusted SBF personally, along with the everyday rank and file, all got sucker punched last week. The fact that billions promised in charity have evaporated is a loss for the world.
But I’ve also harshly critiqued the movement, like in “Why I am not an effective altruist,” where I argued EA had a dangerous tendency to take utilitarianism too literally. And now it appears that some members of EA. . . took utilitarianism too literally. For an organization all about ethics, it’s especially problematic since FTX money reaches deep into the movement’s charities and non-profits, and that’s just from what’s known publicly (there was untraced FTX money flowing around, I know because I was once sent a small amount: more on this later).
Now, pretty much everyone in the broader EA movement who received FTX money knew next to nothing about the level of financial risk SBF was taking on. If you’re looking for clairvoyant knowledge of FTX’s impending collapse inside EA, you likely won’t find it.
The question is not: did anyone in EA (outside of the EA members of FTX) know about its shady dealings? The question is: was the FTX implosion a consequence of the moral philosophy of EA brought to its logical conclusion? This latter question is why some people are trying to create as much distance between SBF and EA as possible.
The first attempt at distancing FTX from EA comes via the narrative that SBF didn’t know “how to properly EA”: that he took a radical, non-mainstream interpretation of the movement, or was ignorant of key details, etc. The second, more recent narrative is that SBF is a sociopath who only used EA for cover. The first narrative is provably untrue. The second, a character judgement about hidden motives, is harder to falsify, but rests on shaky and conspiratorial foundations, like a screenshot of a couple of curt and highly ambiguous text messages.
So, first things first: did SBF know how to EA? For instance, perhaps if SBF had only known about “rule utilitarianism” this would have prevented FTX’s implosion—many people have responded like this on Twitter. (Rule utilitarianism being the idea that people should follow the set of rules that maximizes good, rather than judging each individual action independently.) Here’s a succinct version by Scott Alexander at Astral Codex Ten in reaction to the FTX revelation:
. . . in most normal situations following the rules is the way to go. This isn’t super-advanced esoteric stuff. This is just rule utilitarianism, which has been part of every discussion of utilitarianism since John Stuart Mill.
This is why, e.g., William MacAskill (who, according to The New York Times, originally pitched a college-aged SBF on “earning to give”) and other EA leaders are digging up statements they’ve previously made like “plausibly it’s wrong to do harm even when doing so will bring about the best outcome.”
The problem is that SBF has been crystal clear about his philosophical beliefs for years, if not decades, and knew all this stuff and had good arguments on his side. Here’s SBF on his blog, which is mostly about utilitarianism, way back in 2012, long predating any fame:
I am a utilitarian. Basically, this means that I believe the right action is the one that maximizes total "utility" in the world (you can think of that as total happiness minus total pain). Specifically, I am a total, act, hedonistic/one level (as opposed to high and low pleasure), classical (as opposed to negative) utilitarian; in short, I'm a Benthamite.
This jargon means that SBF occupies a well-defined position within utilitarian philosophy, which is that being moral means taking the action that maximizes the expectation of the pleasure/happiness (“hedonistic”) of the maximum number of people (“total”), and that one should calculate this based on each action (“act”), not rules. This also describes his approach to EA, as SBF interchangeably referred to “effective altruism” as “practical utilitarianism.”
And, guess what, SBF wrote an entire blog post about why he’s an “act” instead of a rule utilitarian! He gives the classic rejection of rule utilitarianism, which can also be found on Wikipedia:
It has been argued that rule utilitarianism collapses into act utilitarianism, because for any given rule, in the case where breaking the rule produces more utility, the rule can be refined by the addition of a sub-rule that handles cases like the exception.
So if someone brought it up to SBF, he would probably immediately reply: “Wouldn’t rule utilitarianism simply mean advocating for making what I did legal, assuming it was in the service of good?” He’d have such a whip-quick answer because he was familiar with all these debates (and btw, we’re still not even sure about the level of legality here).
The harsh reality is that SBF was an MIT student with two Stanford professors for parents, and over years he carefully considered different versions of utilitarianism, carefully considered different ways to weight his utility calculations, and these careful considerations led him to FTX. His ideas about utilitarianism were within the mainstream. Exactly like William MacAskill, who teaches a course at Oxford on utilitarianism, who has said he accepts some of the repugnant moral conclusions of utilitarianism, and who co-wrote the website utilitarianism.net. And while MacAskill is more ambivalent than most (he even has a book admirably called Moral Uncertainty), some of the founders of EA have explicitly advocated for the most extreme utilitarian stances—in a way, counterintuitive ethical examples are the big selling point of academic moral philosophy. Is it really foundational to EA that simple deontological principles, like not lying, or simply not risking user funds on shady crypto coins, completely trump utilitarian considerations? For anyone familiar with the tone and content of the movement’s literature and discussions, this strains credulity.
Also within the mainstream was SBF’s idea of maximizing the expected value of his giving via risky business ventures. Here’s him with EA leader Rob Wiblin from 80,000 Hours:
Sam Bankman-Fried: . . . Even if we [FTX] were probably going to fail, in expectation, I think it was actually still quite good.
Rob Wiblin: Yeah.
Or, in plenty of cases, it was the EA leaders who laid out reasoning about risk taking, and SBF was the one nodding.
Rob Wiblin: But when it comes to doing good. . . you kind of want to just be risk neutral. As an individual, to make a bet where it’s like, “I’m going to gamble my $10 billion and either get $20 billion or $0, with equal probability” would be madness. But from an altruistic point of view, it’s not so crazy. Maybe that’s an even bet, but you should be much more open to making radical gambles like that.
Sam Bankman-Fried: Completely agree.
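To make the arithmetic of that exchange concrete, here is a toy sketch (my illustration, not anything FTX actually computed) valuing Wiblin’s hypothetical 50/50 “$20 billion or $0” bet two ways: risk-neutrally, as the altruistic framing recommends, and with log utility, a standard textbook model of individual risk aversion.

```python
import math

def expected_value(outcomes):
    """Probability-weighted average of (probability, payoff) pairs."""
    return sum(p * x for p, x in outcomes)

def expected_log_utility(outcomes):
    """Expected log utility; a total loss (payoff 0) counts as -infinity."""
    return sum(p * (math.log(x) if x > 0 else float("-inf"))
               for p, x in outcomes)

bankroll = 10e9                     # keep the $10 billion you have
bet = [(0.5, 20e9), (0.5, 0.0)]     # or gamble: $20B or $0, equal odds

# Risk-neutral valuation: the bet is exactly even with the bankroll,
# so a risk-neutral donor is indifferent (and slightly favors any edge).
print(expected_value(bet))          # 10000000000.0

# Log-utility valuation: the ruin branch dominates, so no finite upside
# justifies the gamble -- the "madness" Wiblin describes for individuals.
print(expected_log_utility(bet))    # -inf
```

The gap between those two numbers is the whole dispute in miniature: a coin-toss wipeout is neutral “in expectation” but catastrophic under any valuation that treats going bust as qualitatively worse than a smaller gain.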
In conclusion: none of SBF’s beliefs seem particularly unusual by EA standards, except that he took these principles to such literal extremes in his own life.
Let’s say you’re walking next to a shallow pond, and in it is a drowning child. You happen to be wearing an expensive suit you borrowed. Do you wade into the pond to rescue the child, even though it risks ruining a suit that isn’t yours?
After proposing this to you, EA’s monster lays his discolored hand on your shoulder. “Okay,” he says in a gravelly voice made of different throats, “Now scale that up and don’t ever ever stop.”
Just like in the moral philosophy thought experiments that EA is based on, SBF dove into the pond. SBF pulled the lever in the trolley problem. SBF became the serial killer surgeon, willing to butcher the few to save the many.
SBF was a very effective altruist. At least, in expectation.
What about the second narrative that attempts to distance SBF from EA? Maybe SBF was a sociopath who never really believed?
Occam’s razor is, as razors are built to be, unkind to this. For, despite the rumors swirling on Twitter, there’s no good evidence that SBF was using EA as a front.