For the normally slow online dog days of August, when views and shares take a long-needed vacation to the beach, it’s been a busy week. The reason was my recent post “Why I am not an effective altruist,” a breakdown of the dual nature of effective altruism (EA): good in practice, but based on bad philosophical reasoning. It provoked a number of responses.
The piece also coincided with a burst of EA into public consciousness, triggered by the release last week of the latest book from William MacAskill, cofounder of the Centre for Effective Altruism. What We Owe the Future continues to be covered everywhere from The Atlantic to NPR to The Guardian. The book is very much a direct product of the EA movement. After all, not many books have three “chiefs of staff” leading the team that worked on them, nor two different “executive assistants,” nor four “research fellows,” nor an additional host of “regular advisors,” all of whom MacAskill himself credits as “coauthors.” The book’s publicity has been just as well-crafted as its text, so much so that it’s hard to name a major media outlet that hasn’t written about it (there have been a few negative pieces as well, although these are in the minority). MacAskill’s omnipresent podcast tour was even the subject of jokes on Twitter.
The goal of MacAskill’s What We Owe the Future is to introduce “longtermism” to the public, and the book also represents a pivot within EA away from traditional philanthropy (albeit philanthropy with a moneyball style) toward more ambitious subjects. Longtermism means caring about our future descendants, sometimes thousands of years out. It concerns issues like leaving the Earth a good place, preventing AI from killing everyone, pro-natalism, charter cities, asteroid detection, ending factory farming to prevent animal suffering, preventing economic stagnation, and making sure we don’t accidentally lock in terrible practices for long periods of time. All of these causes are intuitively appealing, and can be justified by many other means. This is not to say taking them as a package deal is obvious: to MacAskill’s and other longtermists’ credit, it is indeed uncommon to advocate for the future of civilization so explicitly, and the book does so ably and aptly. However, the specific causes that make up the longtermist movement are highly degenerate (in the biological sense of the word), that is, they are justifiable in many different ways (see, e.g., Matthew Yglesias’ recent piece on how little utilitarian calculus is necessary to care about AI safety).
In some ways, the book represents exactly what I wrote I hoped would occur in the EA movement (although I hadn’t read it at the time). After all, the conclusion of “Why I am not an effective altruist” was that the movement should dilute the repugnancy of utilitarianism out of EA by pivoting to things like longtermism, so long as such pivots are toward broader and vaguer goals like “help humanity survive.” I described the original repugnancy that I want to see diluted as:
The term “repugnant conclusion” was originally coined by Derek Parfit in his book Reasons and Persons, discussing how the end state of this sort of utilitarian reasoning is to prefer worlds where all available land is turned into places worse than the worst slums of Bangladesh, making life bad to the degree that it’s only barely worth living, but there are just so many people that when you plug it into the algorithm this repugnant scenario comes out as preferable. . .
And here is where I think most devoted utilitarians, or even those merely sympathetic to the philosophy, go wrong. What happens is that they think Parfit’s repugnant conclusion (often referred to as the repugnant conclusion) is some super-specific academic thought experiment from so-called “population ethics” that only happens at extremes. It’s not. It’s just one very clear example of how utilitarianism is constantly forced into violating obvious moral principles (like not murdering random people for their organs) by detailing the “end state” of a world governed under strict utilitarianism. . . Utilitarianism actually leads to repugnant conclusions everywhere, and you can find repugnancy in even the smallest drop.
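To make the arithmetic that generates Parfit’s conclusion concrete, here is a toy sketch in code (the populations and welfare numbers are invented purely for illustration): under total utilitarianism the value of a world is just population times average welfare, so a sufficiently enormous population of lives barely worth living always outscores a smaller, flourishing one.

```python
# Toy illustration of the repugnant conclusion under total utilitarianism.
# All numbers are invented for illustration; only the comparison matters.

def total_utility(population: int, avg_welfare: float) -> float:
    """Total utilitarian value of a world: population times average welfare."""
    return population * avg_welfare

# World A: ten billion flourishing lives.
world_a = total_utility(population=10_000_000_000, avg_welfare=95.0)

# World Z: a hundred trillion lives only barely worth living.
world_z = total_utility(population=100_000_000_000_000, avg_welfare=0.01)

print(world_z > world_a)  # True -- the "repugnant" world Z comes out as preferable
```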
MacAskill’s book shows off, at least in a couple of places, why this dilution is necessary. Here are some excerpts from What We Owe the Future discussing the different possible views on “population ethics,” the part of the book that is supposed to function as the academic, technical justification of longtermism:
There is still deep disagreement within philosophy about what the right view of population ethics is. . . Indeed, I don’t think that there’s any view in population ethics that anyone should be extremely confident in.
If you want to reject the Repugnant Conclusion, therefore, then you’ve got to reject one of the premises that this argument was based on. But each of these premises seem incontrovertible. We are left with a paradox. One option is to simply accept the Repugnant Conclusion. . . This is the view that I incline towards. Many other philosophers believe that we should reject one of the other premises instead.
Like all views in population ethics, the critical level view has some very unappealing downsides.
MacAskill’s personal position is basically to throw up his hands, declare that none of the solutions to the problems with utilitarianism look very good, and suggest we just compromise between various repugnant theories of how to deal with populations, hoping that whatever compromise we strike isn’t that bad.
To be honest, the impression the reader comes away with is that the underlying justifications are. . . a bit of a mess. In my view, the problem comes down to how utilitarianism conceptualizes good and evil: as big mounds of dirt. As a mound-o’-dirt, moral value is totally fungible. It can always be traded and arbitraged, and more is always better.
Such a view inherently leads to ridiculous comparisons. E.g., the big mound of evil dirt that is the holocaust has some certain volume, and, if we waited long enough in history and counted up the stubbed toes, you’d one day have another mound of evil dirt with the same volume. Then you could trade one for the other. And so some number of toe stubbings adds up to a holocaust, which looks more like a reductio ad absurdum than anything else.
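And because everything sits on one scale, the toe-stubbing arithmetic is trivial to write down. A throwaway sketch (the disutility numbers are invented placeholders) of how the mound-o’-dirt view prices one harm in units of another:

```python
# Toy sketch of the fungibility complaint: once all harms share one scale, an
# enormous harm can be matched by a big enough pile of trivial ones.
# Both disutility values are invented placeholders.

STUBBED_TOE = 0.001   # disutility of one stubbed toe
ATROCITY = 10.0**9    # disutility of one enormous atrocity, on the same scale

toes_needed = ATROCITY / STUBBED_TOE
print(f"{toes_needed:,.0f} stubbed toes")  # 1,000,000,000,000 -- at which point
                                           # the calculus treats the two as tradeable
```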
One might, quite fairly, ask whether this morality-as-a-mound-o’-dirt view (and the inevitable repugnancies that follow) actually has any real impact on MacAskill’s thinking. It definitely does. Just in a few places, but enough to showcase my original point about the off-putting nature of utilitarianism. E.g., in discussing whether AI might end human civilization, MacAskill has this tepid “it might be bad, who knows” take on the genocide/takeover of the human race:
. . . from a moral perspective AI takeover looks very different from other extinction risks. If humanity were to go extinct from a pandemic, for example, and no other species were to evolve to build civilization in our place, then civilization would end, and life on earth would end when the expanding sun renders our planet uninhabitable. In contrast, in the AI takeover scenarios that have been discussed, the AI agents would continue civilization, potentially for billions of years to come. It’s an open question how good or bad such a civilization would be . . . even if a superintelligent AGI were to kill us all, civilization would not come to an end. Rather, society would continue in digital form, guided by the AGI’s values.
MacAskill feels compelled to explain why AI takeover/human extinction is actually bad, and literally the only explanation he gives in this section devoted to AI risk is: AGI might lock in bad moral values for long periods of time. That’s it. That’s his big longtermist reason why we shouldn’t let AIs kill our descendants. And that’s because, from a utilitarian perspective, human extinction and replacement with AIs might not be a bad thing in the long run. Oh, a genocide here, a genocide there, sure, it’s bad, but what if the AIs are capable of converting the entire solar system into hedonium that maximizes the happiness of the ten googol-gazillion AIs in an otherwise lifeless solar system? How does the good not outweigh the bad there, and aren’t we supposed to maximize the good while being as neutral as possible? E.g., AIs can edit their own source code and duplicate themselves by clicking copy/paste—think of how many there could be when all the atoms on Earth are devoted to computation, and how pleasurable their conscious experiences could be. Think of all the dirt! An astronomical mound of good dirt!
As another example, consider MacAskill’s take on wild animals, which he argues suffer inordinately (e.g., many die by being eaten, often quite young). Because of his utilitarian ethics, he then proposes that:
It is very natural and intuitive to think of humans’ impact on wild animal life as a great moral loss. But if we assess the lives of wild animals as being worse than nothing on average, which I think is plausible (though uncertain), then we arrive at the dizzying conclusion that from the perspective of the wild animals themselves, the enormous growth and expansions of Homo sapiens has been a good thing.
To be clear, MacAskill is saying that wild animals have negative moral value, as in they contribute more to the “evil dirt” than to the “good dirt,” and so, now that there are fewer of them due to humans paving over rabbit warrens to make parking lots, that’s a good thing, since there’s less total “evil dirt” in the universe. This claim, that the universe is more moral the more wild animals are displaced and killed, is something I don’t think most people will get on board with. And it is based directly on the mound-o’-dirt view, failing to consider, as I wrote, that:
there are qualitative, not just quantitative, differences between various positive and negative things, which ensures there can never be a simple universal formula. And the vast majority of people can see these qualitative differences, even if they rarely articulate them. E.g., hiccups are such a minor inconvenience that you cannot justify feeding a little girl to sharks to prevent them, no matter how many hiccups we’re talking about. . .
MacAskill’s error in lumping wild animals into the “evil dirt” category is due to misunderstanding a qualitative aspect of morality,1 an old verity which goes back to Plato: it is a good when creatures enact their nature. A polar bear should, in accordance with its platonic form, act true to its nature, and be the best polar bear it can possibly be, even if that means eating other animals. In turn, humans preventing it from doing that is immoral, an interference with the natural order. This is why the vast majority of people agree that polar bears dying out is bad, even if they occasionally eat seals, which causes suffering. An understanding the poet Dean Young expressed as:
even as the last polar bear sat
on his shrinking berg thinking,
I have been vicious but my soul is pure.
But this enactment-of-platonic-essence is the sort of ancient truth that this new moral calculus has no time for.
One of the things I noticed after the initial piece is that people seem to have wildly different ideas of just how important utilitarianism is to EA. This is made more difficult because sometimes the leaders of the movement will say “I’m not a utilitarian!”—indeed, it almost seems finding a true utilitarian is as hard as finding a true Scotsman. William MacAskill is himself an example of this, claiming not to be a utilitarian on Twitter (although at least he admits to being a Scotsman), yet most of the arguments he gives for longtermism in his latest book are direct appeals to mound-o’-dirt-style utilitarianism. What I suspect is that when EA leaders say “I am not a utilitarian” it really means they don’t believe in some highly specific version of utilitarianism (“total utilitarianism,” etc.), or they've abandoned the most extreme undiluted version of it by adding in a bit of water. Alternatively, some simply drink the poison and declare that utilitarianism is tautologously correct.2
But is dilution really necessary? Are any of the repugnancies of utilitarianism actually represented in the movement? We’ve already seen some in What We Owe the Future, but in addition, here’s a collection of recent tweets from different people who self-describe as effective altruists. I did not go looking for these; they are literally just tweets I stumbled across in the past couple of weeks while thinking about these issues. All of them are versions of the repugnancies I mention in “Why I am not an effective altruist.”
The groups facing the greatest oppression in today’s society are non-human animals, future generations, and non-offending pedophiles.
It’s not morally wrong for someone to have sex with their dead dog, provided no one knows about it and the body is put back. The disgust you feel at this is not an argument against it.
Structural violence is necessary to slow-down AI progression, and we should consider putting assassination bounties on top AI researchers.
I’m not going to source these, since I would never single out individuals for online opprobrium. Even in the public square of Twitter, quotes like these are meant for people’s specific circles, so please do not go searching for them; additionally, all have had words randomly changed by me, or are paraphrases, to make them difficult to find. But they are all great examples of what I called the “poison” that’s off-putting about the EA movement and that I suggest diluting out as much as possible. And yes, other moral philosophies have repugnancies as well; I just don’t think they lead to them as inexorably or naturally.3
All to say: while in practice the EA movement gets up to a lot of good and generally promotes good causes, its leaders should stop flirting with sophomoric literalisms like “If human civilization were destroyed but replaced by AIs there would be more of them, so the human genocide would be a bad thing only if the wrong values got locked in.”
A brief coda. For perhaps there is a simpler reason why I am not an effective altruist. In its current incarnation in America, EA is a philosophy both associated with, and in some ways metaphysically entwined with, the state of California and the West Coast of the United States. Which has always been utopian, in a way. It’s literally in the air, which, when I visited last month, didn’t feel like air at all. I couldn’t even tell I was outside, so innocuous was the temperature. In such beautiful country, and such perfect weather, and amid the blazing technological, financial, and cultural ascension of Silicon Valley, thinkers ensconced in that new world surely must feel a utopia is possible—and if utopia is possible, there must be some calculus to get us there, and utilitarianism is that calculus.
In contrast, I grew up on the East Coast. Lovecraft country. It snows here, and the buildings creak. It is not necessarily dour—I live on Cape Cod and spend many summer evenings reading on the beach. But the East Coast is more weathered, literally, and also much more connected, in everything from its culture to its Ivies to its food, to the old-world European traditions just on the other side of the sea. And these old-world ideas and culture have often been skeptical of the possibility of utopia. Even our religions had a certain pessimism—we are the descendants of people who forbade mirrors. I personally find the utilitarian core of EA to be severed from the accumulated wisdom of our culture and its morality, which evolved over millennia, from Hammurabi’s code to Plato to Christianity to French existentialist philosophy, a long and often contradictory moral tradition that cannot be summarized by an equation.
So maybe this is just an unavoidable affectation of my origin. Perhaps, much like wine, minds have a certain terroir from their environment, an irreducible flavor or taste. And mine just doesn’t pair well with utopian calculus.
The most common objection to the original piece was to worry that there is some problem with qualitative differences, like: “If there’s no equation telling us it’s so, how can we even make comparisons? If they are qualitatively different, how can we be sure that a holocaust is worse than a stubbed toe?!” Yet an inability to compare is not necessitated by qualitative differences. It’s totally possible to say that the holocaust is worse than a stubbed toe while still maintaining that no number of stubbed toes adds up to a holocaust. (I’m bringing up holocausts or genocides as reflexively as moral philosophers do because it’s a really easy way to say Very Bad Thing.) It’s simply that there’s no math to be done in the stubbed toes vs. holocaust comparison; it is a qualitative difference. And it is prima facie coherent that people can make both qualitative and (in cases where one can) quantitative judgements about some things being worse than others.
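For anyone who wants a formal structure behind that claim (this is just an illustrative sketch, not something the original piece commits to), a lexicographic ordering has exactly this property: it compares by qualitative category first and only counts within a category, so every pair of harms is comparable, yet no count of the lesser category ever overtakes the greater one.

```python
# Sketch of a lexicographic ("qualitative category first") comparison of harms.
# The category ranks are illustrative assumptions, not a worked-out moral theory.

CATEGORY_RANK = {"stubbed_toe": 0, "genocide": 1}  # higher rank = qualitatively worse

def harm_key(kind: str, count: int) -> tuple[int, int]:
    """Sort key: qualitative category dominates; count only breaks ties within it."""
    return (CATEGORY_RANK[kind], count)

# One genocide is worse than any number of stubbed toes...
print(harm_key("genocide", 1) > harm_key("stubbed_toe", 10**100))  # True
# ...while stubbed toes remain perfectly comparable among themselves.
print(harm_key("stubbed_toe", 5) > harm_key("stubbed_toe", 2))     # True
```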
Sometimes utilitarians will tautologously argue: “Whenever you think outcome X of utilitarianism is bad, I come up with some n-th-order effect, which, once considered by an all-knowing being (me), means this bad outcome isn’t actually recommended by utilitarianism.”
It’s just a game of adding outcomes or assumptions to the thought experiment that benefit the utilitarian. E.g., consider this reply to my original piece on the EA Forum (some of my replies to the replies are abridged and updated from there):
The world with rogue surgeons hunting people on the streets is a world where billions of people would be outraged and terrified. That all should go into the equation. It is not a world that utilitarianism would recommend.
There are three obvious points this misses: (a) saying that repugnant acts, like helping a serial killer surgeon, are stymied because “billions of people would be outraged and terrified” is a way-too-strong claim, since a handful of secret surgeries would cause no panic at all. It’s also irrelevant, because (b) the repugnancy is found in the fact that the philosophy recommends supporting serial killer surgeons if you can get away with it. Finally, (c) one can always add n-th-order imaginary background conditions that bring it back to the original conclusion of repugnancy, like how there is a clear midpoint of possibility where a little bit of butchering people in alleys is okay, just not enough to cause global panic, etc., and a good utilitarian should always take steps to find that butchery/panic midpoint in order to get away with as much butchering as possible.
The last common reply is that one can find repugnant conclusions in all moral systems. I’m not sure how one would prove this, but regardless, there can still be greater and lesser degrees of repugnance, as well as differences in how easily it is arrived at. E.g., the original repugnant conclusion is basically the state of the world should utilitarianism be taken literally—a bunch of slums stuffed with lives barely worth living. This is because utilitarianism is based on treating morality like a market and performing arbitrage. Other moral theories, which aren’t based on big mounds-o’-dirt, like perhaps rights or duties or values or, heck, platonic essences (just to throw out examples), don’t implicitly suggest maximization, because there is no fungible “dirt” encouraging you to arbitrage. Therefore, they don’t lead as inexorably to repugnant conclusions. Additionally, if one honestly believes the relativism of “all moral systems are equally bad,” why not be a nihilist, or a pessimist, or think that “morality” is just a word that hairless apes use to mean “stuff that makes hairless ape tribe grow strong, beat other tribes!”
The fundamental error in utilitarianism, and in EA it seems from your description of it, is that it conflates suffering with evil. Suffering is not evil. Suffering is an inherent feature of life. Suffering is information. Without suffering we would all die very quickly, probably by forgetting to eat.
Causing suffering is evil, because it is cruelty.
Ignoring preventable suffering is evil because it is indifference.
But setting yourself up to run the world and dictate how everyone else should live because you believe that you have the calculus to mathematically minimize suffering is also evil because it is tyranny.
Holocausts are evil because they are cruel. Stubbed toes are not evil because they are information. (Put your shoes on!)
The diluted form of utilitarianism that makes the most sense to me, and which does still feel compatible with the general EA ethos, is one in which you don’t feel constrained by the results of the utilitarian calculus, but you should actually make the effort to do the math before deciding.
For example, before choosing where to give money to charity, I think it’s very much worthwhile to try and do some kind of calculations to compare them. This forces you to actually consider all the factors and identify your unknowns. But your decision should still be based, in the end, on what seems like the best choice, even if one has a higher expected value on paper.
This isn’t a formal philosophical way of looking at things, but it seems like it avoids the failure modes where you don’t do the math and end up donating to local causes that don’t need as much help as international ones AND the failure modes where you would trade massive numbers of stubbed toes for the Holocaust.
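For what it’s worth, the do-the-math-first step is easy to make concrete. Here is a minimal sketch with entirely invented numbers (the charities, costs, and probabilities are all hypothetical placeholders): compute a rough expected cost-effectiveness for each option, then treat the result as one input into the decision rather than as a verdict.

```python
# A rough back-of-the-envelope comparison of two hypothetical charities.
# Every number here is invented; the point is the exercise, not the answer.

def expected_outcomes_per_dollar(cost_per_attempt: float,
                                 success_probability: float,
                                 outcomes_per_success: float) -> float:
    """Expected number of good outcomes bought per dollar donated."""
    return (success_probability * outcomes_per_success) / cost_per_attempt

# Charity A: cheap interventions that usually work.
charity_a = expected_outcomes_per_dollar(5.0, 0.8, 1.0)
# Charity B: expensive, speculative projects with a big payoff if they succeed.
charity_b = expected_outcomes_per_dollar(200.0, 0.05, 50.0)

print(charity_a, charity_b)  # 0.16 vs 0.0125 -- a data point, not the final word
```

Mostly what the exercise buys you is seeing which invented numbers drive the answer, which is another way of identifying your unknowns.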