157 Comments

Seems to me that the standard approach to criticizing any philosophy - personal, political, economic,... - is to use exceptions to negate the rule (except for when your own philosophy is in the firing line).

The hubristic notion that humans can develop (or is it identify?) a rule that can cover all situations is the source of so much animosity AND idiocy. Humans are flawed and limited, so anything that results from their efforts will be flawed and limited.

Why can't we just accept that and "pursue perfection" instead of "demand perfection"? Perfection can never be attained, but the pursuit of it is a worthwhile activity.


This particular article does not use exceptions to negate the rule. Rather, it shows by the inexorable outcome of reason that the philosophy is flawed and limited.

I think that the concept of perfection is a delusion. All philosophical systems are incomplete, and thus incapable of being perfected, in that they require at least one prior which cannot be proved by the system itself - I suspect we agree on that.

So I believe that even pursuing perfection - again referencing the realm of ideas - is as dangerous as demanding it is.


But as a friendly amendment, I suggest that pursuing “excellence” might work for both of us.


That may be a better word to use for most people.

I would conjecture that the best of the best of the best - people like Michael Jordan and Steve Jobs - would still pursue perfection even though they know it is unattainable. It's a strange juxtaposition.


A segment of the “best of the best,” perhaps. The monomaniacal segment...


So, uh, by its very definition "pursuing perfection" demands that we criticise philosophies, so that we can get as close to perfection as possible. This includes EA.


This is my favorite piece of your writing to date. I share your perception of utilitarianism, and I thought this was a great, timeless meta-summary of the ways it goes crazy but also why EA is good in the short term. Much of my conflict with EA comes from its self-professed criteria that, for a problem to be suitable for EA, it must be important, neglected, and tractable. Neglectedness seems like a cop-out from contributing to popular things that are still incredibly important, and tractability is so subjective that it could justify whatever you want. (Ex: AI safety doesn't seem tractable because we don't actually know if or how to generate machine consciousness, but math PhDs swear it's tractable, and thus we have Yud's AI death cult. Meanwhile, permitting reform seems really important, but math PhDs say "not tractable" because you'd have to build a political coalition stronger than special interests, and if it's not solvable with a blog post or LaTeX, it's impossible.)
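A minimal sketch of the importance/neglectedness/tractability (ITN) scoring the comment above objects to, assuming the standard multiplicative form; the cause names and scores here are invented for illustration, not real EA estimates:

```python
# Toy ITN (importance, tractability, neglectedness) scorer.
# All scores are made-up 1-10 guesses, purely to show the mechanics.
causes = {
    # name: (importance, tractability, neglectedness)
    "malaria prevention": (8, 9, 6),
    "AI safety":          (9, 3, 8),
    "permitting reform":  (7, 2, 4),
}

def itn_score(importance, tractability, neglectedness):
    # The framework multiplies the three factors, so a low subjective
    # tractability estimate can veto an otherwise important cause.
    return importance * tractability * neglectedness

for name, (i, t, n) in sorted(causes.items(), key=lambda kv: -itn_score(*kv[1])):
    print(f"{name:20s} -> {itn_score(i, t, n)}")
```

The multiplicative structure is exactly the commenter's complaint: whoever supplies the tractability number effectively decides the ranking.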


I think the neglectedness criterion is kind of a proxy that was more important when the movement was very young. When EA is 1,000-5,000 people engaging in weird philosophy and talking on internet blogs, focusing on individual marginal impact makes a lot of sense. When you're a movement moving around millions of dollars, this makes less sense, and I get the feeling "neglectedness" is a metric that's dropped off in importance lately.


I'm neither an effective altruist nor a utilitarian, but contra your impression that consideration of neglect seems like a cop-out, I found it helpful for realizing that helping others isn't about me but about the recipients -- when I look back at the times I've been offered help, I've always been glad to divert that help to someone who needed it more but was neglected, and I was often resentful when I was the one who needed it more but was neglected. This reasoning only works for straightforward kinds of altruism, though; many effective altruists seem to tie themselves up in knots arguing for things like longtermism.

Side note: AI safety has nothing to do with machine consciousness, see eg https://bostonglobalforum.org/news/list-of-concrete-problems-in-ai-safety/


I'm sorry to be so negative, but this is a 19th-century criticism of utilitarianism. What is lampooned in this article is "act utilitarianism", which no one argues for. The modern approach is "rule utilitarianism", which is a more workable model that conforms to human moral instinct.

I think there are a lot of things wrong with EA, but this article completely strawmans utilitarian philosophy.

https://en.wikipedia.org/wiki/Rule_utilitarianism


The EA member I've had the most extensive conversations with about his philosophy and goals told me that it was his goal to have political power to do good with. And in the process of getting there, if he was in a position of less power and required to give the order to drop bombs on innocent people in order to get into a position of greater power, that would be justified for the end goal of attaining power to do good with.

Call it a strawman if you want, but it's not a strawman if people are using it.


The flat rejection and denial of qualitative properties is the hallmark of modern moral philosophies. Rawls is as guilty of this as any utilitarian or consequentialist.

Without qualitative properties, the issue of "morality" turns into a tidy package of easily quantifiable goods and goals which are, of course, open to the rationalizations and calculations preferred by the so-called "rationalists".

The only thing they lose in the process is any real sense of goodness and its opposite. There are no intrinsically good/better goals or ends. Every goal and act can be assessed with the same flattened amoral logic. Which, when you think about it, is a damn strange way to get to a *moral* philosophy.

When an effective altruist uses the word "good" and relative evaluative concepts like "better", understand that they use them as placeholder terms which are meaningless. They have no referents.

What is a "good state of affairs"? A "good event"? A "good outcome"? Any answer that isn't uselessly vague has to get into precisely which things are good and for what reason. (That way of reasoning hasn't gone very far since G.E. Moore.)

So the EA folks *stipulate* that this or that state of affairs is good or better or whatever. But that's all sleight of hand to change the subject away from morality while hoping nobody notices.

Behind that evaluation is a bare description of the facts. And if I happen not to care about the facts they care about, what of it? It rapidly decays into the usual relativist hash, arguing over "which good and whose good."

It's an empty, bankrupt form of ethics that pleases the ruling classes and justifies their projects of managing society at scale. Otherwise I give them little attention.

BTW - as I never tire of mentioning, Philippa Foot first introduced the Trolley Problem in her paper "The Problem of Abortion and the Doctrine of the Double Effect" [edit: mentioned the wrong paper] as an example to show how our moral intuitions differ according to what is done deliberately, or what is allowed to happen, even if the outcomes are identical.

Her point was not that one ought to do this or that in the trolley case. She's illustrating that the intention and motives of the moral agent are highly relevant to how we judge the moral worth of actions, and that the consequentialist's logic cannot get to grips with this.

Effective altruists screech about "good!" and "better!" without having the first glimpse of these problems. This is why they are awful.


Quite persuasive but it seems ‘Why I Am Not a Utilitarian’ would have been a more appropriate title.


Wow. That's what is called a hit-piece, I guess?! As for me: I am smashed. - And you should get that prize! - For crazy personal reasons, it had the funny side-effect for me of working as an apologia for being a Christian: it is all right, actually, just keep it LOCAL! - Which is exactly what Jesus said, lol: "Thou shalt love thy neighbour as thyself" (Matthew 22:39). Thy *neighbour*. That could be paying for the medical treatment of a baby in Luzon (if you happen to be there, or are married to his mother's sister). Or adopting a stray dog in Turkey (source: "Saving Lucy" https://www.amazon.com/Saving-Lucy-girl-bike-street/dp/193771585X - warning: strong stuff!), if she crosses your path and touches your soul.

It may be Newtonian morality/theology, but at that level it should work well enough. There is no Einstein of Morality, as yet - or is there?

Properly "diluted poison" might not even be a bad thing: Think broccoli. Or Paracelsus: “All things are poison and nothing is without poison; only the dose makes a thing not a poison.” - A faith that wants to cover all - yep, that stinks of poison. Think of Sharia, think Popes, think Maos. But see Mark 12:17:

Then Jesus said to them, “Give to Caesar the things that are Caesar’s, and give to God the things that are God’s.” The men were amazed at what Jesus said.

Jordan Peterson commented: "There you are: Secularism two-thousand years ago. In two short sentences. A miracle."

p.s.: Here is hoping E. Yudkowsky will not run into trouble being called a "pro-lifer".


The idea of Christianity as incorporating (pun intended) a sort of Newtonian version of morality is very interesting. It fits in with the idea that Christian morality is not based on principle (the Golden Rule is not a moral principle, but a rule of action) but on a Man. (The principle there also involving a rule of action: WWJD.)

One might regard the Trinity as recognizing the Son's operating at the level of human existence, while the Father operates at the quantum level. Not sure how the Holy Spirit would fit into that analogy, since I am not a biolo....uh...physicist. "Spooky action at a distance", somehow? (Not a theologian, either, which should be obvious.)


As my prof said: "The Holy Spirit is simply - the spirit of God." I found that actually helpful. And I consider the Spirit to be operative on the human level, on our spirit. Spiritually. Or in Greek terms: psychologically. Jesus has risen, he sits at the right hand of the Father, I hear - so we cannot hope for more than the Spirit (I do not hope for a second coming; John does not sound that enticing). - Mark 3:28-30: "Truly I tell you, people will be forgiven for their sins and whatever blasphemies they utter; but whoever blasphemes against the Holy Spirit can never have forgiveness, but is guilty of eternal sin" (for they had said, "He has an unclean spirit"). - I wish the right Bukowski poems were as easy to locate. In one he comments on the Holy Spirit - as rather elusive, I recall.


I find your comment very interesting, where you talk about "neighbour" implying a notion of distance.

I would like to illustrate something here. When I was around 10, I was gifted a brand-new digital watch (a Casio). I was proud holding this little piece of technology and showing everyone how cool I was. Until I met an old guy who literally spoiled all the fun and cool I had: "Your watch is useless, and less efficient than my old needle watch." As I was perplexed by this rude comment, he continued to explain: "When you read your watch, you really need to make an effort to know the current time, while when I read my watch, it takes almost no effort to know approximately where in the day I am."

This stuck in my mind, enough that 35 years later I totally understand what he was saying, and how, intuitively, he was right.

When I was young, in school we spent a lot of time studying timescales. I mean, we drew a line with an orientation and were asked to order events. This geometrical work was easy, and the feeling that dinosaurs existed long before Jesus Christ was very intuitive: ordering and sorting elements, and spatially placing them to represent this order. So if you had one event on the left and one event on the right, you didn't need to know their exact dates and compute the difference to know that one occurred before the other.

But for some random ideological progressive reason, this kind of teaching has totally disappeared from French schools. And today you face young people who don't know whether dinosaurs existed before humans landed on the moon. They are unable to sort and order. (Maybe because this task is delegated to computers and is therefore not considered noble enough to be done by a human.)

Most modern cars don't have a gauge, bar, or needle speedometer, but simply display telemetry using digits. This requires more brain processing to extract the useful information, therefore creating more friction in using the vehicle. It is a kind of anti-pattern of ergonomics.

If you are unable to order, sort, and create a scale, you live in confusion - like those young people unable to say whether the Middle Ages came before Napoleon or after.

And I don't know if this was done on purpose; I can just state that this confusion is present. If you are unable to sort and order, then you are unable to measure a distance. Everything is either close or far, not by physical reality, but by ideology (or by mimicry).

Why does this relate to Jesus? Because you cite exactly his sentence "Thou shalt love thy neighbour as thyself." In French the term for neighbour is not "voisin" but "prochain": "Aime ton prochain comme toi-même" - the "prochain" is the close one. But how can you know the distance between you and someone else if you are in a perpetual state of confusion, because you don't have the geometrical sense of scale, order, and distance?

As a child in my Christian private school, I was always skeptical when we had to engage and be very compassionate about all those children dying of hunger in Somalia, while walking the streets of my city I could see plenty of homeless beggars nobody was interested in helping or giving charity to.

Those Somali children were just words in my mind, while those homeless people were physically interacting with me (their putrid smell, for instance...).

Why should those Somali children be my neighbours while those closer homeless people aren't?

In French we have an old saying, "Charité bien ordonnée commence par soi-même" ("well-ordered charity begins with oneself"), which is fantastic in the way it defines a scale with a starting point and a sorting rule for ordering.

In English it translates as "charity begins at home." This sentence is very geometrical, and cannot be understood if geometrical intelligence is nonexistent.

I feel the digital world, based on digits, has massively damaged the geometrical world by reshaping the way we sort, scale, and order things, and therefore leads to the total state of confusion that most individuals can feel and express - lowering and weakening the collective intelligence that emerges from group dynamics, because what defines a group is also a distance between its elements.

Maybe my writing sounds idiotic. It's more a sum of intuitions than an academic study.


Oui, I remember French cars were very avant-garde in unusual / digital tachometer design ;) - and indeed: once you start caring about anonymous people 500 km from your place, you might as well care about people 9,366 km from you (Paris-Mogadishu on foot). I do not say: ignore them. I say: caring for those is very different from caring about the smelly or injured person NEXT to you. (In German, "thy neighbor" is also not translated as "dein Nachbar" (the people next door) but "Nächster": "the one nearest".) OTOH: helping out your brothers (friends and family/tribe) - "even the heathens do that" (Matt. 5:47). And from my experience: "helping" our clochards is not as easy (Mark 14:7). All that said, and Christmas near: here is a far-off charity I like: https://putanumonit.com/2016/04/27/more-power-less-poverty/

tl;dr: in some villages in Kenya, everyone will get a fixed amount of extra money for 12 years. No strings attached, no questions asked (except later, and just for research).

If you want to read a bit more: "Every registered person will receive 2,280 shillings" — about $22 — "each and every month. You hear me?" The audience gasped and burst into wild applause. "Every person we register here will receive the money, I said — 2,280 shillings! Every month. This money, you will get for the next 12 years."

Just like that, with peals of ululation and children breaking into dance in front of the strangers, the whole village was lifted out of extreme poverty.

If you want to donate tax-efficiently: the same guy has a great post here: https://putanumonit.com/2017/12/23/not-a-tax-lawyer/ (US tax law, but the German tax code has similar provisions)
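For scale, a back-of-the-envelope tally of the transfer quoted above (assuming the quoted $22/month over 12 years; the real program's figures may differ):

```python
# Rough arithmetic for the basic-income experiment quoted above.
# Figures are the commenter's quoted numbers, not official program data.
monthly_usd = 22    # ~2,280 Kenyan shillings per month
years = 12

total_per_person = monthly_usd * 12 * years
print(f"Total per person over {years} years: ${total_per_person:,}")  # $3,168
```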


I find that this is another instance where we've put lipstick on a pig - maybe we've also added a little blush and some eyeliner. While I agree with your conclusion about how effective altruism proliferates by diluting itself, it's still a pig. Most people who do not adhere to a truth outside themselves are interested in one thing, and one thing only: maximizing self-preservation. They choose the philosophical models that best optimize their chances for long life, health, and wealth, at all costs. They'll tell you it's because their philosophy benefits humanity the most, but when you dig deep it's only because they want to land in the majority camp, where self-preservation is optimized. Nobody wants to be the organ donor. They all want to be the organ recipient.


I think some on the thread have responded similarly in describing the resistance to organ donation. My take is that philosophy in this case is MERELY a construct of this top-of-food-chain alpha-predator Homo sapiens. We seem to default to self-preservation, and hence the judgmental term "selfish." We have FINALLY thrown off the limitation of our lizard brain in the back and, at least in the scientific sectors, accepted that organ growth in sacrificial hosts like pigs is the only relevant approach to organ donation in the first place. This can ONLY occur in the frontal cortex, which offers the only means to transcend the animal that we are. All the rest is merely platitudes that deny our animal and selfish nature, made possible by the fortunate development of that broad forehead cortex.


> Nobody wants to be the organ donor. They all want to be the organ recipient.

And having 5 recipients for every 1 donor maximizes the probability that any given person gets to have that selfish preference satisfied, does it not?


I'm not sure I understand, but my point was in reference to this:

"E.g., what if there’s a rogue surgeon who has five patients on the edge of organ failure, and goes out hunting the streets at night to find a healthy person, drags them into an alley, slits their throat, then butchers their body for the needed organs? One for five, right?"

Someone looks at the numbers and says, yeah, that's definitely a philosophy I can support because look how many people it can help. It supports the general, collective well-being of humankind. They're only saying that because they already need an organ donor, and they haven't been chosen to be the one murdered. Nobody who is healthy both mentally and physically would allow themselves to immediately be placed in the camp of having their organs harvested.


You say that people are selfish, and would only want to be the organ recipient and not the donor. I agree.

Let's say I'm completely selfish and don't care at all about other people. If I live in a society that avoids that sort of organ trade, then I have a higher chance of death than if I live in a society that does make those organ trades. The chance of having my life saved due to someone else's organs being harvested is 5 times as high as my chance of being the one selected to have to give up organs, so as a completely self-interested agent, I would prefer to live in a society that performs those sorts of organ trades.
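A minimal sketch of that self-interested calculation, with an invented baseline probability; only the 5:1 ratio from the thought experiment matters to the sign of the result:

```python
# Toy model of "the selfish agent prefers the organ-trade society."
p_need_organ = 0.01            # assumed lifetime chance of needing a transplant

p_saved   = p_need_organ       # you need an organ and the policy supplies one
p_drafted = p_need_organ / 5   # you are the 1 harvested per 5 saved

net_change_in_death_risk = p_drafted - p_saved
print(f"Net change in death risk: {net_change_in_death_risk:+.4f}")
# -0.0080: the purely selfish agent comes out ahead, and the result
# stays negative for any saved-to-harvested ratio greater than 1:1.
```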


The 5-to-1 ratio is not statistically honest, which is why it's only a thought experiment. You would need to account for gender, age, heredity, location, weather, crime... to come to a real number. Even then, there will always be a statistical bias. That's why it's better to be able to make that choice than to have it forced upon you. I believe the point is that when you go purely utilitarian, you end up down a rabbit hole of terrible outcomes.


No part of my argument depended on the exact ratio, so I'm confused as to why you think it's relevant. If there's a policy society could implement that would increase everyone's life expectancy (and has no other effects), I'm in favor of that policy.


Sure, so am I, but in this particular case it would have drastic effects for someone. They would have to lose their life to the murderous doctor.


This post effectively captures why I went from being strongly anti-EA in the mid-2010s to being sympathetic to (though still critical of) the movement. Back then, the movement really did seem like a bunch of utilitarian fanatics, but now it has, as you say, diluted the poison. I now feel I can be sympathetic to much of EA’s goals, and even contribute to them if I ever have the opportunity, without being a utilitarian.

I do have concerns about the movement’s ongoing shift to longtermism though. While focusing more philanthropic attention on pandemic prevention, for example, is good, I’m uneasy about the increasingly strong emphasis on apocalyptic AI scenarios.

Taking the outside view, I see a bunch of analytic philosophers and tech people (and those with similar dispositions and social circles) convincing themselves that interesting intellectual work on AI, which they are uniquely qualified to do, is by far the most important thing to work on. When I read EA discourse online, I can just tell that EAists find AI more fun to talk about than anything else. I also see worrying signs of groupthink on this topic in the space, such as the fact that believing in short timelines for the emergence of superintelligent AI seems to be becoming a strong signal of in-group membership. Ajeya Cotra, one of the leading EA thinkers on AI timelines, straight up admitted that she is biased against people who argue for longer AI timelines and doesn't find their arguments as salient [1]. I give her credit for admitting as much, but it's a worrying sign.

To be clear, I think AI safety in a broad sense is a real issue that is neglected by wider society. What I'm specifically worried about in EA is the near-term (in the next 25 years, say), "hard takeoff," superintelligent-AI apocalypse scenarios that seem to be the emerging EA consensus.

1. https://www.alignmentforum.org/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines


To tie this back into the main post so as to not derail, I think if this is the kind of thing they “dilute” towards, the movement risks squandering much of its potential and becoming no better than any other moral/philanthropic group.


EA is a variety of self-compounding beliefs glued together with a bit more linear reasoning from a "rationalist/utilitarian" perspective.

Earn-to-give is in some sense unique, but when diluted to tithing it just maintains its value as a "secular justification" for tithing.

Maximizing the value of charity on some semi-objective basis like QALYs does seem like something worthwhile to continue to invest in and innovate upon. GiveWell and their ilk have been genuinely innovative and should live on regardless of EA. That doesn't mean we need global optimization, and GiveWell is a good example of a group that has changed over time to produce a more varied set of optimization functions, exposed via their analysis of outcomes over intent.

Taking an experimentalist mindset to charity, both to try new things, and to validate existing approaches has been valuable. In any area of charity where the value is in the outcome rather than virtue signaling, regardless of justification, doing that thing more efficiently is probably welcome.

Longtermism is definitely something novel, except for maybe environmental causes, and is worth considering. I'm skeptical about the extreme implications that the utilitarian monster might propose, but netting out different discount functions for the future, and producing a vibrant discourse on the topic with tangible investment options, is of value to larger society.
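A minimal sketch of why the discount function carries so much weight in longtermist math, assuming simple exponential discounting; the welfare figure and rates are invented for illustration:

```python
# Present value of welfare accruing t years from now under exponential
# discounting: PV = W / (1 + r)**t. The choice of r does all the work.
def present_value(welfare, years, rate):
    return welfare / (1 + rate) ** years

future_welfare = 1_000_000   # arbitrary units of welfare, 500 years out
for rate in (0.00, 0.01, 0.05):
    print(f"discount rate {rate:.0%}: PV = {present_value(future_welfare, 500, rate):,.2f}")
# 0% -> 1,000,000.00 (the far future counts fully: the longtermist stance)
# 1% -> ~6,907       (the far future barely registers)
# 5% -> 0.00         (~3e-5: the far future vanishes entirely)
```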

Focus on X-risk is also very valuable due to its current underinvestment and novelty. While things like AI risk are probably eating up too large a percentage of the discourse bandwidth, they don't seem to be taking up much of the actual spending, even in EA circles. Society-wide, it is clear we underrate black swans. It isn't clear to me that we should invest more in those than in other near-term issues, but it seems like we are underinvesting even relative to very conservative risk models (such as pandemic risks that are still not getting attention even after the past few years), let alone what X-risk people are talking about.


I'm not a utilitarian, and harvesting 1 person's organs to save 5 still seems obviously correct to me. Why is a healthy person more deserving of life than a sickly person? I find it abhorrent to treat different people's lives as though they are of different intrinsic value.

I'm sure you disagree. My point is that different people have different moral intuitions, and simply dismissing anyone who has different values as "warped by dogma" is not likely to convince anyone who doesn't already agree with you.


I appreciate this reply. I don't want to come off as disagreeing too much, since I agree that there are differences in people's moral intuitions, and your perspective is your own. However, it's an opportunity to outline my own views on exactly this question: I think differences in moral intuitions are not nearly as great as they seem. In some cases, it's just a failure of imagination (what David Chalmers might call conceivability). E.g., it's very easy to just say "5 for 1," but if you actually imagine the scenario - kidnapping an innocent person with chloroform, taking them into your basement with the plastic put up and the sterile table, and then butchering them for organs - if you actually put yourself in that situation, you'd find that your moral intuitions would shift and you wouldn't do it at all. You'd probably vomit in fear and terror at your own self and leave them down there, sleeping. The reason why is that when confronted with the horror of taking an innocent's life, you wouldn't just "lose your stomach," you'd begin to question, seriously, whether your philosophy is correct. And therefore, in more cases than not, moral intuition is in more widespread agreement than first appears; it just requires "ideal conceivability" to see that.


I think I know myself well enough to know that I wouldn't question myself in that situation, but fortunately or unfortunately, I'll probably never find out for sure.

I certainly would experience disgust at the situation, but that doesn't mean it would be unethical any more than being disgusted at dog poop means it's unethical to clean it up.

Note that I don't claim my moral system is in any sense objectively "correct". I just think it would lead to a society I would prefer to live in if it were implemented.

#OmelasDidNothingWrong


This is why I find the Kant-inspired perspective useful: Would this work out as a general rule that applies to everyone (not just me)?

A world where "every doctor must kidnap, kill and take organs of any healthy individual if by doing so they can get organs for >1 people who need them" is an universal maxim is a world where any random individual is at danger at getting kidnapped for their organs by utilitarian-minded doctors. It does not sound conductive for stable, safe society with human flourishing. For example, everyone would take action to avoid becoming an unwilling organ donor. The rich can buy private security forces, the poor will try to defend themselves with whatever they have. Some would choose to become not-so-healthy to become less likely targets.

Finally, such a world is a world where the doctors spend considerable effort kidnapping people for cheap organs instead of coming up with new technologies to do without donated organs (whether medicines to avoid or mitigate organ failure, artificial replacements, or lab-grown organs).

Frankly, I believe a "good" utilitarian calculus would uncover this if computed by a perfect omniscient being who actually can correctly compute all the second and third and fourth order effects. However, they are difficult to calculate and the utilitarian way of asking the question outright invites not to consider them. Kant's question "what if this become an universal law" is very helpful here: it cuts directly to the probably "asymptotic" result.


A world with random kidnappings seems less plausible than a world that promoted the "noble sacrifice." Such societies have existed with less tangible benefits (sacrifices to the gods, or to war). I could imagine we would first raid prisons (as probably happens in some countries today). And maybe we would have some sort of smarter QALY factor (saving 5 octogenarians with a teenager doesn't net out, but maybe someone in their 60s with other complicating factors saving 20-somethings does). I'm pretty sure such a world would have other dystopian characteristics, and I personally wouldn't want to create such a world, but we could probably steel-man a scenario where people born into that society would support it and consider it more noble than our model that lets people die on waiting lists.


There's a difference between instrumental value and terminal value. I can help more people by staying alive.

Also as I mentioned, I'm not a utilitarian. I value my own life higher than others. (My statement in my first comment was an oversimplification; I don't have a problem with people valuing some people higher based on personal relationships. It becomes a problem when they're doing it based on impersonal factors like race, health, etc.)


But then you read reports like this: https://www.tabletmag.com/sections/science/articles/china-killer-doctors which, if true, means that Chinese doctors are harvesting the organs of political prisoners, which they then put into Westerners (anybody, really) for money. I think that utilitarian doctors can conclude that being an executioner is perfectly moral -- and have no moral intuition that they are doing wrong.


"harvesting 1 person's organs to save 5 still seems obviously correct to me"

You go first.


Sadly you can't donate your own organs. I researched it when I wanted to off myself, and all suicide methods are incompatible with organ donation; crashing a car close to a hospital is the closest you get, but there's no guarantee you'll be harvested.

It's a very interesting question for the utilitarian debate, should states legalize euthanasia for healthy individuals and take their organs? If yes, how do we avoid it slipping into suicide encouragement?


I'm rather incredulous that a gunshot to the head is incompatible with organ donation other than possibly rendering the victim's eyes unusable....


You need to die on life support for your organs to be harvestable. A gunshot to the head near a hospital gives you a chance of donating your organs if you manage to hurt yourself fatally but in a way that keeps you alive long enough for the ambulance to reach you. A matter of chance, really.


This crystalized a lot of things for me. Thank you. If you're looking for requests for future posts: what other moral philosophies are there? Anything that works better than utilitarianism?


I do actually have an upcoming post on the moral philosophy around the "Karen" meme - not sure it will answer your questions, but it will be interesting


I'm looking forward to it. My heart goes out to all the women named Karen.


The classic PHL 101 trio:

Utilitarianism: increase the utilons! Consequences > actions

Deontology: make concrete moral commitments and act accordingly. Actions > consequences

Virtue Ethics: develop good character traits and capabilities, and you will instinctively do good. Habits > everything else


Ooh, I hadn't heard of virtue ethics. Sounds like "As a Man Thinketh"? That sort of thing? I'll go look for books about it. Thank you!


Functional Decision Theory is an interesting idea. Eliezer Yudkowsky and Nate Soares 's paper here: https://arxiv.org/pdf/1710.05060.pdf

It suffers from the same problem as all other moral theories: namely, if we disagree about "this is a good outcome," then we are probably going to have to agree to disagree. The problem isn't that either of us needs a software patch in our "make a moral decision" algorithm to get it to more closely align with an objectively verifiable moral calculus, but that there is no objectively verifiable moral calculus to align with. You cannot use the scientific method to find moral truths in the same way you can use it to find empirical truths.


Oh, and thank you for the reference on FDT!


It is rather neat. And it ties into "the reason you should mostly act like a Deontologist is because the promises made lead to a better result," which is a lot more satisfying to people like me than "because these are universals, because I said so (as did the God I don't believe in, etc.)"


Nice. I'll read up on deontology too :)


I suppose I must agree that you can't build an empirical "goodness detector." But at the same time, most people do agree on the answers to most basic moral questions. Even the ways they disagree seem to me to hint at deeper rules. What if humans' moral intuitions emerge from moral laws the same way birds' flight emerges from physics? If we *didn't* evolve our moral intuitions, then where could they have come from?


A society of birds who believe that their moral intuitions are universals because they emerged from their consciousnesses in the same way their flight emerged from physics are going to have a very hard time of it when their society grows to encompass bats, flying squirrels and beetles, moreso when the beetles start saying that birds need to fly (and make moral decisions) like beetles 'because there are so many more of us'.

This is the problem that I have with Utilitarianism. In its most diluted form, all it says is that actions have consequences and that people need to imagine and consider these consequences when making decisions. But even the most passionate Deontologist or Virtue Ethicist believes this. You have to go pretty far down the 'there is no causality' road of certain postmodernist thinkers and mystics before you will find anybody who believes otherwise. So completely diluted Utilitarianism is probably true, but banal.

Once you go beyond that, you end up with somebody concluding that their notion of the Utility Function should have precedence over somebody else's idea. If we can all agree to disagree, then things are ok. But if you can believe Tablet magazine here: https://www.tabletmag.com/sections/science/articles/china-killer-doctors the Chinese are already killing people in order to harvest their organs. And they sell this service to patients, worldwide, who are willing to pay.

The inheritable Polycystic Kidney Disease runs through the family I have married into, and thus kidney transplants are a part of the present and future condition of the people I love best in the world, whose continued existence matters more to me than any other person's existence, and some of whose worth to society can probably be calculated in some Consequential way to be on the high side. One can argue as much as one likes that the world would be a better place with my loved ones in it rather than some particular vile prisoner of the state in China guilty of the most heinous crimes -- and I would agree with you. But this doesn't get my relatives on a plane to China, complicit in the murder. While my morality doesn't let me go around slaughtering Consequentialists, even if it would make the world a better place, I think that their morality might demand that they kill me if I ever were deemed that important.

It plays out in other questions, as well. Many, many people are willing to lie 'for the common good'. Fine when you have Jews hidden in your basement and the Nazis come calling. But they don't seem to stop there. Recently, they seem willing to say that the truth is less important than 'the narrative' and brand those who disagree with the narrative as 'spreaders of misinformation' even when all one has done is take the existing government released statistics, stuff it into the computer graphing library Pandas https://pandas.pydata.org/ and made some nice graphs of the thing.

C. S. Lewis wrote: "Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron's cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience."

I don't know whether the Effective Altruists have measures in place to keep from becoming Tyrants. But if they are relying on 'nothing here but us birdies' then they will discover they needed some precisely when the beetles show up.


Thank you for engaging me so thoroughly! You're really helping me develop my understanding here. But is it okay that we're taking up so much space in Erik Hoel's comments section?


I know I don't say much about my moderation policy, but long digressive discussions are encouraged, as long as they're related in some loose way to the original subject matter or concern the surrounding topic


Why thank you. What we really need is a pub, and beer (unless Daniel doesn't drink) and a blackboard. But Bulgaria and Sweden are, alas, too far from each other.


Aye aye, sir! :)


I'm glad you brought up other flying animals! Birds, bats, and insects (and pterosaurs and humans) all have different ways of flying, but those ways are still underpinned by the same physics. Similarly, could the different moral systems we see and imagine be underpinned by the same laws?


Certainly. If there are moral universals, and we discover them, then all moral systems would have to be underpinned by these laws. But this is a bit reminiscent of the distinction between a Platonist idea of mathematics (we discover new mathematical Truths when we prove new theorems) and various Constructivist ideas (whereby mathematicians _create_ the new Truths when we prove new theorems). As long as the same mathematics underpins it all, does it matter? (This debate is very far from settled. You can get a nice PhD thesis out of 'what did she mean (or denote) by "same mathematics" there, and was that even meaningful?' if this is the sort of thing that floats your boat.)

But we have to consider the possibility that there are no moral universals, or that they are impossible to prove. There are just too many people and societies who have thought that slavery was both moral and a natural part of human nature, that women are a resource that should be owned, etc. for us to conclude that the things we find moral must be universal law. That's 'nothing here but us birdies' which I find dangerous.


Re: Constructivist ideas: is that like John Archibald Wheeler's "negative 20 questions"?

When you talk about historical slaver societies, I can see two ways for us modern people to approach them. Either we can say "we modern people practice superior morality to those historical slavers; morality has progressed," or we can say "we practice a different morality from theirs; morality has changed." I have to admit I want the former to be true, because it implies that the future will be (or at least can be) better than the present. How can I be hopeful about the future if morality is just freely-shifting fashion? Or is there some third alternative?

After a few false starts, I think I see what you mean about "nothing here but us birdies." In the birds metaphor, would that be "bird flight is natural and good, but the way insects fly is unnatural and bad."? I think that would be a mistake, but I think it would also be a mistake for the birds to say, "in a world that contains birds and insects and even hot air balloons, who can say what 'flight' is?" I'd prefer the birds to say "there are different ways to fly, what do they share? How can we fly better?"


"Does computing the expected utility feel too cold-blooded for your taste? Well, that feeling isn’t even a feather in the scales, when a life is at stake. Just shut up and multiply."

If we're leaving feelings out of it, and being cold-blooded calculators of utility, then why on earth am I supposed to care enough to bother doing the calculation? The entire idea of helping others is rooted in our feelings. Denying these undoes the entire project.

Asking people to treat others humanely by denying their own humanity has to be one of the silliest philosophical projects that man in all his flawed humanity has ever devised.


If effective altruism is only as good as Newtonian physics, that's not a criticism but an endorsement.

The engineers who build your cars and airplanes are working off Newtonian physics, not Einsteinian. Newtonian physics works great, as long as you remember not to use it up at relativistic speeds or down at quantum-mechanical sizes. At ordinary life scales, Newtonian is the right tool for the job.

Likewise, if you're a donor who wants your individual gifts to make a difference, at ordinary life scales, effective altruism is the right tool for the job.

I suppose there are people who really advocate total social transformation on the basis of napkin math and effective altruism. But why do I have to believe them just to use effective altruism for my personal charity?

It's a false choice! I'm not obliged to believe an author is God, just because I like and learned a lot from his books. And I'm not obliged to ask effective altruism to give me answers on a scale no single social program ever has.

Using an effective altruist mindset has limits, just like using Newtonian physics. But I'm glad I have something sharper than superstition and easier than Einstein to build bridges and airplanes with. And I'm likewise glad to have better tools to guide my charity.

I just don't see a need to believe that effective altruism is the Einstein of charity. It's pretty great just as the Newton.


You say you aren't sure what a practical solution could be. I think one solution is that instead of advocating for the greatest good for the greatest number, as utilitarians do, we could advocate for measuring outcomes. The work coming from GiveWell isn't original because it's utilitarian so much as it's original because outcomes were measured. Benefactors started measuring how school attendance was impacted by deworming, for instance, instead of just assuming that the best way to increase school attendance would be to hire more teachers.

Maybe EA should try to distance itself from its assessments - to make more neutral statements like "if we do X then Y will happen" rather than "we should do X". However then you get into questions like "how do you define EA?" which I don't know a real answer to. I kind of have my own beliefs independent of "EA doctrine", but is there such a thing as EA doctrine anyways? I'm not sure.
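A minimal sketch of the "measure outcomes, state them neutrally" stance, using invented attendance numbers in the spirit of the deworming example (a real evaluation would need randomization and uncertainty estimates):

```python
# Toy difference-in-means outcome measurement for a deworming-style program.
# Attendance rates are invented for illustration.
control = [0.71, 0.68, 0.74, 0.70, 0.69]   # schools without the program
treated = [0.78, 0.75, 0.80, 0.77, 0.74]   # schools with the program

def mean(xs):
    return sum(xs) / len(xs)

effect = mean(treated) - mean(control)
print(f"Estimated effect on attendance: {effect:+.3f}")  # +0.064

# The neutral claim is "if we do X, attendance rises ~6 points";
# whether we therefore *should* do X is left to the donor.
```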


Back in 2011, SSC wrote a section in his Consequentalism FAQ entitled, "Wouldn't consequentialism lead to [obviously horrible outcome]?" (https://web.archive.org/web/20161115073538/http://raikoth.net/consequentialism.html)

The answer is of course not. If you think that utilitarianism would lead to [obviously horrible outcome], then by definition that wasn't utilitarianism! You were leaving something out of your equation.

The world with rogue surgeons hunting people on the streets is a world where billions of people would be outraged and terrified. That all should go into the equation. It is not a world that utilitarianism would recommend.

You refer to a lot of this as epicycles, but really this is a simple and straightforward application of utilitarianism. When utilitarianism calls on us to estimate the consequences of our actions, that means *all* of the consequences, including second and third order consequences. Those aren't epicycles.


But they are not applications of utilitarianism. They are fundamentally non-utilitarian, which points to the weakness of the philosophy: it has to constantly import things to prevent "let the surgeon kill but help him keep it a secret" or "help the necrophiliac abuse the dead dog and keep it a secret." There is no utilitarian reasoning for not pursuing those, other than hand-waving about "maybe the actions, out until the end of time, would be bad" - but there's no way to know any of that, as actions often have consequences we don't and can't foresee. Maybe Hitler actually did "good," according to the definition of considering all higher-order consequences.


I'm not sure utilitarians need to import anything here. In the thought experiment with the surgeon's secret or the necrophiliac's secret, a good utilitarian would bite the bullet and say, "Yes, if hypothetically it could truly be a secret that would be ok", but then they would quickly add, "However, attempts to keep that secret in the real world would probably fail, and so we would condemn it for those reasons".

All of the above comes from the standard utilitarian toolkit of doing your best to estimate utility and the probability of various outcomes. Sure, some of it may be hand-wavy insofar as it's hard to predict the future, but it's not hand-wavy in the sense that it's importing separate frameworks.
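A minimal sketch of that "the secret would probably fail" estimate, assuming (purely for illustration) an independent per-year chance of exposure and a large penalty if the practice is revealed:

```python
# Toy expected-utility check on the surgeon's secret 1-for-5 trade.
# All numbers are invented for illustration.
lives_utility   = +400       # 5 saved minus 1 killed, in life-units
scandal_utility = -100_000   # society-wide collapse of trust in medicine
p_leak_per_year = 0.05       # assumed chance of exposure each year
years           = 20

p_secret_holds = (1 - p_leak_per_year) ** years   # ~0.36 over 20 years
expected = p_secret_holds * lives_utility + (1 - p_secret_holds) * scandal_utility
print(f"P(secret holds) = {p_secret_holds:.2f}, expected utility = {expected:,.0f}")
# Deeply negative: the standard toolkit condemns the act once a realistic
# leak probability is priced in, exactly as the comment above argues.
```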


I definitely agree that they'd say that, but ultimately being committed to helping necrophiliacs and serial killers if one can get away with it means it's a bad moral philosophy (i.e., repugnant). I'm just willing to throw it away at this point, whereas others try (imo) to save an unsaveable patient.


I have issues with utilitarianism as well, though don’t think that this is a convincing argument against it.

I have the view that appeal to moral intuition (including that a conclusion is repugnant) is only useful when you would expect it to provide additional information, which can only occur when the thought experiment is actually achievable in the world we live in. If it could never be realised in our world (such as a cabal of serial killer organ harvesters that remains secret forever), why would we expect our moral intuition to provide reliable information?

There’s an analogy with this to physics. The idea that quantum mechanics seems absurd isn’t a good argument against it. Not because physical intuition doesn’t exist or is worthless, but because you wouldn’t expect our physical intuitions to provide accurate information one way or another when dealing with scales that we have never experienced.


There's also the issue that if you build a reliable societal mechanism for keeping serial-killer surgeons hidden, you're now facing the risk that those same secret-murder tools could be co-opted by non-benevolent serial killers, who are empirically the more common type.


I think you might be confusing consequentialism with utilitarianism. Consequentialism just means "it's moral to take actions that lead to whatever consequences I would like best", so it's obviously nonsensical to say that consequentialism has bad consequences. Utilitarianism is a more specific philosophy that dictates what consequences we should want to bring about, and many people disagree with those consequences actually being good ones.


This approach places overly ambitious stock in a fallible human's ability to predict the second-, third-, and nth-order effects of their supposed utilitarian action.


Of course we'll get a lot of this wrong. It's extremely hard and error prone. The point is just that people should *try* to think about these consequences more, and rely less on emotion-based heuristics. If we think more about the consequences, we'll probably get better outcomes.

How much should we try? That also has a utilitarian answer. Enough to improve the probability-weighted outcomes before they are outweighed by the planning costs. And on that note, one of the many admirable goals of EA is to research and evangelize these consequences as best they can, so that not everyone has to duplicate the research on their own.


People wouldn't be outraged and terrified if they were all utilitarians. If anything they'd be comforted by the knowledge that there would be a large supply of donor organs available if they ever needed one.

The only workaround I could see would be to keep a small secret society of utilitarians to decide what to do, keeping themselves away from horrifying decisions by factoring into their calculations the sentiments of a much larger population of people with more normal beliefs.

That doesn't seem right, though.


This is excellent.

My intuitions made me averse to utilitarianism prior to reading any pop-ethics repugnant conclusions. I recall being presented with the trolley problem as a freshman philosophy major and thinking, "I don't think I would pull the switch; who am I to decide who lives and dies?" My intuitions are perhaps more legalistic, as the law recognizes a difference in culpability between acting (pulling the switch) and failing to act (not pulling the switch). It is perhaps no surprise that I am a good old-fashioned, due-process-oriented classical liberal!

While it may seem like a simple math problem to many, for me it's obvious that the logical next step is to start murdering homeless people and harvesting their organs… I agree wholeheartedly with your statement "I'm of the radical opinion that the poison means something is wrong to begin with." For me, totally non-intuitive consequences always signal false premises somewhere. Ultimately, philosophy doesn't have anything else to go on but our intuitions (and of course checking them against logical and empirical reality).

So as you say there are domain specific utilitarianisms that are fine, they amount to being effective or efficient with respect to some predetermined (and justified) goal, but utilitarianism cannot provide that goal. It’s no master morality.


Out of curiosity, Erik, do you subscribe to a meta-ethical position? (I still recommend The Moral Problem by Michael Smith for a technical but readable introduction.) When I was a 22-year-old philosophy major, I considered myself a moral realist. Now, as a 36-year-old professor of a social science, I'm inclined to lean more toward moral anti-realism. I don't think this ends up in total nihilism; in fact, I think something like virtue ethics has always appealed to me more than the deontology/consequentialism debate that has been raging since the Enlightenment. This might be because it focuses on the individual and their character, which always made more sense to me than population-level ethical inquiries that all seem to lead to repugnant conclusions. But I've become quite skeptical that there is a (social) moral theory that is "correct" in the same way that quantum physics is correct (at least provisionally). I do like Philippa Foot's idea of morality as a system of hypothetical imperatives; it becomes more like rationality with some shared value premises for living together - but without making any strong metaphysical commitments. Probably whether this is realist or anti-realist is a bit of semantics. Which perhaps everything is (see... Wittgenstein!).


Great question SZ. I completely agree. I hint at my own beliefs when I talk about what a miracle it would be if all situations could be ordered in terms of their morality. The anti-realist stance seems far more likely to me. It's funny because EA and utilitarians are often "scientific" in their worldview - and yet, within science, how could there possibly be a theory of morality that is true in the way a theory of quantum physics is? It just seems obviously unnecessary for the rest of science.


Very interesting read! I'm not that familiar with EA, but as a summary I seem to understand that 'diluting' the poison into already (generally) agreeable moral stances does not add anything to the moral debate, although with nice packaging and a strong online presence the movement is pushing things forward. Who would morally object to that?


This was a great piece! I bought into EA way back in the early 2010s and it never really sat quite right with me. Once you understand the limits of utilitarianism you see how it stops making sense past a certain scale. Agree with your tips for them too!
