Jan 30·edited Jan 30Liked by Michael Huemer

Back in ancient times, when I was an intern at Cato, an esteemed scholar argued against utilitarianism by saying: "It tells us we should allow the Holocaust to happen for a net benefit of a dime!" Tactless whippersnapper me immediately said something like: "Come on, that's just a verbal trick that hopes we ignore the meaning of 'net'. It expects us to feel the huge wrongness of preferring a dime (gross) over stopping the Holocaust, when what we're really talking about is having to choose between two Holocausts. Why wouldn't we choose the one that's slightly less bad?"

For similar reasons, I distrust people's intuitions (even my own!) about "lives barely worth living". I think we use that expression and similar ones in natural language to describe lives that actually aren't worth living. (But we don't say that, because it's taboo.)

I'm far from an antinatalist, but I think Benatar is partially correct, to the extent that there are many more net-negative lives than we ordinarily admit. Or, to put it another way: if we were to observe a bunch of lives that are *actually* barely worth living, this wouldn't strike our intuitions as repugnant at all.

Jan 31·edited Jan 31

For the counter to the alternative at 3.3, I'd simply invoke the asymmetry between pleasure and pain (the weak version, not the one that commits you to anti-natalism).

Intuitively, the absence of pain is good even when that absence means the being never existed. But the absence of pleasure is not bad when that absence means the being never existed, because then there is no one to begin with who would suffer from not being brought into existence. It's not a wrong committed against anyone, because there isn't anyone.

To make it more concrete: one would see it as a moral wrong to bring into existence someone who will lead a miserable life, in a way one wouldn't see refraining from bringing into existence someone who would lead an amazing life.

So, A is better than Z because it's better for everyone in A. As for the additional people not present in A, not existing is not a moral wrong relative to happily existing.

But in the objection's world Z*, where the people in A have slightly better lives and a high enough number of additional people leading torturous lives are introduced, Z* isn't preferable to A, because with regard to pain, not existing IS actually a moral good relative to miserably existing.
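
This asymmetric bookkeeping can be made concrete. A minimal sketch in Python; the function name and all welfare numbers are my own illustration, not the commenter's:

```python
# A toy model of the weak pleasure/pain asymmetry (all numbers hypothetical).
# Extra happy lives add nothing, since their absence would not have been bad;
# extra miserable lives count in full, since their absence would have been good.

def better_under_asymmetry(base, extended):
    """True if `extended` beats `base`: gains for the people who exist in
    both worlds count, plus only the *pain* of any added people."""
    n = len(base)
    existing_delta = sum(extended[:n]) - sum(base)
    added_pain = sum(min(w, 0) for w in extended[n:])  # happy additions -> 0
    return existing_delta + added_pain > 0

A = [10, 10, 10]                     # a small world of good lives
Z_star = [11, 11, 11] + [-5] * 100   # slightly better A-people, many tortured lives

print(better_under_asymmetry(A, A + [1] * 997))  # False: extra happy lives don't help
print(better_under_asymmetry(A, Z_star))         # False: the added pain dominates
```

Improving the lives of the people who already exist still registers under this rule: `better_under_asymmetry(A, [11, 11, 11])` comes out `True`.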

Can I just say that the repugnant conclusion doesn't seem at all repugnant to me? Like, am I alone here or something? But it seems correct to me...

Your example of II does not correspond to my reading of II.

Everyone should be better off, but equal, so you would have to equalize at 101, not at 3. Otherwise some are worse off.

Or another way. World B is only better than World A if everyone thinks B is better than A. If even one person disagrees, you cannot make a determination about which is better.
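
This unanimity test can be checked mechanically. A toy sketch, where the welfare vector is invented and only the levels 101 and 3 come from this comment:

```python
# The commenter's unanimity criterion: world B counts as better than world A
# only if no one is worse off and someone is better off. Welfare vector is
# hypothetical; the levels 101 and 3 are taken from the comment.

def unanimously_better(a, b):
    return all(y >= x for x, y in zip(a, b)) and any(y > x for x, y in zip(a, b))

A = [100, 2, 2, 2]           # one well-off person, three badly-off people
equalized_low = [3] * 4      # equalize at 3: the best-off person loses
equalized_high = [101] * 4   # equalize at 101: everyone gains

print(unanimously_better(A, equalized_low))   # False - the person at 100 objects
print(unanimously_better(A, equalized_high))  # True - no one is worse off
```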

“It’s impossible that A>B>C>A” sounds like someone’s never played rock-paper-scissors
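
For what it's worth, the quip is easy to make literal: a pairwise "beats" relation need not be transitive. A tiny sketch:

```python
# Rock-paper-scissors as a pairwise "beats" relation: A > B > C > A is a
# perfectly coherent cycle, so pairwise superiority need not be transitive.

BEATS = {"rock": "scissors", "scissors": "paper", "paper": "rock"}

def beats(x, y):
    return BEATS[x] == y

print(beats("rock", "scissors"))   # True
print(beats("scissors", "paper"))  # True
print(beats("paper", "rock"))      # True: the cycle closes back on itself
```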

Egalitarianism (of utility) is empirically false.

Literally no one is indifferent to the question of who gets the utility. Certainly no human being who has ever lived or will ever live. Even if we make AGI or meet intelligent aliens, it's a safe bet they won't be indifferent to this question either.

When a moral philosophy starts with a normative assumption that contradicts the genuine preferences of *all* the beings who will ever engage with moral philosophy in any way, it should probably be disqualified.

(For illustration, imagine a hypothetical magic ritual that tortures you for 10 years, but also prevents the same amount of pain for chickens, plus produces one cookie for one random peckish person in the far future, and all else is somehow kept equal. Not even Peter Singer would cast that spell on himself, and neither should he be forced to under non-crazy social norms.)
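
Put into entirely made-up numbers, the spell comes out as a net positive on paper, which is exactly the problem:

```python
# The magic-ritual thought experiment in (entirely hypothetical) numbers:
# the torture and the prevented chicken pain cancel by stipulation, and the
# cookie tips the aggregate into the positive.

torture_cost       = -1000.0   # 10 years of torture for the caster
chicken_pain_saved = +1000.0   # stipulated to offset the torture exactly
cookie_value       = +0.001    # one cookie for a random peckish person

net = torture_cost + chicken_pain_saved + cookie_value
print(net > 0)  # True, yet no one would cast the spell on themselves
```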

If you look at research like https://journals.sagepub.com/doi/abs/10.1177/0956797610362671?journalCode=pssa aka "Money and Happiness: Rank of Income, Not Income, Affects Life Satisfaction", you will see that it is relative wealth, not absolute wealth, that appears to increase utility. But according to Rob Henderson (see today's https://robkhenderson.substack.com/p/thorstein-veblens-theory-of-the-leisure), this preference appears to be restricted to the higher classes.

Based on this, I think axiom #2 is not true for the upper classes, which is probably why every time an effort to equalize wealth is tried, it fails -- the upper classes won't stand for it. Which I think we all kind of intuitively know already!

I don’t understand the Person-Affecting Principle criticism. I thought the point was that there would be no lives, so if you add 100 trillion lives that are in agony, isn’t that a contradiction in terms?

I reject premise 2. Moving from A+ to Z is undesirable. Also, moving from Z to A+ is undesirable. So there goes premise 3 as well.

I also reject the idea that we can mathematically examine (sum, average, compare, etc.) interpersonal utility. There are no such units. To say nothing of the impracticality of measuring such things.

“Equalizing welfare while also adding to everyone’s welfare makes the world better.”

This is a bit misleading, or ambiguous at least. I originally interpreted it to mean, bring everyone's welfare up to the level of the person with the most welfare. I assumed it would not involve reducing anyone's welfare. But from context, this is not what was meant.

Wouldn’t it be clearer to put it this way?

“Equalizing welfare to a bit above the average makes the world better.”

But then, I would disagree. If we take wealth, income, or consumption as a proxy of welfare, it at least could be equalized, but it might not make the world better in terms of utility. If not utility, what terms should we use for comparison?

If we just use welfare to mean utility directly, this requires us to make direct interpersonal comparisons of utility, which is a subjective phenomenon. To compare different persons' utility requires us to translate them uniquely and determinately into an intersubjective quantity that everyone accepts as uniquely significant. If this is possible, I don’t think the method has yet been discovered. I’m not sure it has even been seriously sought. Currently, there are many methods that could provide contradictory answers to various questions. We have only so much data, some of which is of questionable reliability, with which we must induce a formula that has to cover every possibility in a convincing way. And this takes for granted that we know what criteria to use to decide which formula is best. The problem seems radically underdetermined.
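
As a small illustration of methods giving contradictory answers, here is a sketch (with invented numbers) of two standard aggregation rules ranking the same pair of worlds in opposite ways:

```python
# Two standard aggregation rules disagree about the same pair of worlds
# (all numbers invented for illustration).

def total(world):
    return sum(world)

def average(world):
    return sum(world) / len(world)

A = [100] * 10    # 10 people at welfare 100
Z = [1] * 2000    # 2000 people at welfare 1

print(total(A), total(Z))      # 1000 2000: total welfare ranks Z above A
print(average(A), average(Z))  # 100.0 1.0: average welfare ranks A above Z
```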

We could say, who cares? Scientific theories are also radically underdetermined, and we use them happily! When we get new pertinent data, we update them. Why is interpersonal comparison of utility different?

It is different, because settling for an approximation means that people whose utilities are miscalculated don’t count. But they should. Scientific theories are tools that individuals and groups can use as they like, not decisions about what everyone on Earth must sacrifice.

Perhaps this problem would remain even if we could compare utilities. Maximizing the average ignores the specific cases. So it could easily count one world as better than another, when no one would agree after learning the details. So even if it were accurate, it would be too crude.

There are times when something approximating a consequentialist calculation is inevitable. Add a bit to any particular line item in a hospital budget, and you will change the outcomes in terms of patients cured, patients harmed, and lives saved; and the people making those decisions need to have some idea what the margins look like, without having the luxury of a market-like mechanism to tell them what the trade-offs are. They have to make a decision, and it will have repercussions. But even there, they have to decide whether they want to maximize lives saved, or the estimated years of life saved, or the quality of life, etc. But no matter what measure they use, there are lines we don’t want them to cross, even if it makes the number go up. We still demand that patients give informed consent to medical decisions, even if that lowers the expected measure of effectiveness. They are the ultimate judges in that process.

This entire exercise assumes that a central authority could make trade-offs that reduce the welfare of some to increase the welfare of others accurately and without self-dealing or bias. Our experience with actual institutions and organizations should convince us that if this were a coherent and feasible goal, we are far from developing the techniques required for achieving it.
