22 Comments
Jan 30, 2023 · edited Jan 30, 2023 · Liked by Michael Huemer

Back in ancient times, when I was an intern at Cato, an esteemed scholar argued against utilitarianism by saying: "It tells us we should allow the Holocaust to happen for a net benefit of a dime!" Tactless whippersnapper me immediately said something like: "Come on, that's just a verbal trick that hopes we ignore the meaning of 'net'. It expects us to feel the huge wrongness of preferring a dime (gross) over stopping the Holocaust, when what we're really talking about is having to choose between two Holocausts. Why wouldn't we choose the one that's slightly less bad?"

For similar reasons, I distrust people's intuitions (even my own!) about "lives barely worth living". I think we use that expression and similar ones in natural language to describe lives that actually aren't worth living. (But we don't say that, because it's taboo.)

I'm far from an antinatalist, but I think Benatar is partially correct, to the extent that there are many more net-negative lives than we ordinarily admit. Or, to put it another way: if we were to observe a bunch of lives that are *actually* barely worth living, that wouldn't strike our intuitions as repugnant at all.

Jan 31, 2023 · edited Jan 31, 2023

For the counter to the alternative at 3.3, I'd simply invoke the asymmetry between pleasure and pain (the weak version, not the one that commits you to anti-natalism).

Intuitively, the absence of pain is good even when that absence comes from the being's never having existed. But the absence of pleasure is not bad when it comes from the being's never having existed, because then there is no one who suffers from not being brought into existence. It's not a wrong committed against anyone, because there isn't anyone.

To make it more concrete: one would see it as a moral wrong to bring into existence someone who will lead a miserable life, in a way one wouldn't see refraining from bringing into existence someone who would lead an amazing life.

So, A is better than Z because it's better for everyone in A. As for the additional people not present in A, not existing is not a moral wrong relative to happily existing.

But in the objection Z*, where the people in A have slightly better lives and a high enough number of additional people leading torturous lives are introduced, Z* isn't preferable to A, because with regard to pain, not existing IS actually a moral good relative to miserably existing.


Can I just say that the repugnant conclusion doesn't seem at all repugnant to me? Like, am I alone here or something? But it seems correct to me...


Your example of II does not correspond to my reading of II.

Everyone should be better off, but equal, so you would have to equalize at 101, not at 3. Otherwise some are worse off.

Or, put another way: World B is only better than World A if everyone thinks B is better than A. If even one person disagrees, you cannot make a determination about which is better.


“It’s impossible that A>B>C>A” sounds like someone’s never played rock-paper-scissors


There are lots of relations that transitivity doesn't apply to. For example, there's nothing contradictory in football team A being favored to beat B, B favored to beat C, and C favored to beat A. That's why there's an explicit premise that the "better than" relation does have transitivity. And Huemer even gave one (of many) arguments in this post for why that seems pretty undeniable.
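
For anyone who wants it spelled out, here is a minimal sketch (Python, with made-up win probabilities, so purely illustrative) of a pairwise "favored to beat" relation that cycles without contradiction:

```python
# Hypothetical head-to-head win probabilities among three teams.
# Each team is favored (p > 0.5) against the next one in the cycle.
P_WIN = {
    ("A", "B"): 0.6,
    ("B", "C"): 0.6,
    ("C", "A"): 0.6,
}

def favored(x, y):
    """True if x is favored to beat y under the assumed probabilities."""
    return P_WIN.get((x, y), 1 - P_WIN.get((y, x), 0.5)) > 0.5

# A is favored over B, B over C, and C over A: no contradiction anywhere.
assert favored("A", "B") and favored("B", "C") and favored("C", "A")
# "Favored to beat" simply isn't transitive, which is why the argument
# needs the explicit premise that "better than" is.
```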


If you look at research like https://journals.sagepub.com/doi/abs/10.1177/0956797610362671?journalCode=pssa aka "Money and Happiness: Rank of Income, Not Income, Affects Life Satisfaction", you will see that it is relative wealth, not absolute wealth, that appears to increase utility. But according to Rob Henderson (see today's https://robkhenderson.substack.com/p/thorstein-veblens-theory-of-the-leisure), this preference appears to be restricted to the higher classes.

Based on this, I think axiom #2 is not true for the upper classes, which is probably why efforts to equalize wealth fail every time they're tried -- the upper classes won't stand for it. Which I think we all kind of intuitively know already!


This is intended to be held all else equal--it's not about equalizing wealth but utility.


How is that different? Can you give an example? TIA!


The claim is that two people with equally happy lives of 5 utility each make for a better world than one person at 9 and one at 0: the equal world has a higher average, a higher total, and a more equal distribution.
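
In code, the comparison looks like this (a toy Python check, using the numbers above):

```python
# Two toy welfare distributions from the example above.
equal_world = [5, 5]
unequal_world = [9, 0]

def summarize(utils):
    """Return (total, average, max-min spread) for a list of utilities."""
    total = sum(utils)
    return total, total / len(utils), max(utils) - min(utils)

print(summarize(equal_world))    # (10, 5.0, 0)
print(summarize(unequal_world))  # (9, 4.5, 9)
# The equal world has the higher total, the higher average, and the
# smaller spread (a crude stand-in for "more equal distribution").
```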


Got it, thank you!


I don’t understand the Person-Affecting Principle criticism. I thought the point was that there would be no lives, so if you add 100 trillion lives that are in agony, isn’t that a contradiction in terms?


I reject premise 2. Moving from A+ to Z is undesirable. Also, moving from Z to A+ is undesirable. So there goes premise 3 as well.

I also reject the idea that we can mathematically examine (sum, average, compare, etc.) interpersonal utility. There are no such units. To say nothing of the impracticality of measuring such things.


"Moving from A+ to Z is undesirable. Also, moving from Z to A+ is undesirable. So there goes premise 3 as well."

I don't think that follows. You're not rejecting transitivity, because you're not asserting that A+ > Z and Z > A+, full stop. Rather, in taking an anti-change position, you're including an additional piece of information, namely whether we're currently in A+ or currently in Z, and claiming that this changes the ranking. I don't think I agree with that, but it's reasonable, and it doesn't violate transitivity.


“Equalizing welfare while also adding to everyone’s welfare makes the world better.”

This is a bit misleading, or ambiguous at least. I originally interpreted it to mean, bring everyone's welfare up to the level of the person with the most welfare. I assumed it would not involve reducing anyone's welfare. But from context, this is not what was meant.

Wouldn’t it be clearer to put it this way?

“Equalizing welfare to a bit above the average makes the world better.”

But then, I would disagree. If we take wealth, income, or consumption as a proxy for welfare, it at least could be equalized, but doing so might not make the world better in terms of utility. If not utility, what terms should we use for comparison?

If we just use welfare to mean utility directly, this requires us to make direct interpersonal comparisons of utility, which is a subjective phenomenon. To compare different persons' utility requires us to translate them uniquely and determinately into an intersubjective quantity that everyone accepts as uniquely significant. If this is possible, I don’t think the method has yet been discovered. I’m not sure it has even been seriously sought. Currently, there are many methods that could provide contradictory answers to various questions. We have only so much data, some of which is of questionable reliability, with which we must induce a formula that has to cover every possibility in a convincing way. And this takes for granted that we know what criteria to use to decide which formula is best. The problem seems radically underdetermined.

We could say, who cares? Scientific theories are also radically underdetermined, and we use them happily! When we get new pertinent data, we update them. Why is interpersonal comparison of utility different?

It is different, because settling for an approximation means that people whose utilities are miscalculated don’t count. But they should. Scientific theories are tools that individuals and groups can use as they like, not decisions about what everyone on Earth must sacrifice.

Perhaps this problem would remain even if we could compare utilities. Maximizing the average ignores the specific cases. So it could easily count one world as better than another, when no one would agree after learning the details. So even if it were accurate, it would be too crude.

There are times when something approximating a consequentialist calculation is inevitable. Add a bit to any particular line item in a hospital budget, and you will change the outcomes in terms of patients cured, patients harmed, and lives saved; and the people making those decisions need to have some idea what the margins look like, without having the luxury of a market-like mechanism to tell them what the trade-offs are. They have to make a decision, and it will have repercussions. But even there, they have to decide whether they want to maximize lives saved, or the estimated years of life saved, or the quality of life, etc. But no matter what measure they use, there are lines we don’t want them to cross, even if it makes the number go up. We still demand that patients give informed consent to medical decisions, even if that lowers the expected measure of effectiveness. They are the ultimate judges in that process.

This entire exercise assumes that a central authority could make trade-offs that reduce the welfare of some to increase the welfare of others accurately and without self-dealing or bias. Our experience with actual institutions and organizations should convince us that if this were a coherent and feasible goal, we are far from developing the techniques required for achieving it.


I had the same objection to axiom 2. It's clearly not self-evidently true, and IMO is just flatly false.


I think axiom 2 is true (and indeed Mike has a separate paper with a very convincing argument for it - google "Huemer non-egalitarianism"). But it just doesn't seem like the RC argument is using axiom 2, at least the way Mike has phrased it. He says: "Equalizing welfare while also adding to everyone’s welfare makes the world better. (Equality is not intrinsically bad.)"

But not everyone's welfare is being added to! A+ has lots of people whose utility will be subtracted from! To legitimize the move to Z, you can't say everyone's utility will be added to; you have to rely on something more abstract, like "Z has higher average and total utility." But this is *precisely* the difference between a Pareto improvement and a Kaldor-Hicks one (the latter describes changes that raise *net* welfare). I get that part of the argument is that axioms I and II are sufficient conditions for one world to be better than another, but the way axiom II is written here, it's baking in a Pareto-type condition, similar to the one explicitly stated in axiom I, and so functionally axiom I is both necessary and sufficient. And if axiom I is necessary, then you can't get from A+ to Z.
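
To make that distinction concrete, here is a minimal sketch (Python, with invented utility numbers standing in for a fixed A+ population equalized slightly above its average):

```python
# Invented utilities: two well-off people and two barely-worth-living ones.
a_plus = [100, 100, 1, 1]   # total 202, average 50.5
z_like = [52, 52, 52, 52]   # total 208, average 52, but the well-off lose

def is_pareto_improvement(old, new):
    """No one is worse off, and at least one person is strictly better off."""
    pairs = list(zip(old, new))
    return all(n >= o for o, n in pairs) and any(n > o for o, n in pairs)

def is_kaldor_hicks_improvement(old, new):
    """Net (total) welfare rises, regardless of who gains or loses."""
    return sum(new) > sum(old)

print(is_pareto_improvement(a_plus, z_like))        # False
print(is_kaldor_hicks_improvement(a_plus, z_like))  # True
```

The move passes the Kaldor-Hicks test but flunks the Pareto one, which is exactly why axiom II as written can't license it.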


Agreed - axiom 2 as stated seems true; as used in the argument it seems false, or at least not obviously true.


Also agreed. Axiom 2 seems true, but it does not apply to the move from A+ to Z.


I admire your brevity. My objection is too chatty and scattered to be convincing.

Perhaps I should boil it down to this: In theory, axiom 2 requires god-like knowledge, impartiality, and omnipotence in order to know what to do and be able to do it. In practice, it is more likely to provide a fig leaf for ignorance, bias, and abuse of power.


It's easy with these arguments to conflate equalization of utility with equalization of wealth, but that's a major mistake. A lot of what makes having more wealth than others desirable are zero-sum factors like "attract more mates" and "easily get others to defer to me". Raising real wealth to the level of the wealthiest individual would decrease these sorts of utils for them; raising utils to the level of the highest-util individual, however, would take them into account.

Comment deleted
Jan 31, 2023 · edited Jan 31, 2023

You seem to have an implicit premise there that "would be morally best to do" is equivalent to "should be expected to do". I don't hold that premise, so I have no trouble believing that casting your ritual would be (ever so slightly) more moral than not, while also believing it would be crazy to think anyone would cast it.
