Here, I explain the proof of the Repugnant Conclusion.*
[* Based on “In Defence of Repugnance,” Mind 117 (2008): 899-933.]
1. The “Repugnant” Conclusion
This thesis is often referred to as “the Repugnant Conclusion”:
RC For any number of extremely good lives, there is some (much larger) number of barely-worth-living lives whose existence would be better.
That is, when it comes to worthwhile lives, any decrease in quality can be made up for by a big enough increase in quantity. (Worthwhile lives are those with positive welfare, so the subjects are better off living them than not living at all.) E.g., a world containing a billion joyous souls might be worse than a world containing a quadrillion people whose lives were just barely worth living.
Many people find this counter-intuitive, and philosophers have devised many theories to attempt to avoid it. But all such theories entail even more absurd conclusions. Though many refuse to accept RC (including the person who first proved it, Derek Parfit), RC is among the few non-trivial theorems of modern ethics.
2. The Mere Addition Proof
Here is a streamlined version of the proof Derek Parfit first discovered. Start with three ethical axioms:
I. Modal Pareto Principle: Suppose you have two possible worlds, w1 and w2, where every person who exists in either w1 or w2 would rationally prefer w1 over w2 from the standpoint of self-interest. In that case, w1 is better than w2. (If you think there are non-welfare-related goods or evils, assume they are constant between the worlds.)
II. Non-anti-egalitarianism: Equalizing welfare while also adding to everyone’s welfare makes the world better. (Equality is not intrinsically bad.)
III. Transitivity: If w1 is better than w2, and w2 is better than w3, then w1 is better than w3.
Now consider three possible worlds, A, A+, and Z, described below.
World A has 1 million people with welfare level 100, which is extremely good. Now imagine adding 1 to the welfare level of everyone (bringing them up to 101) and then creating 99 million additional people, each with welfare level 1. Call the result “world A+”.
By definition, positive welfare levels make life worth living, so all the 99 million new people would, from the standpoint of rational self-interest, prefer A+ over A. The original 1 million people also prefer A+ over A. So, by axiom I, A+ is better than A.
Next, imagine starting from world A+, equalizing everyone's welfare, and then adding 1 unit of wellbeing to each person. (A+ contains 200 million units of welfare spread over 100 million people, so equalizing puts everyone at level 2, and the bonus brings them to 3.) This leaves 100 million people at welfare level 3. Call this world Z. By axiom II, Z is better than A+.
Finally, by axiom III, Z is better than A.
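If you want to check the arithmetic, here is a minimal sketch (my own illustration, not anything from Parfit's paper) that represents a world as a list of (population, welfare) groups:

```python
# A world is a list of (population, welfare_level) groups.
A      = [(1_000_000, 100)]
A_plus = [(1_000_000, 101), (99_000_000, 1)]

def population(world):
    return sum(n for n, _ in world)

def total_welfare(world):
    return sum(n * w for n, w in world)

def equalize_and_add(world, bonus=1):
    """Axiom II's transformation: spread total welfare evenly, then add a bonus."""
    n = population(world)
    return [(n, total_welfare(world) / n + bonus)]

Z = equalize_and_add(A_plus)
print(population(A), population(A_plus))  # 1000000 100000000
print(Z)  # [(100000000, 3.0)] -- 100 million people at welfare level 3
```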
And now you can see that a parallel argument can be constructed starting from any population of very happy people, winding up at a world Z whose welfare levels are only slightly above 0. So the Repugnant Conclusion is true.
Parfit’s version
I mention how Parfit's version goes, in case you find it more persuasive. In his original proof, you go through many steps. You start by adding just 1 million people, each with, say, welfare level 90. Then you equalize welfare. You're supposed to intuit that neither of these changes makes the world worse.
Next, you add 2 million people, each with welfare level 81. Then equalize. Etc.
After many stages, you get down to some very large number of people, each with some very low welfare level.
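The exact numbers don't matter, but if you'd like to see the pattern run out, here is a rough simulation (my reconstruction, assuming each new batch doubles the population and has 90% of the previous batch's welfare; Parfit's own numbers differ):

```python
# At each stage: add newcomers equal in number to the current population,
# with welfare 90% of the previous batch's, then equalize at the new average.
pop, avg, newcomer = 1_000_000, 100.0, 100.0
for stage in range(50):
    newcomer *= 0.9
    avg = (avg + newcomer) / 2  # old and new groups are equal-sized
    pop *= 2
print(f"{pop:.2e} people at welfare {avg:.2f}")  # 1.13e+21 people at welfare 0.58
```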
3. Some Alternatives
We already know that every theory that denies RC must have some counter-intuitive consequences, since it must (per the above argument) reject either the Modal Pareto Principle, Non-Anti-Egalitarianism, or Transitivity. Here are a few examples.
3.1. The Average Utility Principle
Maybe the value of a world is determined by the average welfare level in the world, not the total utility.
Some implications:
When deciding whether to have children, research in Egyptology is relevant. If, e.g., the ancient Egyptians were very numerous and very happy, then your having a child would be more likely to make the world worse.
The Sadistic Conclusion: In some cases, it would be better to create some miserable people (with negative utility), rather than to create a larger number of slightly happy people.
Example: You have 1 million people at welfare level 100. You can either add 1 million people with welfare -10 (these miserable people will wish they were dead) or add 2 million people with welfare +10 (these people will all be glad to be alive). On the Average Principle, you choose the first, since it lowers the average only to 45, while the second lowers it to 40.
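Here's a quick check of those numbers (my own sketch):

```python
def average(world):  # a world is a list of (population, welfare) groups
    return sum(n * w for n, w in world) / sum(n for n, _ in world)

base = [(1_000_000, 100)]
with_miserable = base + [(1_000_000, -10)]
with_happy     = base + [(2_000_000, 10)]
print(average(with_miserable))  # 45.0 -- the Average Principle prefers this
print(average(with_happy))      # 40.0
```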
Btw, you can show that the Sadistic Conclusion follows as long as you assign any weight at all to average utility.
3.2. Critical Level Theories
Maybe there is some threshold welfare level (strictly greater than 0) at which lives first start contributing to the value of the world. E.g., perhaps only lives with welfare >10 make the world better.
To accommodate the fact that it’s worse to create a life with welfare level 5 than one with welfare 10, we would have to say that lives below the threshold subtract from the value of the world. This leads to:
The Strong Sadistic Conclusion: For any world full of horribly tormented souls, that world is better than some (much larger) world filled with people whose lives are just slightly worth living.
The idea is that, since lives below the threshold subtract from the world’s value, if you just add enough of them, you get more negative value than, say, having a million horribly tortured people with welfare -100. This conclusion is similar to the RC, only much more ridiculous. With the RC, you start by imagining something very good, and then allegedly you can create something better by having a very large quantity of slightly good things. With the Strong Sadistic Conclusion, however, you start by imagining something terrible, then allegedly you can create something worse by having a very large quantity of slightly good things.
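A toy calculation makes this vivid. Assume a critical level of 10 and the simplest version of the theory, where a world's value sums each life's welfare minus the threshold (the summing rule is my assumption, for illustration):

```python
CRITICAL_LEVEL = 10  # assumed threshold below which lives count negatively

def world_value(world):  # a world is a list of (population, welfare) groups
    return sum(n * (w - CRITICAL_LEVEL) for n, w in world)

tormented           = [(1_000_000, -100)]  # a million horribly tortured people
barely_worth_living = [(20_000_000, 3)]    # twenty million slightly happy people
print(world_value(tormented))            # -110000000
print(world_value(barely_worth_living))  # -140000000 -- judged even worse!
```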
3.3. The Person-Affecting Principle
Some people would say that world A is better than Z because A is better for the 1 million people who exist in world A, and it isn’t worse for anyone. It’s not worse for the other 99 million people who exist in Z, because they have no welfare level at all in world A, and therefore A can’t be either better or worse than anything for them.
Notice that this implies that if you could slightly improve one person’s life, and at the same time create 100 trillion new lives that would consist of nothing but pure agony, that would be an improvement. It would be better for the one person, and it wouldn’t be worse for the 100 trillion new people, because they have no welfare level at all in the situation in which they don’t exist, etc.
3.4. Diminishing Marginal Value of Utility
Maybe welfare has diminishing marginal value: The more happy lives there are, the less moral value an additional happy life has.
Note first that there is no obvious theoretical rationale for this. The reason why most things (e.g., money, orange juice, friends, etc.) have diminishing marginal value is that most things contribute less marginal utility (welfare) to your life the more of them you already have. But it’s not the case that utility contributes less marginal utility the more of it you have. The marginal utility of utility is always 1 by definition.
Anyway, this sort of theory entails, again, that Egyptology is relevant to your decision whether to have children. If the ancient Egyptians were very numerous and happy, then you have less reason to have children.
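Here's a minimal sketch of the Egyptology problem, assuming a square-root value function purely for illustration (nothing in such theories fixes any particular curve):

```python
import math

def world_value(total_welfare):
    # An assumed concave function: welfare has diminishing marginal value.
    return math.sqrt(total_welfare)

child = 50  # welfare of a prospective child
for past_total in (1_000, 1_000_000_000):  # how much welfare history contains
    gain = world_value(past_total + child) - world_value(past_total)
    print(past_total, gain)
# The moral gain from the same child is ~0.78 against a modest past,
# but only ~0.0008 if the ancient Egyptians were numerous and happy.
```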
Most theories of this kind also lead to the Sadistic Conclusion, just like the Average Utility Principle.
3.5. Perfectionism
Parfit proposes that there are some goods in life that are lexically superior to (or infinitely better than) others. E.g., the joy of listening to Mozart's music for some stretch of time is superior to any amount of mild pleasures. If you imagine world A containing some of these “best things in life”, and world Z containing only lesser goods, then this explains why A is better than Z, and A continues to be better no matter how big the population of Z.
But we needn’t imagine things this way. Maybe the people in world Z still have the best things in life, but these are almost but not quite counterbalanced by some bads, and that’s how their welfare levels come out to only +3.
Anyway, the view posits a highly implausible threshold effect. Parfit supposes that there can be some experience that is only slightly less enjoyable than listening to Mozart — say, listening to Haydn — and yet it is infinitely less valuable to the world. There’s no reason why this should be true. (You can avoid the implausible threshold by rejecting transitivity; more on that below.)
The view is also crazily anti-egalitarian. E.g., enabling one person to hear Mozart for an hour would be more important than providing food, shelter, and basic medical care to millions.
3.6. Intransitivity
Lastly, we could avoid the RC by denying transitivity of “better than” (per Stuart Rachels). Here are two arguments for transitivity.
First, the Money Pump Argument. Suppose you have intransitive preferences; you prefer A over B over C over A. You presently have C. I have A and B. I offer you the chance to trade C plus a penny for B. Since you prefer B over C, you accept. Then I offer to let you trade B plus a penny for A. Again, you accept. Then I let you trade A plus a penny for C. You accept. Now you’re back to what you started with, only with less money. And the cycle repeats.
This is supposed to show that rational preferences must be transitive. From there, you’re supposed to infer that better-than is transitive.
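Here's the cycle as a toy simulation (my own illustration):

```python
# Intransitive preferences: A over B, B over C, C over A.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

holding, money = "C", 0
for offer in ["B", "A", "C"]:        # each trade costs a penny
    if (offer, holding) in prefers:  # you accept any trade up to a preferred item
        holding, money = offer, money - 1
print(holding, money)  # C -3 -- back where you started, three pennies poorer
```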
Second, the Dominance Argument. Here’s another ethical axiom:
The Dominance Principle: If x1 is better than y1, and x2 is better than y2, then the combination (x1+x2) is better than the combination (y1+y2).
Now imagine there are three goods, A, B, and C, where A is better than B, which is better than C, which is better than A. By Dominance (applied twice), A+B+C > B+C+A, because A is better than B, B is better than C, and C is better than A. However, (A+B+C) is identical to (B+C+A), so this can’t be. Hence, it’s impossible that A>B>C>A.
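Spelling out the two applications (my rendering of the step the argument compresses):

```latex
\begin{align*}
A > B,\; B > C
  &\;\Rightarrow\; A + B > B + C
  && \text{(Dominance, first application)} \\
(A + B) > (B + C),\; C > A
  &\;\Rightarrow\; (A + B) + C > (B + C) + A
  && \text{(Dominance, second application)}
\end{align*}
% But (A+B)+C and (B+C)+A are the same combination of goods,
% so we would have X > X: a contradiction.
```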
4. Where Intuition Goes Wrong
It’s pretty obvious where our intuition goes wrong. When people hear about worlds A and Z, they consult their emotional reactions to decide which one is better. Since they have a stronger positive emotional reaction to A than Z, they conclude that A is better. If they’re trained in philosophy, they start making up convoluted, ad hoc theories to rationalize it.
And why is our emotional reaction to A stronger than Z? Partly because we imagine ourselves living in A, and we imagine being happier there. And also partly because human beings can’t intuitively grasp large numbers.
This is illustrated by a joke I once heard. A scientist giving a lecture mentions that the Earth is going to be swallowed by the sun in 8 billion years. A man in the audience becomes very agitated at this news. “Oh my god! What are we going to do?” The speaker says, “Don’t worry, it won’t happen for another 8 billion years.” “Whew,” the man replies with relief. “I thought you said 8 million years!”
Back in ancient times, when I was an intern at Cato, an esteemed scholar argued against utilitarianism by saying: "It tells us we should allow the Holocaust to happen for a net benefit of a dime!" Tactless whippersnapper me immediately said something like: "Come on, that's just a verbal trick that hopes we ignore the meaning of 'net'. It expects us to feel the huge wrongness of preferring a dime (gross) over stopping the Holocaust, when what we're really talking about is choosing between two Holocausts. Why wouldn't we choose the one that's slightly less bad?"
For similar reasons, I distrust people's intuitions (even my own!) about "lives barely worth living". I think we use that expression and similar ones in natural language to describe lives that actually aren't worth living. (But we don't say that, because it's taboo.)
I'm far from an antinatalist, but I think Benatar is partially correct, to the extent that there are many more net-negative lives than we ordinarily admit. To put it another way: if we were to observe a bunch of lives that are *actually* barely worth living, they wouldn't strike our intuitions as repugnant at all.
To counter the objection to the alternative in 3.3, I'd simply invoke the asymmetry between pleasure and pain (the weak version, not the one that commits you to antinatalism).
Intuitively, the absence of pain is good even when that absence means the being never existed. But the absence of pleasure is not bad when it means the being never existed, because then there is no one who suffers from not being brought into existence. It's not a wrong committed against anyone, because there isn't anyone.
To make it more concrete: one would see it as a moral wrong to bring into existence someone who will lead a miserable life, in a way one wouldn't see it as wrong to refrain from bringing into existence someone who would lead an amazing life.
So, A is better than Z because it's better for everyone in A. As for the additional people not present in A, not existing is not a moral wrong relative to happily existing.
But in the objection's world Z*, where the people from A have slightly better lives and a high enough number of additional people lead torturous lives, Z* isn't preferable to A, because with regard to pain, not existing IS a moral good relative to miserably existing.