Why I Am Not a Utilitarian
Utilitarians think you should always perform the action that produces the greatest total amount of benefit in the world. Usually, “benefit” is understood either in terms of pleasure/enjoyment (and the absence of pain) or in terms of desire-satisfaction (and the absence of frustration). This sounds like a very nice view, and it is held by some nice and smart people (a surprising number, in fact, given the objections below).
What is wrong with this view?
1. Utilitarianism is counter-intuitive
When you first hear it, it sounds intuitive: How to decide what to do? Well, do the best thing. What’s best? The thing that produces the most benefit. What benefits us? Pleasure and/or desire-satisfaction.
But when you think about it more, it no longer seems so simple. Some famous examples in ethics:
a. Organ harvesting
Say you’re a surgeon. You have 5 patients who need organ transplants, plus 1 healthy patient who is compatible with the other 5. Should you murder the healthy patient so you can distribute his organs, thus saving 5 lives?
b. Framing the innocent
You’re the sheriff in a town where people are upset about a recent crime. If no one is punished, there will be riots. You can’t find the real criminal. Should you frame an innocent person, causing him to be unjustly punished, thus preventing the greater harm that would be caused by the riots?
c. Deathbed promise
On his deathbed, your best friend (who didn’t make a will) got you to promise that you would make sure his fortune went to his son. You can fulfill the promise by telling government officials that this was his dying wish. Should you instead lie and say that his dying wish was for his fortune to go to charity, since this will do more good?
d. Sports match
A sports match is being televised to a very large number of people. You’ve discovered that a person has somehow gotten caught in some machine used for broadcasting, which is torturing him. Releasing him requires interrupting the broadcast, which will decrease the entertainment of a very large number of people (by stipulation, a greater loss of pleasure than the victim’s suffering), thus overall decreasing the total pleasure in the universe. Should you leave the person there until the match is over?
e. Cookie
You have a tasty cookie that will produce harmless pleasure with no other effects. You can give it to either serial killer Ted Bundy or the saintly Mother Teresa. Bundy enjoys cookies slightly more than Teresa does. Should you therefore give it to Bundy?
f. Sadistic pleasure
There is a large number of Nazis who would enjoy seeing an innocent Jewish person tortured – so many that their total pleasure would be greater than the victim’s suffering. Should you torture an innocent Jewish person so you can give pleasure to all these Nazis?
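To make the utilitarian arithmetic explicit (the symbols here are mine, not part of the original example): suppose each of $n$ Nazis would gain pleasure $p$ from the spectacle, while the victim suffers harm $s$. Utilitarianism then endorses the torture whenever

$$n \cdot p > s,$$

and the case simply stipulates that $n$ is large enough for this inequality to hold. Since $s$ is finite, some sufficiently large $n$ always will be.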
g. The Professor and the Serial Killer
Consider two people, A and B. A is a professor who gives away 50% of his modest income to charity each year, thereby saving several lives a year. However, A is highly intelligent and could have chosen to be a rich lawyer (assume this would not have required him to do anything very bad), in which case he could have donated an additional $100,000 to highly effective charities each year. According to GiveWell, this would save about another 50 lives a year.
B, on the other hand, is an incompetent, poor janitor who could not have earned any more money than he is earning. Due to his incompetence, he could not have given any more money to charity than he is giving. Also, B is a serial murderer who kills around 20 people every year for fun.
Which person is morally worse? According to utilitarianism, A is behaving vastly worse than B: since only outcomes matter on that view, failing to save lives is just as wrong as actively killing, and B kills only 20 people each year, while A fails to save 50.
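To spell out the arithmetic behind this verdict, using only the figures stipulated above: the GiveWell estimate implies a cost of roughly $100,000 / 50 = $2,000 per life saved, and the utilitarian’s yearly comparison is

$$\underbrace{50}_{\text{lives A fails to save}} > \underbrace{20}_{\text{lives B kills}},$$

so, on a view that counts letting die the same as killing, A comes out more than twice as bad as B.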
h. Excess altruism
John has a tasty cookie, which he can either eat or give to Sue. John knows that he likes cookies slightly more than Sue does, so he would get slightly more pleasure out of it. Nevertheless, he altruistically gives the cookie to Sue. According to utilitarianism, this is immoral.
2. The Utilitarian’s Dilemma
A common reaction among utilitarians is to “bite the bullet” on each of these examples, i.e., to embrace the counterintuitive consequences. Why isn’t this a good response?
It’s not good because utilitarianism, like all ethical theories, rests on ethical intuitions. The utilitarian faces a dilemma:
a) If you don’t accept ethical intuition as a source of justified belief, then you have no reason for thinking that enjoyment is better than suffering, that satisfying desires is better than frustrating them, that we should produce more good rather than less, or that we should care about anyone other than ourselves.
b) If you do accept ethical intuition, then at least prima facie, you should accept each of the above examples as counter-examples to utilitarianism. Since there are so many counter-examples, and the intuitions about these examples are strong and widespread, it’s hard to see how utilitarianism could be justified overall.
So either way, you shouldn’t believe utilitarianism.
Aside: Suppose you think there is some other way of gaining ethical knowledge. E.g., you endorse an ethical naturalist view on which ethical theories can be justified like scientific theories. Or you’re a cultural relativist who thinks we just need to observe the social conventions. Or you embrace one of the other confused derivations of ‘ought’ from ‘is’ that are out there. The problem is that all of these approaches, if they work at all, support values other than simple utilitarianism. (See Ethical Intuitionism, https://www.amazon.com/gp/product/0230573746/, for what’s wrong with these approaches.)
3. A third lemma
The only way out is to argue that not all intuitions are equal: some are probative while others are not. The utilitarian needs to explain why the ethical intuition that we morally ought to care about others counts, but the intuitions about the examples in section 1 above don’t count.
Note: Why did I say “don’t count” rather than “count for less”? For two reasons:
i) There are so many strong and widespread intuitions that conflict with utilitarianism that if they even count for a little, you should probably reject utilitarianism overall.
ii) There is an asymmetry between utilitarianism and other views: utilitarians think that enjoyment is good and that we have a moral reason to promote the good (for any being that has interests). All other moral views agree with this much. The difference is that utilitarians think that is the only moral reason we have, whereas other views recognize additional morally relevant considerations. Hence, to arrive at utilitarianism, you must first embrace the intuitions common to all moral theories, then reject every other apparent moral reason. Note the asymmetry: non-utilitarians do not have to reject the utilitarian’s moral reasons. This is why it is the utilitarian who must reject all but a very few intuitions.
So how might one justify this?
a. Maybe general, abstract intuitions are better than concrete intuitions about particular cases.
Problem: It’s not obvious that utilitarian intuitions are any more abstract or general than non-utilitarian intuitions. E.g., imagine a case of a very selfish person causing harm to others, and you’ll get the intuition that this is wrong. Or consider the Shallow Pond example, or the Trolley Problem. It’s about equally plausible to say that core utilitarian claims rest on intuitions about particular cases like these as it is to say the same of deontology.
You can also represent deontology as resting on abstract, general intuitions, e.g., that individuals have rights, that we have a duty to keep promises, etc. It’s about equally plausible to say deontology rests on general intuitions like these as to say the same of utilitarianism.
b. Maybe non-utilitarian intuitions are approximations to utilitarian results in normal circumstances.
I’ve heard something like this suggestion. I guess (?) the idea is that maybe on some deep level, we’re really utilitarians, and we have the intuitions cited in section 1 because those sorts of intuitions usually result in maximizing utility, in normal circumstances (e.g., usually killing healthy patients lowers total utility). We just get confused when someone describes a weird case in which the thing that usually lowers utility would raise it.
Responses:
i) Why is it more plausible to say we are subconscious utilitarians who easily get confused than to say that we are subconscious deontologists who don’t get so easily confused?
ii) Also, why is this more plausible than the ethical egoist’s hypothesis that we are really egoists deep down, and that our altruistic intuitions result from the fact that helping other people usually, in normal circumstances, redounds to your own benefit? On that hypothesis, we just get confused when someone raises an unusual case in which the thing that would normally help you doesn’t.
c. Maybe there are specific problems with each of the above intuitions.
This is the only approach that I would accept as a reasonable defense of utilitarianism. That is, you look at each of the cases from section 1, and for each one you do one of the following:
i) show that the intuition leads to some sort of incoherence or paradox (see, e.g., https://philpapers.org/archive/HUEAPF.pdf);
ii) find specific evidence that the intuition is caused by some factor that we independently take to be unreliable at producing true beliefs (and that does not also cause standard utilitarian intuitions); or
iii) argue that the intuition is produced by some feature of the case that everyone agrees is morally irrelevant.
So that leaves some room open for a rational utilitarianism, but it would require a lot more work, and we don’t have time to investigate that approach here. Until someone successfully carries out that rather large project, we should default to deontology.