Here, I explain how ethical intuitionism could lead to revisionary ethical views.
[*Based on: “Revisionary Intuitionism,” Social Philosophy & Policy 25 (2008): 368-92.]
1. Traditional Intuitionism
The basic ideas of ethical intuitionism:
There are objective evaluative truths.
E.g., it’s wrong to torture other beings just for fun. This is wrong regardless of our attitudes toward it, i.e., whether or not we approve of it, desire it, etc.
Our knowledge of them derives from ‘ethical intuitions’.
My view of intuitions: Intuitions are initial, intellectual appearances. I.e., mental states of something seeming to be a certain way as a result of intellectual reflection, but not as the conclusion of an argument. Ethical intuitions are simply intuitions that are about good, bad, right, or wrong.
Intuitions are related to ethical truths in roughly the way that sensory experiences are related to physical facts: the physical facts exist independent of our sensory experiences, but our sensory experiences tend to correspond with the physical facts and are our way of knowing about the latter.
Traditionally, intuitionism is associated with conventional ethical views; see W.D. Ross & H.A. Prichard. E.g., you should generally keep your promises, avoid lying, avoid harming others, help others when you can do so at modest cost, show gratitude for good deeds by others, give people what they deserve, etc. We don’t have to show that all these things promote utility, are implied in the notion of rational action, etc. It’s enough to say that all these things seem prima facie right, until we have specific reasons for doubting them.
2. Skeptical Arguments
All intuitionists nevertheless recognize the possibility of defeaters, i.e., information that casts doubt on the initial appearances. Here are ways that skeptics try to cast doubt on our ethical intuitions.
2.1. Conflicting Intuitions
Ethicists have found many conflicts, or at least tensions, in our ethical intuitions. E.g., most people intuit:
a. It’s seriously wrong to refuse to save a child drowning in a shallow pond (where you can save the child at little cost to yourself).
b. It’s not seriously wrong to fail to donate to poverty-relief charities (even though you could also save people at little cost to yourself).
c. The conspicuousness of a person’s misfortune is ethically irrelevant.
Philosophers such as Peter Singer and Peter Unger argue that the only plausible explanation for our differing intuitions about the Shallow Pond and Charity cases is that the drowning child is more “conspicuous” (because he’s closer to you, more visible, more directly affected by you). If you buy that, there is a tension among intuitions (a), (b), and (c).
(How do they make that argument? Basically by looking for other accounts of the difference between the cases and failing to find any plausible ones.)
If this sort of thing happens often, then we must conclude that ethical intuitions are unreliable.
2.2. Cultural Biases
People’s ethical intuitions appear to vary greatly depending on their culture. E.g., some traditional religious cultures (esp. Islamic societies and Christian societies until recently) had very negative views about homosexuality. Yet both our current society and the ancient Greeks accepted it (Plato even argues that homosexual love is better than heterosexual love). Many societies have practiced slavery, which our current society finds obviously unjust. Some societies, e.g., the ancient Romans, practiced infanticide of disabled infants, which we view as barbaric.
These observations about practices do not directly entail anything about intuitions, but it’s still pretty plausible that people in these different societies would have some pretty different ethical intuitions.
Again, if this happens a lot, then we should conclude that ethical intuitions are unreliable.
2.3. Biology
Some of our ethical intuitions (which are otherwise puzzling) seem to be well-explained by evolutionary psychology. E.g., we intuit that parents have special obligations to their own children, which they don’t have to other children or other people; that sexual promiscuity is good for a male but bad for a female; that we are far more important than other species; that incest is intrinsically wrong.
These judgments are all puzzling from the standpoint of the abstract ethical theories that philosophers tend to develop, but they can all be explained as adaptations that help promote our reproductive fitness.
You might think: “So what? Our eyes evolved by natural selection, but we don’t conclude that they are unreliable. If our moral sense evolved by natural selection, why would we conclude that it is unreliable?”
The answer is that the explanation for how our eyes contribute to our fitness adverts to the physical facts corresponding to our visual experiences; but the explanation for how our moral intuitions contribute to our fitness works independently of any moral facts corresponding to those intuitions.
E.g., when you explain why it’s adaptive to have visual experiences of tigers, that explanation will essentially rely on the fact that there are usually tigers around when you have those experiences (if there weren’t, then the experiences wouldn’t be adaptive). But when you explain why it’s adaptive to intuit that you’re obligated to feed your children, that explanation does not refer to the actual existence of the moral obligation; it only refers to the fact that feeding your children will result in more copies of your genes making it to the next generation.
This leads to the thought that you would have many of your ethical intuitions regardless of whether moral duties really existed. And this is said to be a defeater for those intuitions.
2.4. Personal Biases
People are often biased in their moral judgments by their own interests, personal feelings, etc. It’s plausible that these biases could alter your ethical intuitions, again making them unreliable.
3. Escaping Skepticism
The skeptics have way too broad a brush: they talk in terms of the broad category “intuitions”, without distinguishing different kinds of intuitions. This is sort of like the epistemological skeptic who uses the broad category “beliefs”, finds lots of examples of false beliefs, then concludes that you can’t trust “beliefs” in general … therefore, don’t believe anything. This skeptic lumps together, e.g., beliefs about space aliens with beliefs about chairs.
The obvious lesson from 2.1-2.4 above is that we should distinguish more and less reliable intuitions. E.g., the intuition that we’re specially obligated to help our offspring is open to suspicion as a biologically programmed bias; if you feel very emotional about abortion, then your judgment about that subject is likely unreliable; and if you think your culture’s practices are better than those of all other cultures, that’s also likely a bias. It doesn’t follow from any of that that the intuition that it’s wrong to torture people for fun is open to suspicion.
The two key points the skeptic overlooked:
a. The grounds for doubt listed in section 2 above apply to some intuitions a lot more than others.
b. Coherence is a source of justification. The biased intuitions are likely to fail to cohere with each other, or with the unbiased intuitions, yet our unbiased intuitions (if such there be) are likely to cohere with each other.
If a subset of our intuitions tends to cohere, and we lack grounds for doubting those specific intuitions, then those intuitions are probably by and large correct.
4. Some Really Good Intuitions
Now I’m going to point to some ethical intuitions that look especially reliable. They are formal ethical intuitions. E.g.:
If x is better than y and y is better than z, then x is better than z. (Transitivity)
If x and y are descriptively identical in all relevant respects, then they are evaluatively identical. (Supervenience)
If it is permissible to do x and permissible to do y given that one does x, then it is permissible to do (x and y).
If it is wrong to do x and wrong to do y, then it is wrong to do (x and y).
Etc. These principles don’t by themselves entail an evaluation of any particular thing, but they place constraints on acceptable evaluative theories. In case you’re thinking these are uninteresting, I will note that they play key roles in some arguments for very controversial ethical conclusions (e.g., that equality has no intrinsic value, that orthodox deontological rights theories are false, or that the Repugnant Conclusion is true).
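For readers who like symbols, here is one way to render the four principles formally. (The symbolization is mine, not from the original paper: read “$>$” as “is better than”, $D(x)$ as $x$’s relevant descriptive properties, $V(x)$ as $x$’s evaluative properties, $P$ as “is permissible”, and $W$ as “is wrong”.)

```latex
% Formal ethical principles, symbolized.
% Notation: > = "is better than"; D(x) = descriptive properties of x;
% V(x) = evaluative properties of x; P = permissible; W = wrong.
\begin{align*}
&\text{Transitivity:}                 && (x > y \;\wedge\; y > z) \rightarrow x > z \\
&\text{Supervenience:}                && D(x) = D(y) \rightarrow V(x) = V(y) \\
&\text{Agglomeration (permission):}   && \bigl(P(x) \wedge P(y \mid x)\bigr) \rightarrow P(x \wedge y) \\
&\text{Agglomeration (wrongness):}    && \bigl(W(x) \wedge W(y)\bigr) \rightarrow W(x \wedge y)
\end{align*}
```

Note that in the third principle, $P(y \mid x)$ abbreviates “$y$ is permissible given that one does $x$”; this conditional-permission clause is what keeps the principle from being trivially falsified by pairs of individually permissible but jointly impermissible acts.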
Needless to say (?), I’m not saying these are the only reliable ethical intuitions. They are, however, examples of especially reliable ethical intuitions. It’s totally implausible to say that they are culturally or biologically programmed biases, or biases created by self-interest or my personal emotions. These formal intuitions also cohere well with each other. So the arguments of section 2 cast no doubt on them. So they can reasonably be used to critique controversial ethical theories.
5. Revisionary Intuitionism, at last
Most intuitionists support common sense morality. But not all; Henry Sidgwick was a utilitarian. (Background assumption: Utilitarianism is a highly revisionary moral view.) You might initially find that odd. But after reflecting on all of the above, intuitionism is actually the most natural view for a utilitarian. It doesn’t inevitably lead to utilitarianism, of course, but it is more friendly to utilitarianism than any other meta-ethical view. Think about the other meta-ethical theories:
Cultural relativists should embrace the ethical norms of their own societies (as being “true for them”), which are generally going to be deontological, and certainly not utilitarian in any actual human society.
Subjectivists could theoretically hold any ethical views, but the overwhelming majority of people are going to have natural ethical attitudes that are pretty non-utilitarian, and they would have no reason to revise them. In the example in sec. 2.1 above, a subjectivist or relativist should say that (c) is obviously false: on their view, conspicuousness obviously is morally relevant, since conspicuousness affects our moral attitudes, and those attitudes just constitute the moral facts.
Expressivists have a similar situation to the subjectivists. As long as their moral attitudes are like those of nearly all normal people, it’ll be inappropriate (& insincere) for them to “assert” utilitarianism, since that doesn’t correspond to the moral emotions that they actually have.
Error theorists will hold that utilitarianism is false, like all ethical theories.
Ethical naturalists might be utilitarians, but this is mostly because they could hold any ethical views, i.e., their view predicts nothing. (See my “Naturalism and the Problem of Moral Knowledge,” Southern Journal of Philosophy 38 (2000): 575-97.)
How would utilitarian intuitionism work? Basically, the utilitarian would say
a. Some intuitions are much more reliable than others, esp. the sort of intuitions mentioned in sec. 4 above.
b. All of these most reliable intuitions cohere with utilitarianism. (Notice how that’s true of all the ones I listed. And the list could be extended.)
c. The leading alternative ethical theories all clash with one or more of these highly reliable intuitions.
d. The leading alternative ethical theories also directly fall under suspicion from the skeptical arguments of sec. 2 above, in a way that utilitarian intuitions do not. I.e., there are explanations of how those alternatives would be produced by cultural biases, evolution, etc., which do not also apply to utilitarianism.
That’s the most reasonable case for utilitarianism. (It’s much better than just saying “intuitions are bad” and biting the bullet on every objection, as some utilitarians do.) On some days, I feel sympathetic to that (before I remember things like the organ harvesting doctor).
Really excellent article--revisionary intuitionism should be adopted by more people. I've become very irritated by utilitarians treating intuitions as not probative and just biting the bullet on unintuitive counterexamples. If anyone is interested, here is my defense of biting the bullet on the 10 counterexamples that Michael provides in his criticism of utilitarianism. https://benthams.substack.com/p/all-my-writings-on-utilitarianism
As for the organ harvesting doctor specifically, here's my defense of harvesting their organs. https://benthams.substack.com/p/opening-statement-for-the-organ-harvesting
Richard has a great article about this, showing that the organ harvesting example picks out various morally non-salient features, and a more fair test of our intuitions ends up being much less morally clear. https://rychappell.substack.com/p/ethically-alien-thought-experiments?utm_source=%2Fprofile%2F32790987-richard-y-chappell&utm_medium=reader2
I think Savulescu has an even better version of this. http://blog.practicalethics.ox.ac.uk/2013/10/winchester-lectures-kamms-trolleyology-and-is-there-a-morally-relevant-difference-between-killing-and-letting-die/
Our starting intuitions are clearly not utilitarian--at least in many cases. Utilitarians must do the hard work of revising them.
Utilitarianism has always felt to me like an appeal to a certain kind of ethical intuition - "more good things is good, less bad things is also good" is not particularly controversial, and the rest of the philosophy is just arguing about how to consistently apply that. It mostly struggles when it conflicts with intuitions, but I would say that in the majority of situations the utilitarian argument makes intuitive sense, even if it is likely to be unappealing in practice just because it obligates a lot of self-sacrifice (not something unique to utilitarianism though!).