42 Comments
Wallet:

I'm in agreement that global debunking arguments aren't particularly strong, but they don't have to be in order to debunk moral realism, IMO. They just have to be stronger than the arguments for moral realism, and I don't think those arguments are all that strong.

So, for example, what is the moral realists' theory of how we came to have moral views that are accurate to the objective facts? It's something like: we developed the ability to reason, and that ability somehow gets us moral intuitions (perhaps in the same way it gets us mathematical intuitions). We then weigh these intuitions against each other (because they often seem to conflict) in order to figure out the moral facts.

My portrayal of the theory above is meant to highlight its two big problems. First, how does the ability to reason get us accurate moral intuitions? I get why it gets us accurate mathematical intuitions: we can see at a glance that the shortest path between two points is a straight line just by considering the proposition on its own, because our minds are quick at reasoning about particular things and the proposition makes sense by itself, which is also why we can double-check our immediate reasoning afterwards.

Yet, moral intuitions aren't like that. For all the fundamental moral intuitions, you can't just double-check whether they are true by reasoning about them. Instead, they just seem true, irrespective of reasoning. They seem more like perceptions of the world (e.g. my pillow is navy blue) in that regard. I can't reason my way to "my pillow is blue"; I can just look and see that it is blue. Yet, how is it possible to just look and see that "stealing is wrong"? There's no apparent mechanism even when we try looking for one (there is no moral equivalent of photons or photoreceptor cells). This should undermine our trust in moral intuitions to a large extent.

Second, if intuitions are just a result of reasoning, why do they conflict so often? Why do we have to weigh them against one another? This doesn't seem to be true of mathematical intuitions (except in the sort of fringe cases mathematicians debate about, maybe). There's a strong case, even if you are a moral intuitionist and moral realist, that most of your moral intuitions are mistaken (i.e. that moral intuitions are wrong more often than they are right). It seems like our moral intuitions are not mostly a product of reasoning (even of the immediate sort of reasoning I discussed with mathematical intuitions above), but must be distorted away from the truth by other factors.

Presumably, the moral realist will say this is because of particular debunking factors: our selfishness debunks this intuition, our shortsightedness debunks this one, and so on. Yet, this is just to admit two things. First, it admits that many (if not most, as I suggested above) of our moral intuitions are the product of biases, not reasoning. Yet, if so, then there's not as large a step from "many/most of our moral intuitions are debunked" to "all of our moral intuitions are debunked" as there would be if the vast majority of our intuitions were trustworthy (i.e. you need much less justification to make the jump now). Second, by admitting that so many of our intuitions are not a product of reasoning but instead a result of debunking factors, it gives you a strong inductive reason to think they all are debunked (and so gives you some justification for making the jump).

These two problems alone don't defeat the moral realists' theory of how moral intuitions can (sometimes) come to reflect objective moral facts (i.e. they don't actually justify jumping to the conclusion that all moral intuitions are debunked), but they make the moral realists' theory pretty weak and thus make it much easier for even a relatively weak alternative (e.g. the universal debunking argument in this post) to come along and defeat it. The alternative just has to be slightly better than the realist option.

Disclaimer: I know that moral realists present other important arguments for their views, but I do think that the ones based on moral intuitions are the strongest (e.g. the Moorean argument), and so the problems above will (I think) be pretty big problems for all of the best arguments for moral realism. That's because they provide reasons to think our moral intuitions specifically are untrustworthy.

David Pinsof:

I think this is generally a straw man attack on adaptationist theories of morality. For actual adaptationist theories of morality, I’d recommend checking out Oliver Curry’s work on morality as cooperation (he might actually agree with you on moral realism), Baumard’s work on mutualistic morality, DeScioli and Kurzban’s work on dynamic coordination theory, Pat Barclay’s work on social markets (which explains our judginess and virtue signaling), and my own paper “The Evolution of Social Paradoxes.” Also, even if you don’t end up buying any of these approaches (or any combination of them), what’s your alternative? That we magically intuit correct moral truths just because? At least adaptationists are trying to come up with a good theory. You don’t even have a theory. Obviously morality had to come from evolution in some way, whether biological evolution or cultural evolution or some combination. And these evolutionary processes aren’t designed to track moral truth. So what’s your explanation for how some people’s moral intuitions (I’m assuming you mean “your own”) happened to converge on the moral truth? Where did this moral truth faculty come from, if not from evolution? From God? I’m actually not even a moral antirealist, but I do think you’re being overly dismissive of their arguments.

Vladimir Vilimaitis:

How do you reconcile your belief in adaptationist theory of morality with moral realism?

David Pinsof:

Briefly, I think of moral truth as whatever objectively activates our moral emotions under normal conditions. When we say x is morally wrong, what we mean is: x objectively fits the input criteria for anger, contempt, disgust, etc., such that anyone who doesn’t feel anger, contempt, or disgust about x is crazy, sociopathic, ignorant about x, or biased in some way (e.g. by self-interest or tribalism). Since our moral emotions are likely biased by all sorts of things, many of our moral judgments are likely to be false. But they at least *can* be true.

Ṣalāḥ ad-Dīn Yūsuf ibn Ayyūb:

Huemer does have his own theory and wrote a book about it.

DavesNotHere:

Doesn’t that sort of apply motivated reasoning to stance independence? It identifies groups that have a different stance and a different set of intuitions, but somehow continues to insist that the facts do not depend on stance?

Though I admit, the distinction between objective and subjective seems hard to apply to genetic behavioral traits that nearly all humans share. Maybe it is objective but not universal and unchanging? Or there is a good reason to think one's genes are not part of one's stance, although in many cases stance would be determined by one's genes.

Vladimir Vilimaitis:

So, a sort of ideal observer theory with an evolutionary twist?

David Pinsof:

Yea more or less. Except the moral truth is what shapes the ideal observer’s judgments rather than the other way around. It’s an objectivist version of Jesse Prinz’s thesis in his book The Emotional Construction of Morals.

DavesNotHere:

And intuitions change from time to time and place to place. If there is a moral reality they are being adapted to, they have yet to converge upon it.

Nathan Smith:

Moral realist here, so I'm not motivated to come to the defense of evolutionary reductionism of ethics. That said, I think you underestimate the evolutionary basis for altruism.

There are several evolutionary starting points for altruism.

First, kin selection. Sometimes sacrificing yourself for brothers or spouses or children or parents is the best thing you can do for your selfish genes. And since your selfish genes are stuck in the Stone Age, they might presume that anyone you regularly deal with must be some sort of relative, as might typically have been the case then.

Second, reciprocity. If you have a propensity to return favors, people are more likely to do favors for you. Gratitude. And if someone is in a particularly desperate state, any favors done for them will be disproportionately valuable, and may therefore be expected to earn disproportionate gratitude if the recipient's luck improves. Pity.

Third, demonstration of fitness. By doing favors for others, you show how prosperous you are, which might, in the Stone Age, have been very valuable in attracting friends, or mates.

I don't think evolution really predicts egoism at all. It predicts a complex array of instincts, frequently other-regarding, and those predictions are often impressively successful. Nonetheless, there's a huge residuum of morality that doesn't lend itself to evolutionary explanation at all.

Moral behavior isn't reducible to instinct. Rather, instincts are resources that the moral will can put to use. It needs to know when to turn them on and off, when to yield to or encourage them, and when to resist or override them.

DavesNotHere:

Mammals care for their young. Is caring for one's own children even a moral issue? Couldn't egoists desire to care for their own children? Does a theory of morality need to explain why people do this? It seems more like they need to explain when and why this needs to be avoided as too selfish. Only a cartoon version of egoism would demand that egoists should never care about what happens to other persons or do anything to benefit them.

SolarxPvP:

I think you're confusing a practical application of egoism with the demands of egoism more broadly. According to theoretical egoism, you really should ignore your children's interests in favor of your own - but only if you don't care about your children's interests. In reality, most people do, so it's not a practical problem.

DavesNotHere:

Egoism broadly says do what you want, and it just happens to be the case that people often want to care for their children.

DavesNotHere:

That does not sound different from what I said. Was it intended as a clarification, embellishment, or criticism?

SolarxPvP:

I remember meaning it as a criticism. Maybe I did misread you? Then again I'm having trouble understanding what your comment is saying right now anyway.

DavesNotHere:

Moralists say care for your children because it is a moral obligation. Egoists say care for your children because you want to. Cartoon egoists say never care for your children because they are not you and you should only care about yourself and never give a fig about what happens to anyone else.

Biology tends to make mammals, including humans, want to take care of their offspring. A moralist should actually explain why someone ought *not* to care for their children in some cases in spite of wanting to, not why everyone is always obligated to care for their children. To the extent that lousy parents feel guilty about being lousy parents, this probably has much less to do with believing some moral theory than it has to do with wishing that they had done better, having failed. If they genuinely do not care about their children, the moral condemnation won’t make much difference.

J. Goard:

"Do what you want" is a conception of egoism that risks descending into tautology, unless you provide some qualification of the kind of 'want' that counts. In a sense, utilitarians "want" to maximize well-being in the universe.

It faces the further problem of not remotely tracking what natural language means by "egoist" and related terms. If I tell you, "Gary only ever looks out for his own interests", you're going to get a pretty good picture of the guy, and it won't be someone who prefers to donate 80% of his income to effective charities.

On a more natural, non-tautological sense of "egoism", expending resources on one's children is very much altruism -- just a form of altruism that's extremely easy to explain.

DavesNotHere:

Perhaps it is a tautology (although I disagree about utilitarians always wanting to maximize, except in a very abstract sense that does not guide action.) But the alternative lends itself to equivocation and false dichotomies. Arguments premised on the idea that one should only care about oneself are uninteresting, since they apply to so few persons. Using the negation of that as a premise adds nothing to an argument.

J. Goard:

On the contrary, while few people indeed would claim so as an explicit ethical stance, nearly all of us know at least a couple of people whose actions are almost entirely motivated by fairly direct benefits to their personal well-being -- and these are the people typically called "selfish" in natural language. This explains why many philosophers have suggested that egoism isn't even an ethical theory, but rather a rejection of ethics tout court.

DavesNotHere:

I must lead a sheltered life. That sounds wrong to me, unless the “almost” is carrying more weight than it ought to, or a “personal benefit” can include making the cat purr.

J. Goard:

I'm not claiming it's the human norm, but I've known thousands of people, and a couple dozen have been "extremely selfish" in what I think is common language. And that common language certainly does not mean "doing things that they prefer to do".

DavesNotHere:

One can be extremely selfish and still care about some others, and take those others' wishes into account. We are discussing egoism, not extreme psychopathy.

Common language does not treat selfishness as doing things one doesn’t want to do, or doing things on a basis that takes what one wants as irrelevant. Perhaps selfishness means doing what one wants with very little regard for what others want.

Robi Rahman:

Your argument in sections 1 and 2 gets most of the predictions of the adaptationist view of morality wrong.

DavesNotHere:

“it would be a lucky coincidence if the moral beliefs that promoted reproductive fitness just happened to be the objectively correct beliefs.”

My problem is even understanding what it would mean for such beliefs to be objectively correct. I can understand something that is good for me, or for you, or for us, maybe for everybody. But just good simpliciter, without reference to anyone? Perhaps it means there is a reason for finding it good, which anyone that understands reasons would accept? But psychopaths sometimes can understand reasons, and reject anything relevant here. Or they are being irrational?

Good for everyone? But if this is objectively correct and stance-independent, what are the criteria of goodness? The fact that I am stumped isn’t an argument, but if I were arguing the other side, wouldn’t I need something too?

And how do we distinguish stance-independent reasons from stance-dependent reasons we all happen to have and so project onto all moral agents? Is it really impossible for an alien rational moral agent to lack some of our most obvious intuitions? To answer, wouldn't we need to know more than we do?

If we do not have certain knowledge, we should treat our intuitions as conjectures. Then there is not much at stake in the debate - moral realists need to criticize their views and seek improvement in a way that seems no different from subjectivists or relativists. They need to try to understand the causal effects of moral stances, and decide which ones suit their ends best.

DavesNotHere:

"it would be a lucky coincidence if the moral beliefs that promoted reproductive fitness just happened to be the objectively correct beliefs."

If that is the process by which the beliefs are determined, is it objective? What characteristics would determine whether a belief is correct, and not just some good enough evolutionary approximation?

Is stance sneaking in? We still need a standard by which to judge the moral beliefs as correct or incorrect. We can't use the evolutionary result itself as the standard, if it is to be objective. But the only standard it has to face is reproductive fitness. So even luck seems to be ruled out.

"One weakness of such a theory is that it requires complications that make it overly flexible. You have to explain such beliefs as

a) It’s wrong to harm others for your own benefit."

b) It’s obligatory to care for your children.

c) But it’s not obligatory to have children in the first place.

d) We should take account of the interests of other species, not just humans."

Don't other theories have to explain (or contradict) these also?

Some persons cannot care for their children. Is it still obligatory?

Some persons cannot have children. Still obligatory? Perhaps trying is obligatory, and success is supererogatory. Evolution seems to have made young men eager to engage in indiscriminate sex, but places a different burden on women. Perhaps their obligation is to maximize something other than quantity.

Why should it need to explain why we must take account of the interests of other species, not just humans? Is this particular intuition on such solid ground that no viable theory could criticize it?

"The fact that such a theory could be used to explain our moral beliefs in a way that doesn’t suppose their truth isn’t really a powerful undercutting defeater for those beliefs."

I agree, but it undercuts their objectivity.

Concentrator:

“But no one says this means we can’t trust our eyes, ears, memory, reasoning, etc.”

Yeah, I think a typical everyday-level “debunking” reply would also include at least a few words aimed at knocking that premise down.

DavesNotHere:

Does moral realism make predictions? I can come up with some straw man versions, but not serious ones.

Will being good make us happier or more prosperous or increase the population? Are the ideal norms timeless, so that cave men and Roman centurions and modern real estate agents should all have obeyed the same ones?

technosentience:

I think I have an alternate evolutionary explanation of morality. However, I don't know if it's a debunking account:

1. Humans, like other species before them, have evolved senses of pleasure and pain.

2. Humans are motivated by those senses to avoid pain and seek pleasure.

3. Humans have evolved a particularly developed sense of empathy to model other humans. It works through mirror neurons, producing the same response to others' feelings as to the subject's own.

4. Thus, humans are also weakly motivated when they see others feel pleasure or pain, and infer that pain/pleasure are in general bad/good.

5. This is sufficient to explain a majority of moral intuitions.

The question here is whether the inference in 4 is sound. Strictly speaking, pleasure and pain would've evolved whether they gave agents agent-neutral reasons for action (so that those moral intuitions are true), agent-relative reasons (so that the generalization from empathy is incorrect), or no objective reasons for action at all. But it may be that pleasure and pain necessarily provide objective reasons of some sort.

David Pinsof:

I think premise 4 is contestable, yes, but so is premise 3. Our empathy is not directed at “other humans” in general, but a very specific subset of humans whose fitness is interdependent with our own (e.g. family members, spouses, close friends and allies, etc.). For other humans (e.g., strangers, rivals, outgroups, enemies), we either feel no empathy or feel schadenfreude—the opposite of empathy—as countless historical and present examples attest.

technosentience:

I would disagree that people feel no empathy for strangers or even notional enemies – there are a lot of examples to the contrary. And to the extent people don't feel empathy for their enemies, they also usually think hurting their enemies is morally right. So if anything, this confirms the account above.

David Pinsof:

No, the above account cannot explain schadenfreude (which I’m assuming you agree exists), and it cannot explain our relative lack of empathy for those not close to us (there may be some, sure, but it’s certainly far lower than it is for our loved ones). And even if we think genocide or whatever is morally right, that does not explain schadenfreude; one can inflict harm on others reluctantly and painfully while acknowledging that it must be done to prevent an even greater harm. Genocides and other acts of mass violence are not enacted with this kind of painful reluctance. They’re often enacted gleefully. Besides, it is very evolutionarily implausible that we would have evolved a general empathy towards everyone, as opposed to a selective empathy calibrated by fitness interdependence.

technosentience:

I agree that it does not explain schadenfreude or sadism, and other explanations are needed for them. And I agree that human empathy is selective, though I disagree it's as selective as you frame it – e.g. most people feel empathy for at least some animals. The theory predicts that the stronger empathy we feel towards someone, the stronger our feelings of moral obligation will be too: as people usually think there are greater moral obligations towards your family than towards strangers, this is a successful prediction.

But it's not implausible that evolution would produce general empathy. Empathy evolved primarily to model other members of your own species, friends or enemies. And, empirically, it reuses parts of the neural circuitry used for the subject's own senses, presumably to save on caloric expense. So it's actually not surprising we feel some empathy towards strangers or even animals.

David Pinsof:

I think it’s plausible we have a kind of cognitive or perceptual empathy for everyone, implicitly taking their perspective and reusing neural circuitry we would otherwise use for ourselves, yes. But I think it’s less plausible that it extends to emotional empathy where we feel their pain as our own in a way that compels us to reduce it. And even the empathy we feel for animals is selective—it’s more directed at “cute” animals that mimic the neotenous features of children than uglier (or dirtier, or more alien-looking) animals.

technosentience:

Not that we literally feel the pain as our own or are compelled by it to act, but we gain enough insight into someone to understand their motivations, and to possibly motivate ourselves too (for example, normally, most people are disturbed by seeing other people in pain). I think it is sufficient for the inference in premise 4.

DavesNotHere:

Apes also have mirror neurons. Do they have moral intuitions?

Mark Young:

Capuchin monkey experiences outrage about unfairness:

https://www.youtube.com/watch?v=meiU6TxysCg

(That's a link to the youtube video v=meiU6TxysCg -- in case it doesn't show up.)

technosentience:

Perhaps! You may see some human-like behaviors in apes.
