Discussion about this post

Wallet:

I'm in agreement that global debunking arguments aren't particularly strong, but they don't have to be in order to debunk moral realism, IMO. They just have to be stronger than the arguments for moral realism, and I don't think those arguments are all that strong.

So, for example, what is the moral realist's theory of how we came to have moral views that are accurate to the objective facts? It's something like this: we developed the ability to reason; that ability somehow yields moral intuitions (perhaps in the same way it yields mathematical intuitions); and we weigh these intuitions against each other (because they often seem to conflict) in order to figure out the moral facts.

My portrayal of the theory above is meant to highlight its two big problems. First, how does the ability to reason get us accurate moral intuitions? I get why it gets us accurate mathematical intuitions: we can see at a glance that the shortest line between two points is a straight line, because our minds are fast at reasoning about particular things and the proposition makes sense on its own. That is also why we can double-check our immediate reasoning afterwards.

Yet moral intuitions aren't like that. For the fundamental moral intuitions, you can't double-check whether they are true by reasoning about them; they just seem true, irrespective of reasoning. In that regard they seem more like perceptions of the world (e.g., my pillow is navy blue). I can't reason my way to "my pillow is navy blue"; I can just look and see that it is. Yet how is it possible to just look and see that "stealing is wrong"? There's no apparent mechanism even when we go looking for one (there is no moral equivalent of photons or photoreceptor cells). This should undermine our trust in moral intuitions to a large extent.

Second, if intuitions are just a result of reasoning, why do they conflict so often? Why do we have to weigh them against one another? This doesn't seem to be true of mathematical intuitions (except, maybe, in the sort of fringe cases mathematicians debate). There's a strong case, even if you are a moral intuitionist and a moral realist, that most of your moral intuitions are mistaken (i.e., that moral intuitions are wrong more often than they are right). It seems that our moral intuitions are not mostly a product of reasoning (even of the immediate sort I discussed with mathematical intuitions above), but must be pulled away from the truth by other, distorting factors.

Presumably, the moral realist will say this is because of particular debunking factors: our selfishness debunks this intuition, our shortsightedness debunks this one, and so on. Yet, this is just to admit two things. First, it admits that many (if not most, as I suggested above) of our moral intuitions are the product of biases, not reasoning. Yet, if so, then there's not as large a step from "many/most of our moral intuitions are debunked" to "all of our moral intuitions are debunked" as there would be if the vast majority of our intuitions were trustworthy (i.e. you need much less justification to make the jump now). Second, by admitting that so many of our intuitions are not a product of reasoning but instead a result of debunking factors, it gives you a strong inductive reason to think they all are debunked (and so gives you some justification for making the jump).

These two problems alone don't defeat the moral realist's theory of how moral intuitions can (sometimes) come to reflect objective moral facts (i.e., they don't actually justify jumping to the conclusion that all moral intuitions are debunked), but they make the theory pretty weak, and thus make it much easier for even a relatively weak alternative (e.g., the universal debunking argument in this post) to come along and defeat it. The alternative just has to be slightly better than the realist option.

Disclaimer: I know that moral realists present other important arguments for their views, but I do think that the ones based on moral intuitions are the strongest (e.g. the Moorean argument), and so the problems above will (I think) be pretty big problems for all of the best arguments for moral realism. That's because they provide reasons to think our moral intuitions specifically are untrustworthy.

David Pinsof:

I think this is generally a straw man attack on adaptationist theories of morality. For actual adaptationist theories of morality, I’d recommend checking out Oliver Curry’s work on morality as cooperation (he might actually agree with you on moral realism), Baumard’s work on mutualistic morality, DeScioli and Kurzban’s work on dynamic coordination theory, Pat Barclay’s work on social markets (which explains our judginess and virtue signaling), and my own paper “The Evolution of Social Paradoxes.” Also, even if you don’t end up buying any of these approaches (or any combination of them), what’s your alternative? That we magically intuit correct moral truths just because? At least adaptationists are trying to come up with a good theory. You don’t even have a theory. Obviously morality had to come from evolution in some way, whether biological evolution or cultural evolution or some combination. And these evolutionary processes aren’t designed to track moral truth. So what’s your explanation for how some people’s moral intuitions (I’m assuming you mean “your own”) happened to converge on the moral truth? Where did this moral-truth faculty come from, if not from evolution? From God? I’m actually not even a moral antirealist, but I do think you’re being overly dismissive of their arguments.
