6 Comments

When put like that, the case for evolutionary debunking is rather unpersuasive. I wonder if the real motivation for it runs backwards: people start by assuming there is no reliable way of knowing moral values, perhaps due to scientism, and therefore conclude that moral convergence must be explained by evolution.


I haven't read it yet, but there's a book called The Demon in Democracy: Totalitarian Temptations in Free Societies, and this piece reminded me of it, because ironically the stance being advocated here is a very totalitarian one.

Then again, liberalism as defined here is very vague and lots of very divergent systems could be considered liberal. I think the big flaw in liberalism as currently practiced is the lack of a transcendent ideal to aspire to, the notion that it's ok to simply lead a life of indulgence. But a liberal system could definitely have something like this.


Morality is the evaluation of an impartial observer, the view from nowhere.

How can we discover what such a being would think? Are we converging on a moral truth, or a prudential, rationally self-interested one? How do we separate evidence of one from evidence of the other? What is the test of moral progress?

And what is at stake?

Stipulate moral realism. Differences of opinion and the need for dispute resolution will remain. This means we have to interpret these moral truths and apply them somehow, in spite of our differences, our imperfect understandings, and the rather ambiguous evidence we have. So how does our situation differ from people who don’t think moral realism is true, but want to cooperate and flourish purely on the basis of prudence? What means do we have that they lack? What errors will they make that we can avoid?

I think a good argument for moral realism is that we end up acting as if it were true whether or not we believe it. But then it turns out to be irrelevant to our actual choices. Society is a massive (fairly unethical) experiment in how best to evoke cooperation from each other. It seems more likely that we generalize our beliefs about morality from the outcomes of that experiment than that we could predict or guide the experiment from our pre-existing moral beliefs. Can we know what we ought to do without knowing what we *can* do? Can what we ought to do disregard what we want to do?

Maybe there are moral truths, but if we can only find them via unreliable means, all we have are moral conjectures.
