19 Comments

I think the argument against Anscombe is unfair.

Consider this: to knowingly believe something false is crazy.

And consider what you said in your foundationalism post: "Do you think you have at least one false belief? If you’re rational, you answered “yes”. It follows that your beliefs are inconsistent, since they can’t all be true."

Mirroring your argument, we can then conclude that all rational people are crazy. They know that at least one of their beliefs is false (they just don’t know which one).

There is a difference between knowingly believing something false and knowing that something you believe is false. The former is narrower than the latter, and it is only the former that’s crazy.

Similarly, there’s a difference between knowingly punishing an innocent person and knowing that you have punished some innocent person. The former is narrower than the latter, and it is only the former that’s unjust.


Having only a lay familiarity with the concept of lexical priority, I have to ask: does it have to be formulated as "some kind of reason that is infinitely more important than another kind"?

I've always thought that the core intuition is that some kinds of morally relevant actions might be qualitatively different from others, which lets us avoid undesirable utilitarian conclusions such as "there is some sufficiently large number N for which it's the case that N people getting a speck of dust caught in their eye is morally worse than one person being tortured to death (if you have to choose one or the other)". We want to say that death by torture is qualitatively different from eye-dust in such a way that this claim is false, perhaps in the same sense in which the question "which is larger, 1 or the letter A?" doesn't make sense (or at least requires some guideline for how to compare "sizes" of numbers and letters).
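
Purely to illustrate that incomparability point (a toy example of my own, not anything from the post): in Python the analogous comparison doesn't return an answer at all; it fails for lack of a comparison rule.

```python
# Toy illustration only: Python 3 refuses to order values of incommensurable
# types unless you supply a rule for comparing them.
try:
    1 < "A"   # the "which is larger, 1 or the letter A?" question
except TypeError as err:
    print(f"No answer without a comparison rule: {err}")
```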

It may be the case that both torture and eye-dust are morally bad, but that there isn't an obvious way to perform the utility-accounting calculation that you would want for either-or hypotheticals. (Admittedly, you could carry this to its logical conclusion and claim that no two distinct acts are in the same lexical class, in which case your utilitarianism can't do much beyond showing that two murders are worse than one, etc. But I don't think it's necessary to commit to that extension.)


"That seems wrong; the permissibility of some behavior can’t depend on how we individuate actions."

Oh you sweet summer child...

Almost everything depends on how things are individuated. With respect to rights, you can't have impossible obligations, so the degree of rights is proportional to the ability of society. The ideal is always unachievable. The optimum is just using Gantt-chart criteria to individuate things on a computational basis, then solving them as a knapsack problem with a dynamically sized sack. Boom, done. Solved morality. Nobel prize please.
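
Taking the knapsack quip at face value, here's a minimal sketch of what that could look like; the weights, values, and capacity are all invented placeholders, and nothing here comes from the post.

```python
# Standard 0/1 knapsack dynamic program, included only to make the quip concrete.
# Weights = cost of discharging each obligation, values = their moral weight,
# capacity = what society can currently bear. All numbers are invented.
def knapsack(weights, values, capacity):
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):  # backwards so each item is used at most once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# "Dynamically sized sack": just re-solve whenever the capacity changes.
print(knapsack(weights=[3, 4, 2], values=[5, 6, 3], capacity=7))  # -> 11
```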


Yup, 100% agree. As an aside, the repugnant conclusion argument is just a clever manipulation of our social desirability bias.

I mean, on its own there is nothing troubling about it: no one has a problem with the fact that a huge number of moderately high-utility lives can be more desirable than a few really, really large-utility lives. Somehow the apparent problem comes from how we imagine very slightly positive utility lives. And that should be a red flag, since the pure theory tells us we are picking the zero point *so that* these things work out.

So something else is affecting our judgements here, and it's where we are willing to say a life is barely worth living. For social desirability reasons we aren't willing to say that the world would be better off without someone's life when they don't want to be without it. It feels too much like endorsing the extermination of such people (even though that's a completely different calculation) or being harsh to them. But of course we should expect that evolution has made us very attached to continued existence, even when a life has more bad than good in it.

It's basically the same problem one runs into when discussing whether you should abort fetuses who suffer from certain genetic conditions (though things like deaf children of deaf parents are actually bad examples, since their community likely matters more to utility than getting to hear). Saying that a life isn't worth living has a different social meaning than it has in the theory.


This issue illustrates the strength of a virtue-ethics approach. It is wrong to knowingly punish an innocent person, not fundamentally because that consequence is qualitatively worse than whatever good the action might bring about, but fundamentally because knowingly punishing an innocent person is always (as Anscombe says) 'unjust'.


I wonder if there are ways of understanding lexical priority that do not lead to any of these implausible conclusions about risk. For example, consider a lexical priority view on axiology. One might think that the value of any quantity of some lower good B can never outweigh the value of any quantity of some higher good A. So this view imposes a certain condition for properly ranking states of affairs by value: for any pair of states of affairs S1 and S2 where S1 has a higher quantity of A and S2 has a higher quantity of B, S1 should outrank S2 (assuming all else equal).

This is a kind of lexical priority view, but it does not seem that the view commits one to believing anything weird about probability. For example, one could believe that, even though S1 outranks S2, one should pursue a sufficiently high probability of S2 instead of a sufficiently low probability of S1. One doesn't need to say that there's some threshold probability p such that one can never perform an action that has at least p chance of sacrificing A goods for B goods. One can still say that, for any pair of A goods and B goods, even though the A goods are more valuable than the B goods, there are cases where one should pursue a high chance of the B goods instead of a low chance of the A goods.

From what I can tell, the only concern here is that there is no unified way of explaining both the relative value of different states of affairs and the probability thresholds where the higher valued state of affairs should be traded off against the lower valued state of affairs. This is because, if one did not have a lexical view, then one could just hold that the value of S1 is equal to N instances of S2, and thus an agent should pursue p probability of S2 instead of q probability of S1 if and only if p > Nq. While this result may be inelegant, I don't see it as particularly damning, unless there are some further implications that I'm missing.
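
To make that non-lexical comparison concrete, here is a minimal numerical sketch; N, the values, and the probabilities are all invented for illustration.

```python
# Non-lexical view: v(S1) = N * v(S2), so just compare expected values.
# Pursue a p-chance of S2 over a q-chance of S1 iff p * v(S2) > q * v(S1),
# i.e. iff p > N * q. All numbers below are invented.
N = 100              # S1 is stipulated to be worth 100 instances of S2
v_S2 = 1.0
v_S1 = N * v_S2

p, q = 0.9, 0.005    # high chance of the lower good vs. low chance of the higher good
print(p * v_S2 > q * v_S1, p > N * q)   # True True: 0.9 > 100 * 0.005 = 0.5
```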

In other words, it seems like there are two steps for figuring out how to promote value:

1. What is valuable, i.e. what is the correct ranking of states of affairs?

2. Given a correct ranking of states of affairs, and given some set of actions and their corresponding probabilities of instantiating various states of affairs, which action ought we take?

To avoid issues with risk, it seems that one could just adopt a lexical position for the first step, without doing so for the second step.


I'm generally sympathetic to this argument and think that pointing to the issue of risk is a good way to undermine claims of lexical priority. But I didn't find your specific examples (the risk of punishing an innocent person, or the risk of killing an innocent person) convincing.

If I'm the sort of deontologist who thinks it's always wrong to kill or punish an innocent person, clearly what I think is that it's wrong to take an action that I myself believe to be the killing or punishing of an innocent person. If I myself reasonably believed the person wasn't innocent or that my action wouldn't kill anyone, but then I turned out to be mistaken, I haven't violated the principle that one must never punish or kill an innocent person. So performing actions that carry some *risk* of punishing or killing an innocent person is fine, so long as the risk is low enough to be compatible with justified belief that the action won't punish or kill an innocent person.

You object to the sort of reasoning I just gave by pointing out that if I operate a normal justice system, I'm taking an action that I *know* will result in punishing some innocent people, since it's inevitable that some mistakes will be made--and thus this is still doing it "knowingly" and "intentionally". But there's a big difference between "knowingly adopting a policy that will result in punishing some innocent people by mistake as an undesirable side-effect" and "knowingly punishing an innocent person." What's forbidden, for the deontologist, is for there to be a person X you know/believe to be innocent, and then punishing X anyways. Adopting a policy that has a *risk* of punishing some innocent people is not doing that; it's simply not the same.

Since those examples just misunderstand the logic the deontologist operates by, I don't think they work well--you're effectively treating the deontologist as if they were a consequentialist and were thus committed to avoiding actions with the consequence of innocent people getting punished or killed, but that's not how the deontologist thinks.


“You could have two actions, B1 and B2, which are each below the threshold, but the combined action of doing both B1 and B2 (“B1+B2”), is above the threshold. Suppose that each of B1, B2, and B1+B2 would produce a huge amount of good. Then you can do B1, and you can do B2, but you can’t do B1+B2.”

How would this work? As a general rule,

P(A and B) <= P(A)

because adding another condition can only decrease the total probability. So the probability of B1+B2 will never be greater than that of B1 or B2 individually, and therefore will never be above the threshold if B1 and B2 are below it.
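
A quick numerical check of that inequality, with invented probabilities and independence assumed purely for simplicity:

```python
# P(A and B) <= P(A): with made-up, independent risks each below a 5% threshold,
# the conjunction is smaller still.
p_B1, p_B2, threshold = 0.04, 0.03, 0.05
p_both = p_B1 * p_B2                      # independence assumed for illustration only
print(p_both, p_both <= p_B1, p_both < threshold)   # ~0.0012 True True
```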


"I see 3 answers you might give to this: i. Zero Risk Tolerance/ii. 100% Risk Tolerance/iii. The Threshold View"

That's because your whole analysis is based on arbitrary quantification and axiomatic treatment of the issues, a general problem with analytic philosophy.

There's another very pragmatic answer that societies and individuals have used for millennia: whether we take the risk to X depends on the circumstances, and if there's a threshold, it's a fuzzy "I'll decide when I see the actual problem in context" one.

There's no "one size fits all" solution in politics, everyday morality, or even the legal system, nor can there be or should there be.


When I talked to a Rawlsian professor, he said the equal basic liberties just have to be "roughly equal" for the "second letter" clause to kick in. So I think he wants to use fuzzy logic to model it?
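
If fuzzy logic really is the model, a toy version of "roughly equal" might look like this; the membership function and its width are entirely invented.

```python
# Toy fuzzy predicate "roughly equal basic liberties": membership degree falls
# off linearly with the gap between two liberty scores. The 0.2 width is an
# arbitrary, invented cutoff.
def roughly_equal(liberty_a: float, liberty_b: float, width: float = 0.2) -> float:
    gap = abs(liberty_a - liberty_b)
    return max(0.0, 1.0 - gap / width)   # 1.0 = fully equal, 0.0 = clearly unequal

print(roughly_equal(0.95, 0.90))   # ~0.75, i.e. "roughly equal" to degree ~0.75
```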


Could we imagine a hybrid lexicality, where people sometimes decide to violate rights, but always feel obligated to compensate the victims appropriately? Or the version due to Gert, where exceptions to “the rules” are allowed if done publicly?
