12 Comments

My friend Jay Moss wrote an email to Huemer about this paper:

I recently read your article "A Paradox for Weak Deontology," and I have some issues with the argument you present.

"Torture Transfer" is an interesting hypothetical. It raises some interesting questions about individuating actions, how the moral wrongness of an action depends on the broader "plan" that it's a part of, whether the fundamental unit of moral analysis should be actions vs plans, etc. but I don't think it's a problem with deontology per se. Anyway I would deny the 2nd and 3rd premise.

Adjustment 1 is permissible because it's permissible to violate someone's rights if (1) doing so is necessary to provide sufficient compensation to that person and (2) you actually plan on providing such compensation. For example, let's say that Mary is drowning in the ocean and the only way to rescue her is to knock her unconscious (since drowning victims tend to flail, which endangers both themselves and the rescuer). In this case, presumably most deontologists would agree that knocking Mary unconscious is permissible because (1) doing so is necessary to rescue her and (2) you actually plan on rescuing her.

Now, the one trick here is the phrase "is necessary". Typically, when we speak of one action being necessary for another action (such as the drowning person example), we're talking about causal necessity, i.e. action X is permissible if X is causally necessary for action Y, where Y sufficiently compensates the "victim". But in the Torture Transfer case, what I mean is moral necessity, i.e. action X is permissible if X is morally necessary for action Y, where Y sufficiently compensates the "victim". For example, let's say that in the drowning case, I can actually save Mary without knocking her unconscious.

However, for whatever reason, I can do this only if I drown 5 other innocent persons (e.g., I know that if I try to save Mary without knocking her unconscious, she will flail so much that I will need to steal a flotation device used by 5 other people). Now, in this case, knocking Mary unconscious is not causally necessary to rescue her (i.e. I could just drown the 5 other innocent persons and save Mary without knocking her unconscious).

However, knocking her unconscious is morally necessary to rescue her (if I don't knock her out, then rescuing her [which involves drowning 5 innocents] would be impermissible). Thus, I'm justified in knocking Mary unconscious, because (1) doing so is morally necessary to rescue her (which I'm assuming is "sufficient compensation" for the knockout) and (2) I actually plan on rescuing her.

Adjustment 2 is permissible for similar reasons, except in the reverse. Adjustment 2 is permissible because it's permissible to violate someone's rights if (1) doing so (or at least having had a plan to do so) is necessary to make a previous action morally permissible, (2) the previous action has been performed, and (3) the previous action sufficiently compensates the "victim" for the current rights violation (the compensation came before the harm).

Unfortunately, I think the only hypotheticals that work for this are fairly similar to the Torture Transfer case. But I think the principles themselves make sense. Also, I've been speaking of these as examples of permissible rights violations, but you might not classify an action as a rights violation if the "victim" is sufficiently compensated. Maybe, but that doesn't really change the argument much. Also, permitting rights violations so long as there is sufficient compensation isn't consequentialist, since I'm focusing on whether the compensation is directed to the same person, such that they are better off after the compensation.

Another way around this might be to adopt a kind of two-level deontology, similar to rule-consequentialism. E.g. under rule consequentialism, actions are not the fundamental unit of moral analysis; rules are. First, we ask whether rules are good/bad (in a consequentialist sense). Actions are then judged as right/wrong based on their accordance with the best rules.

You could apply similar reasoning to a two-level deontology. E.g. you might say that actions are not the fundamental unit of moral analysis. Rather, we would first ask whether rules or plans or norms are right/wrong in some deontological sense (e.g., a plan might be wrong deontologically if its execution on net results in the infringement of an individual's autonomy/freedom).

Note that this is not rule consequentialism, since it's not saying that a plan is permissible if it limits someone's autonomy/freedom so long as someone else's autonomy/freedom is promoted; rather, it's saying that a plan that involves local limitations on someone's autonomy/freedom is permissible so long as that person's total freedom/autonomy is promoted.

Actions would then be judged as right/wrong based on their accordance with the best plans/norms/rules (as judged by the deontological theory). I'm not saying I accept this, but it's something I've thought about before, and it doesn't strike me as implausible.

Either way, I think the spirit of the response here is that deontologists don't need to fetishize individual actions in a vacuum in the way that the hypothetical suggests. I don't think there's a good reason to believe that our moral assessments of an individual's actions shouldn't be sensitive to the agent's other actions (even if those other actions have no causal relevance to the current action) or to the broader plan that the agent is trying to carry out. I think any plausible moral theory is going to assess not just individual actions in a vacuum, but rather collections of actions in relation to the broader plan/project that the agent takes to motivate them.

This is somewhat similar to Kant's point that we cannot judge acts in a vacuum. Rather we should judge the act and the maxim that the agent takes to justify the act. Perhaps a similar point can be said here: rather than just judging an action, we judge the act, the maxim, and/or perhaps the guiding project that motivates the act. And our judgment at each level of analysis can be sensitive to deontological considerations without falling prey to these kinds of hypotheticals.

Dr. Huemer replied:

Some things to think about:

(a) Say we have the principle:

It's permissible to violate A's prima facie rights if (i) one plans on providing adequate compensation, and (ii) the prima facie rights-violation is necessary to provide that compensation.

Is it necessary that A consent, or may one do so without consent? If A must consent, then stipulate in my scenario that A doesn't consent.

Suppose you say A need not consent. Is it permissible for someone (say, the government) to take your house without your consent and destroy it, provided that they later pay you adequate compensation? Assume the compensation exceeds the value of the house, yet you did not consent. This strikes me as impermissible.

You might say that taking the house wasn't necessary to providing the compensation, since the government could have given you money anyway. But suppose that the government gets the money that they're going to pay you with from Walmart, which is paying the government for the land that your neighborhood is on. So they wouldn't have the money unless they were able to deliver the land to Walmart.

(b) Suppose that I perform Adjustment 1 without intending to perform Adjustment 2. (Or perhaps, at the time I do Adjustment 1, I haven't yet decided whether to do Adjustment 2.) This, on your view, is impermissible.

Then, after I perform this wrongful action, I reconsider whether I should do Adjustment 2.

Would this be permissible? It seems that the answer is no, because Adjustment 1 was impermissible, and it will continue to have been impermissible regardless of whether I do Adjustment 2, because I did not have the required intention when I did Adjustment 1. It is counterintuitive that the two actions are now both impermissible.

(c) Suppose that I just performed Adjustment 1, and I can't remember what my intentions were at the time. Is it now permissible for me to do Adjustment 2? It is counter-intuitive that I must figure out what my intentions were in order to know whether I may do Adjustment 2.

(d) A more tangential but interesting point: The scenario is somewhat reminiscent of this version of the Organ Harvesting case discussed by Judith Thomson: Assume the five sick patients are all sick because the doctor deliberately infected them with diseases of different organs. Later, he realized that murder is wrong, so he wants to remedy his mistake. So he comes up with the plan to kill 1 healthy patient and transplant the organs to the five people he infected. Would this be permissible?

Thomson says no (as any deontologist will agree). But note that if he doesn't kill the healthy patient, then the doctor will have murdered 5 people; if he kills the healthy patient, he will only have murdered 1 (since the other 5 will survive). And murdering 1 is surely less wrong than murdering 5. So why shouldn't he kill the healthy patient?

I think this might be relevant, because it seems to suggest that you can't really make up for a rights-violation, or redeem a previous wrong, by committing another rights-violation (even if the latter would compensate the victims of the previous action). At least, that's one way of interpreting the lesson of the example.

Apr 8, 2023 · edited Apr 9, 2023 · Liked by Michael Huemer

Thinking about it, in both situations you are describing, available actions don't seem to cause proscribed harms. In the torture case, one's goal is to reduce the net amount of pain caused, and someone experiencing more pain is an unfortunate side-effect. The bank hack case is similar.

Compare with the trolley problem. In the regular trolley problem, I think it's permissible to switch tracks, and the resulting death is merely collateral damage. However, my intuitions say that if you had to actively push someone onto the tracks to stop the train, that would be impermissible.

Perhaps the difference between those is whether someone's suffering is an accidental effect, or an instrumental part of your goal. Likewise, in the torture case, if only one person was connected to the machine, flipping the switch would have no downsides.

I'm not sure this is actually a consistent distinction, but let's suppose it is. Then, maybe a thought experiment involving obviously proscribed harms would give us more insight into the problem.

I struggle to think of one though: the generalized example you gave almost seems to rule cases like Organ Harvesting out.

EDIT: re-reading this post, I notice your stipulation about the harm in the torture case being instrumental to increasing the overall welfare. I still think it does not make it a proscribed harm, though.

Consider: normally, murdering another person to save one's own life is impermissible. However, if someone credibly threatens to kill you unless you kill another person, I feel like you're at least excused in doing so. The adjusted Torture Transfer case is similar: it's as if a malevolent force pre-arranged this situation for you, and it's that force that bears moral responsibility for the rights violations caused.

Apr 10, 2023 · edited Apr 10, 2023

I don't really see the puzzle. If your actions would reduce both individuals' suffering or increase both of their bank accounts, then presumably both individuals would consent. It's okay to harm people with their consent, and if you can't directly ask for consent then you can act based on the probability of consent. Since they probably consent, it's okay to pull the switches.

If you ask them and one or the other doesn't consent, then it's not okay to pull the switch and increase the non-consenting person's suffering or to hack their bank account. This seems intuitively obvious in the bank account case (No, you cannot hack my bank account, even if it will result in me having an extra $20), and insofar as it seems unintuitive in the torture case it's just because it's hard to imagine a scenario where someone doesn't consent.

Consent makes all the difference here, in my view. This is why it would be wrong to pull one switch when the other is already pulled unless you get explicit consent from the person who would suffer as a result (without explicit consent, you should presume that they do not consent as there is no benefit to them and most people won't consent to extra torture without any benefit to them). That's what makes your cases relevantly different from other cases where you violate one person's rights to benefit another person.

Why would it be wrong to make people better off without their consent? Well, because it violates their rights, obviously. If you increase the suffering of someone without their consent, then you are violating their rights. In general, I am not allowed to violate rights to make you better off unless those rights are defeated. Presumably the lowered suffering of the other person isn't enough to defeat the rights of the person you would be harming. (If it was enough, then there would be no puzzle, since the deontologist would be justified in pulling each switch individually.)


If one is taking an action with the intention of immediately negating its bad effects, then it seems somewhat meaningless to break it into two actions and separately consider the morality of each.

A heart transplant could be considered the two-step action of removing a heart and then replacing it with another; but no deontologist would oppose a heart transplant on the grounds that the first step of removing someone's heart is bad. The whole transplant is clearly one action.


Good to see deontologists theorizing! I've heard of another deontologist, Eyal Zamir, who also is interested in theorizing about ethics, but in general theorizing seems to be a pretty unpopular activity among deontological philosophers.


I don't know if anyone's said this yet, but an important piece of the puzzle is missing. Remember, what matters in deontology (at least for Kant) is your maxim, or your intention. However, the hypothetical neglects this, almost analyzing the actions in a pseudo-consequentialist way.

The hypothetical argues that there are two modes of analyzing the actions, as two individual actions or as one big action, and this is supposed to yield different results, because one individual action will treat one person as a mere means. However, such a distinction doesn't arise once we take into account the actor's maxim. What matters is whether you INTEND to use someone as a mere means. So if you flip S1 WITHOUT the intention of flipping S2, you are treating P2 as a mere means, and this is wrong. But if you flip S1 with the intention of then flipping S2, you do not intend to use anyone as a mere means, and you have done nothing wrong. Note that if you flip S1 with the intention of flipping S2, but you are blocked from flipping S2, you still have done nothing wrong, because you acted on a maxim that had the unanimous consent of all involved parties.

Here you can see we are able to individuate actions without influencing the ethical analysis; that can only occur under consequentialism. You might ask, what if we individuate maxims? Well, you can't really do that, because you can't cut up intentions into smaller pieces. You would simply ask, "is this action informed by an intention to treat someone as a mere means?" You might say, "well, when your intention is to flip the first lever, it treats P1 as a mere means, which is wrong." This makes the same mistake, because your action in flipping the lever is NOT informed by an intention to treat people as a mere means.

I prefer this analysis because it avoids what I consider false distinctions between "proscribed harm" and "side-effect harm" (I don't know what else to call this). Someone in the comments compared the classic trolley problem to throwing someone onto the track. In reality, both scenarios are the same: your intention involves causing the death of one person to save others. You are involving someone in a scheme of action to which they would not consent. In both scenarios, the now-killed people were previously uninvolved. We seem to instinctually make the false assessment that the guy on the tracks was already involved, by being on the tracks, and that makes his murder OK. But the logic doesn't follow. Both now-killed people were previously uninvolved in your actions. They were involved in the dilemma though, but this carries no moral significance. What matters is that they wouldn't consent to your intention, because it would result in their death. In my mind, all harm that you can foresee is proscribed harm, while only the harm that you did not predict is a side-effect. Note that the side-effects are unintentional, and thus don't matter to our ethical analysis.

I also saw someone comment that you can engage in actions that harm others if you plan to compensate them. I disagree with this principle. What matters is whether the involved parties would consent to your plan. If the parties you involve wouldn't consent to this compensation scheme, it is wrong. But if you do have consent (or consent is implicit, i.e. they would consent), it's fine. With the swimmer, even if she is resisting your attempt to knock her out, it's still okay, because she would consent if she had all the relevant information (that the knockout is necessary to save her life). It's like lying for a surprise party: it's okay because the person lied to would consent if they knew it was to preserve the surprise. I'd note that this compensation principle doesn't apply to the hypothetical. In the hypothetical, both prisoners would consent to your intention, so there is no ethical issue. The second prisoner is not "compensated" by flipping S2 (after flipping S1); rather, his consent was already implicit.

In sum, what matters is what your full intentions are. If all involved parties would consent to your intention, then all actions based on that intention are morally permissible under deontology.


I remember an observation in one of John Searle's books on rational decision making (I think "Rationality in Action"), and also in one of his lectures (available on YouTube), that when it comes to rational decision making, the description of choices is intensional, i.e. two logically equivalent descriptions of a set of alternative actions/choices are in fact not equivalent (they may not be freely substituted for each other) from a decision-making perspective. I'm not a professional philosopher, so maybe I didn't understand Searle's point or yours correctly, but it seems to me the paradox goes away if we apply this idea to it.


This seems well-reasoned to me. But are you, in the end, suggesting that the consequentialist should steal from one to give to the other/harm one to relieve the other? Is this not precisely why many people have reservations about consequentialism? There are countless examples in everyday life when we could harm one person to bring greater benefit to another. This paradox seems to me to challenge both deontology and consequentialism, illustrating one of the main advantages of virtue ethics.


I'll reproduce my comment from the fb thread, slightly edited:

I think that the obvious solution is right. But I don't share your intuition about the final case. Imagine the original story, except that only S1 was present (no second switch). It's not obvious to me that it's permissible to switch S1. But that strikes me as basically the same scenario as if there were two switches, then an earthquake flipped S2, and S2 was no longer functional, leaving only S1 (imagine the earthquake causes S2 to break apart, and in the midst of the destruction, the switch was flipped). So the residual problem you identify is, I think, not a genuine problem for deontological ethics.

Apr 10, 2023 · edited Apr 10, 2023

While a deontological view holds that harming somebody is immoral, that doesn't mean harming somebody cannot be justified in a moral dilemma in which harming somebody is the lesser of evils. The flaw in people's thinking is to believe there is always a moral option, when a moral dilemma means there are no moral options, only a choice between the lesser of evils. A good example of this is the classic trolley problem. Each choice results in the death of at least one person, which means both options are immoral, but one option is the lesser of evils. A deontological pluralistic view would take into account moral dilemmas in which consequentialism would apply. If a person claims that harming somebody can be moral, then they are suggesting that morality is subjective, because they are suggesting that mere opinion determines what is moral or immoral. If morality is subjective, then anything can be moral based solely on mere opinion. In that sense, there would be no hold on truth, and we would be left with the harsh doctrine that might makes right.


"Consequentialism is the ethical system of leaders, deontology is the ethical system of followers. Deontology requires understanding which actions are good and bad. Consequentialism requires understanding which world lines result from defining things as good and bad." -Joscha Bach.

Thinking too hard about deontological ethics is a waste of time. It's like thinking about whether it's intrinsically better to drive on the left side of the road or the right. It's clearly just a social contract. Society works better when we all agree on a side; it doesn't really matter which one.

Rights, justice, fairness, etc are not real quantities you can measure, they are fictions we agree on as a society for a purpose.

It seems like Michael needs to meditate or try acid or something. Just because you can name something or make a category in your head doesn't mean it's 'real'. There shouldn't be anything surprising about two actions that you see as immoral combining to make a moral set, unless you believe that the 'goodness' or 'badness' of the choice exists somewhere outside of your own brain.


My friend Jackson asks, “why doesn’t Huemer just bite the bullet that two wrongs can make a right in these very unusual and specific circumstances, rather than capitulating to utilitarianism, which is implausible in general?”
