The Offsetting Puzzle
Moral offsetting is the practice of making up for a bad or prima facie wrongful action by doing something else that is good enough to outweigh the bad act. What should we think about this?
I don’t have a definite thesis about it, so I’m just going to ramble about how offsetting is puzzling and some things we might say about it.
1. Three Cases of Offsetting
(i) Climate
Say you think it’s bad to contribute to climate change, but you like to fly on airplanes. So, every time you fly, you donate money to plant some trees, which more than offsets your contribution to global warming, such that your net impact is to reduce global warming.
(ii) Meat
Assume that it is normally wrong to buy meat, perhaps for the obvious reasons discussed here and here. So, for each time you eat meat, you might decide to donate a certain amount of money to Vegan Outreach, sufficient to reduce animal cruelty by enough to outweigh the harm caused by your meat purchase. Does this make the meat purchase permissible?
If so, must the donation be to an animal charity? Could one, say, offset meat eating by donating to the Against Malaria Foundation, which has nothing to do with animal welfare but is nevertheless doing a lot of good?
(iii) Murder
Suppose you think that murder is wrong, but you really hate your neighbor. You decide to kill him but then donate a large amount of money to charity, so as to save many more lives than the one that you took. Does this make the murder permissible?
2. The Case for Offsetting
So here’s an argument. Let B be some bad action, let O be the offsetting action, let N be the “action” of doing neither B nor O, and let “x+y” be the action of doing both x and y.
1. N is permissible.
2. B+O is better than N.
3. (x)(y) If x is permissible, and y is better than x, then y is permissible.
Therefore, B+O is permissible.
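The logical form of this argument can be checked mechanically. Here is a minimal sketch in Lean 4; the names `Act`, `Permissible`, and `Better`, and the encoding of premises as hypotheses, are my own labels for illustration, not anything from the post:

```lean
-- Hypothetical formalization of the offsetting argument.
-- `Better y x` is read: doing y is better than doing x.
theorem offsetting_argument {Act : Type}
    (Permissible : Act → Prop)
    (Better : Act → Act → Prop)
    (N BO : Act)                  -- N: do neither; BO: the combined act B+O
    (h1 : Permissible N)                                       -- premise 1
    (h2 : Better BO N)                                         -- premise 2
    (h3 : ∀ x y, Permissible x → Better y x → Permissible y)   -- premise 3
    : Permissible BO :=
  h3 N BO h1 h2                   -- conclusion follows by instantiating 3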
This seemingly works for all three cases of offsetting above.
However, if you’re a deontologist, you might think that (3) is false; you might say that rightness is not so simply determined by facts about what is better and worse. (It can be wrong to save five lives at the cost of one life, even though in some sense saving five lives is better.)
The argument could be run with a variety of notions in place of “betterness” in (3). E.g., (3) could say:
If x is permissible, and y is morally preferable to x, then y is permissible. Or:
If x is permissible, and there is more moral reason to do y than to do x, then y is permissible.
One could then argue about whether B+O is “morally preferable” to N, or whether one “has more moral reason” to choose B+O.
3. The Case Against Offsetting
The murder case just seems completely unacceptable. It won’t be permissible to murder an innocent person, no matter how much you give to charity.
Perhaps this case differs from the other two cases (Climate and Meat) in that murder is a rights violation, while contributing to climate change and animal cruelty are bad but not rights violations. It may be harder to offset a rights violation than to offset an ordinary (non-rights-violating) bad action. Deontologists think that rights violations are wrong even when they produce significantly greater overall good.
However, sensible deontologists allow that some amount of good consequences can outweigh a rights violation, such that it would be permissible to kill an innocent person to save some number of other innocents. (On this, see my earlier post.) So suppose that saving a million lives outweighs taking one life. And suppose Elon Musk wants to murder his neighbor. So he murders the neighbor and donates $5 billion to lifesaving charities, thus saving over a million lives. Was the murder permissible?
It might depend on the time order. If he donates the money first and then says, “Okay, I’ve done so much good that now I’m entitled to murder someone,” that is clearly unacceptable. The previous good that he’s done is just completely irrelevant to the morality of the current action.
But suppose Elon plans to donate the money after the murder. His psychology is such that he knows that he will donate the money if and only if he completes the murder. Thus, killing the neighbor causes the million other people’s lives to be saved, via causing Elon to donate the money. Since we’ve assumed a moderate deontological view on which one can be justified in killing one person to save a million, you might argue that the murder must be justified.
This would be strange, though. It’s odd that the time ordering of the offsetting action would make such a crucial difference, such that it’s horribly wrong if you donate then kill, but okay if you kill then donate.
Anyway, it does not seem right that Elon can kill his neighbor in either case. Even if you accept that it can be okay to kill 1 person to save 1 million, it doesn’t seem okay in this particular case. If killing the one person is somehow necessary to save the million—say, you can make a lifesaving serum from the one person that will cure a million people of a deadly disease, but doing so will kill the one person—that seems a lot more acceptable than the case where you have just decided that you’re only going to save the million if you murder the one.
If offsetting fails in the Murder case, why might that be? Recall the argument for offsetting—
1. N is permissible.
2. B+O is better than N.
3. (x)(y) If x is permissible, and y is better than x, then y is permissible.
Therefore, B+O is permissible.
The most likely account of what goes wrong in the Murder case (as I’ve elaborated it here) would be, I guess, that (3) is false. In that case, maybe the offsetting argument also fails in the Meat and Climate cases.
4. For Carbon Offsetting
Regardless, I think the climate offsetting works. This case differs from the other two because in the Climate case, the victims of the bad action (mainly future generations) are the same people as the beneficiaries of the offsetting action. Also, the benefit and the harm are along the same dimension; there aren’t different kinds of goods. I think that makes it so that there is really no downside to the (B+O) behavior.
Matters are different for the Meat case, because the animals benefitted by donating to Vegan Outreach will be distinct animals from those harmed by your meat purchase. Similarly, when Elon donates $5 billion, the people he saves will be different from the person he kills. So there is an identifiable downside, even if it is outweighed. This makes it plausible to reject offsetting in Meat and Murder even if we allow it in Climate.
5. Individuating Actions
Some people would find the offsetting argument suspicious because of the way it talks about actions. It treats N as an action, whereas you might think N is just an absence of action. More importantly, it treats “B+O” as the name of another action, whereas you might think B and O are just two separate actions, with no single action that both are parts of. You might then think that we can block the puzzle about offsetting by refusing to talk about the moral status of “B+O” on the grounds that only actions may be deemed permissible or impermissible.
I don’t like this approach. I think facts about moral permissibility are objective, not dependent on conventions or conceptual schemes. The offsetting puzzle concerns whether certain behavior is objectively permissible or not. The above-suggested approach to the puzzle leans heavily on the notion of the correct way of individuating actions, so that requires there to be objective facts about the right way of individuating actions. But I don’t think there are such facts; I think the individuation of actions is conventional.
To explain more: Suppose you go to the store and buy a kiwi. How many actions did you thereby perform? You could call that one action, “getting a kiwi.” Or you could say there are two actions: (1) going to the store, and (2) buying the kiwi. Or there could be four actions: (1) going to the store, (2) picking up the kiwi from the produce aisle, (3) taking the kiwi to the checkout stand, (4) paying for the kiwi. Or you could consider each step on the way to the store to be a distinct action. Or you could divide a single step into multiple actions. Etc.
None of these ways of counting actions is either correct or incorrect. Some are more useful than others for particular purposes, but it’s not that one of them describes reality correctly and the others are false. (These aren’t even the right kind of things to be either true or false.)
Thus, in an ethical theory, the permissibility of your behavior should not be made to depend on how one individuates actions. A theory should not say, e.g., that if your behavior counted as three actions, then it was okay, but if it counted as four or more actions (with no other differences), then it was wrong.
A way to implement this constraint on ethical theories is to reject any ethical argument that turns on denying that some bit of behavior counts as “one action”. If the murder in the Murder case is wrong, we should be able to explain why that is so even if you treat B+O as an action.
6. From (B+O) to B
Here’s a reply that perhaps only an analytic philosopher would think of: Maybe the conclusion of the offsetting argument is correct, but it still doesn’t support the permissibility of the bad actions. I.e., the argument shows that (B+O) is permissible, but you can’t infer that B is permissible.
Formally, that appears to be an option. However, it’s just very hard to understand this. Say Elon tells you of his plan to kill his neighbor and donate $5 billion to charity. He asks you whether this would be okay. And imagine that you tell him:
Sure, it would be fine to kill your neighbor and donate to charity. However, don’t kill your neighbor.
That’s the English translation of “(B+O) is permissible but B is impermissible.” But that just sounds like nonsense.
Conclusion
The Murder case seems to provide a counterexample to premise (3) (that if x is permissible, and y is morally preferable to x, then y is permissible). However, it is also very puzzling that (3) should be false.
Another option would be to reject (1) (that doing nothing is permissible). Perhaps, e.g., if you’re in a position to save a million lives by donating $5 billion to charity, then you’re obligated to do so. But once you think about other examples (involving smaller wrongs that you could commit, and correspondingly smaller good deeds you could do to outweigh them), this route will probably lead to an extremely demanding morality for all of us.
Dustin Crummett and Rebecca Chan have a paper on this exact puzzle, coming to somewhat similar conclusions IIRC: https://philpapers.org/rec/CHAMIW