The Puzzle of Agent-Centered Evidence
Here’s a type of occurrence you sometimes hear about. Sue is scheduled to get on a plane to fly somewhere. Before the flight, she finds herself having a bad feeling about it. For no known reason, she feels strongly disinclined to get on the flight. She doesn’t go. Later, she learns that that flight crashed. If she had gotten on it as scheduled, she would have been killed.
That’s a fictional instance of a real genre of stories. For some real-life stories like this, see: https://listverse.com/2014/04/28/10-unnerving-premonitions-that-foretold-disaster/
What inferences, if any, should we draw from such an event?
I have an epistemological puzzle about this. Think about how Sue would react to this event, compared to how third parties would react. Sue is much more likely to conclude that there is precognition. If Sue ever has a bad feeling about a flight again (or about anything else), she will listen to that feeling. But when you hear about what happened to Sue, you are much more likely to say, “Oh, it’s just coincidence. Sometimes people have bad feelings, sometimes planes crash; every once in a while, the two types of events randomly coincide.” If you later have a bad feeling about a flight, you’ll probably get on it anyway. This is commonly understood to be the rational response.
This looks like an example of agent-centered evidence.
Background: In ethics, agent-centered norms direct us to value our own performance of certain actions differently from someone else’s performance of those same actions. E.g., it is widely believed that you should not kill an innocent person, even if doing so would somehow prevent someone else from killing two other innocent people. So you have stronger reason to avoid committing murder yourself than to prevent other people from committing murder.
Another example: if your child and a stranger are drowning, and you can only save one person, you should save your own child. That’s true even if the other person’s life is more objectively valuable (e.g., happier, more moral, has more life ahead of him). But a random third party wouldn’t have reason to save your child; they should save the more objectively valuable life.
Not everyone accepts this. Utilitarians deny that there are any agent-centered norms. So the utilitarian would counsel killing the innocent person in the first scenario, and saving the more objectively valuable person in the second. Utilitarianism is agent-neutral.
Maybe there are agent-centered epistemic norms, or agent-centered pieces of evidence. Evidence is agent-centered when the very same fact has different evidential significance for different subjects, even when both have the same degree of confidence in that fact and the same background information. The precognition case looks like it might be such a case: the evidence is that Sue had a premonition about the flight, and then the plane crashed. For Sue, that’s pretty strong evidence of precognition. We would completely understand Sue’s resolution to never get on a plane that she has a bad feeling about; this would not seem unreasonable at all. But for third parties, it’s not very convincing. Is it?
Is this just a straight case of agent-relativity of evidence? Why might the evidence be more rationally persuasive to Sue than it is to outside observers?
Here’s one explanation. When outside observers learn about it, they should think,
“This event is a biased sample from the class of stuff that happens. The reason I heard about this story is that something weird happened – if Sue had a premonition that was completely wrong, then the story wouldn’t get repeated and I wouldn’t have heard about it. Furthermore, since there have been billions of people in the world, I should initially expect that some things like this would have happened, even if there were no precognition or ESP.”
But when Sue herself experiences the event, she shouldn’t say that. To her, her own life is not a biased sample. If nothing happened, or the premonition was wrong, Sue would still know about it. It’s not as if she searched through lots of people’s lives looking for stories like this; she only has the one life that she had a chance to directly experience.
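To make the selection effect concrete, here is a small back-of-the-envelope calculation in Python. The base rates are made up purely for illustration (they are my assumptions, not figures from the story): suppose a passenger has a 1-in-10,000 chance of a strong bad feeling before a flight, and a flight has an independent 1-in-1,000,000 chance of crashing.

```python
# Hypothetical numbers, assumed for illustration only: chance of a
# strong bad feeling before a flight, and chance of a crash, with no
# precognition and full independence.
p_coincidence = 1e-4 * 1e-6  # premonition AND crash, per person-flight

# Sue's point of view: she experiences only her own life, say 100 flights.
flights_per_life = 100
p_in_one_life = 1 - (1 - p_coincidence) ** flights_per_life

# A third party's point of view: they could have heard such a story
# from any of (say) a billion flyers.
n_flyers = 1_000_000_000
p_somewhere = 1 - (1 - p_coincidence) ** (n_flyers * flights_per_life)

print(f"chance in Sue's own life:  {p_in_one_life:.2e}")
print(f"chance somewhere on earth: {p_somewhere:.2%}")
```

On these made-up numbers, the coincidence is wildly improbable within any one designated life, yet close to certain to occur in *someone's* life, which is just what the biased-sample explanation says the third party should expect.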
That seems to make sense. Two people can get “the same evidence” but by a different evidence-collection method, and of course that can affect the significance of the evidence. (I put “the same evidence” in quotes because you might say that the information about how the evidence was collected is part of your evidence, so it's not really the same.)
There is still something weird about this, though, because Sue knows how the situation looks to third parties, and they know how the situation looks to her. Both seemingly know the same facts. The third parties know that Sue’s experience is not a biased sample to her. She knows that her experience is, to other people, just the experience of one among the 7 billion people on earth, and not particularly remarkable to them.
According to the above explanation, the difference between Sue and the third party is this: Sue had the chance to experience exactly 1 person’s life, and that one life had a weird, precognition-like experience in it; third parties, on the other hand, had the chance to hear about any of millions or billions of people’s lives, and all they know is that at least one of those lives had a weird, precognition-like experience in it. (Assume that they are completely confident that the story happened as described – I don’t want to talk about the “asymmetry” that only Sue knows if she is lying.)
If that’s the difference, we should be able to eliminate the epistemological difference (shouldn’t we?) by ensuring that both parties get the same information: Let Sue know about how many other people there are in the world. Let her have the same statistical information (if any) as the third party has about how often people have precognition-like experiences. But it still seems as if Sue and the third party have different epistemic positions.
Another question: what about people who know Sue personally? If Sue is a stranger to you, you could say, “There are 7 billion strangers, so it’s not that remarkable that at least one of them had a precognition-like experience.” But if Sue is a member of your immediate family, you might say, “There are 4 members of my family, of which one had a precognition-like experience” – which sounds a lot more remarkable. I think Sue’s family would in fact be more impressed than strangers would be. But is this rational?
And if so, why couldn't we extend this to Sue's barista at Starbucks? The barista could say, "I have had only 100 customers today, of whom one had a precognition-like experience", which sounds fairly remarkable. You shouldn't expect 1 of 100 people to have had such a weird experience, unless ESP is real.
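The reference-class worry above can be put in numbers. Here is a sketch, again with an assumed, purely illustrative base rate: suppose 1 person in 10 million ever has a striking premonition-plus-crash experience by pure chance. Then the chance that at least one of N people has one is 1 − (1 − p)^N, which comes out very differently for the stranger, the family member, and the barista.

```python
# Assumed (hypothetical) base rate: 1 in 10 million people ever has a
# premonition-plus-crash coincidence by pure chance.
p = 1e-7

def p_at_least_one(n):
    """Chance that at least one of n people has such an experience."""
    return 1 - (1 - p) ** n

for label, n in [("strangers on earth", 7_000_000_000),
                 ("members of my family", 4),
                 ("today's customers", 100)]:
    print(f"{label:>22} (n={n}): {p_at_least_one(n):.2e}")
```

With these numbers, such a story is nearly guaranteed to arise somewhere among 7 billion strangers, but astronomically unlikely within a 4-person family or a 100-customer day, so if shrinking the reference class were legitimate, the family and the barista really would have strong evidence. Whether they are entitled to shrink it that way is exactly the puzzle.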
And by the way, here is another case of agent-centered evidence: Philippa and Judie both hear about the Trolley Problem. Judie has the intuition (as some people do) that turning the trolley is wrong. Philippa, let’s suppose, has no definite intuition about it. Both people can understand both sides – both can see why one might think it wrong to turn the trolley, and both can see why one might think it right (as is typical in this case). So it’s not as if one of them should find the other one’s reaction strange or crazy. It’s just that one of them finds the anti-turning intuition subjectively compelling, while the other one feels both intuitions with about equal force. Both report their reactions to each other, and both completely believe the other’s subjective report. Assume that neither knows any arguments for or against turning the trolley beyond the obvious.
What is most likely to happen is that Judie will believe that turning the trolley is wrong, while Philippa will withhold judgment. Is this rational, or is one of the two just being biased?