Here, I defend agent-centered norms in epistemology.*
[* From: “Epistemological Egoism and Agent-Centered Norms,” in Evidentialism and its Discontents, ed. Trent Dougherty (Oxford University Press, 2011), pp. 17-33. ]
1. Agent-Centered Norms
Most people in ethics believe in agent-centered norms. These are norms that direct agents to value a situation involving themselves differently from an otherwise identical situation involving someone else.
For instance, most people say it is (usually) wrong to kill an innocent person, even if doing so prevents two other people from killing innocent people. In a sense, then, you have to place more disvalue on your killing an innocent person than you place on someone else’s killing an innocent person.
Another example is ethical egoism, the view that everyone should serve only their own interests. I.e., the fact that some state of affairs produces a benefit gives S a reason for bringing it about only if the benefit would belong to S.
Q: Are there agent-centered epistemic norms? These would be norms directing you to place different evidential value on your satisfying some condition than you place on someone else’s satisfying the same condition.
For instance, perhaps if you have the intuition that P, you would put more evidential weight on that than if you know that someone else has the intuition that P. Maybe you would believe that P in the case of your own intuition. And maybe when you merely learn that someone else intuits P, you would either fail to believe that P or believe it with less confidence.
So here are two views you could have:
Agent-Neutrality: If satisfying some condition C would give you prima facie justification to believe P, then if you know for certain that someone else satisfies C, you also get the same degree of prima facie justification to believe P.
Agent-Centeredness: In some cases, satisfying condition C would give you prima facie justification to believe P, but knowing for certain that someone else satisfies C gives you either less prima facie justification or no prima facie justification for P.
Note: Assume in all cases that you really know for certain that the other person satisfies C (e.g., has the intuition that P); you don’t at all suspect that they’re lying. You also don’t have any more reason to doubt their reliability than you have to doubt your own reliability, etc.
My claim: The correct view is agent-centered at the fundamental level. In practice, however, people should mostly behave much as an agent-neutralist would.
2. The Case for Agent-Neutrality
On its face, agent-centered epistemological views seem to imply that you should regard yourself as special: e.g., you should treat your own experiences as better evidence about the world than other people’s (otherwise identical) experiences. But what is so special about you?
Compare a parallel challenge to ethical egoism: We can ask the egoist, “What’s so special about you? Why should your interests be the only thing that matters?”
The egoist would reply: “There’s nothing special about me. I’m an egoist, not an egotist. I’m not saying everyone should serve my interests. I’m saying everyone should serve their own interests. No individual is special absolutely, but each individual has a special relationship to himself.”
Similarly, the epistemological egoist might say: There’s nothing special about me. I’m not saying everyone should believe (say) my intuitions. I’m saying everyone should believe their own intuitions.
Objection: These cases are disanalogous. Ethical egoism could be (and has been) defended by the idea that all value is agent-relative, i.e., things are good only for one person or another; nothing is good absolutely. Your own happiness, perhaps, is the only thing that is good for you, and hence the only thing that gives you reasons for action.
But the parallel idea that truth is observer-relative is a non-starter. (https://fakenous.substack.com/p/relativism-what-is-this-nonsense)
Since truth is absolute, if two people have many conflicting intuitions, then at most one can be reliably correct. If you generally believe your own intuitions, then you’re implicitly saying that you’re the one whose intuitions are reliable. If you do that whenever you disagree with anyone, then you’re implicitly saying that you’re special: You’re the one person in the world whose intuitions are most reliable.
3. For Agent-Centeredness
In my view, epistemology is fully agent-centered at the fundamental level. That is, the only basic source of evidence is one’s own experiences. In particular, if it appears to you that P, and you have no specific grounds for doubting that appearance, then you thereby have at least some justification for believing that P—and this is the only ultimate source of justification any beliefs ever have. If you merely find out that it seems to someone else that P, this does not intrinsically provide you any reason at all to believe that P.
But, starting from this “egoistic” perspective, you can of course still gain justification for believing things that other people apprehend. All normal people have plenty of evidence that other people are reliable about lots of stuff. E.g., you know very well that when another person seems to see a squirrel, there almost always is a squirrel in front of them. So that background knowledge, combined with knowledge that another person had a squirrel-seeing experience, gives you justification for thinking that there was a squirrel there.
Similar points apply to all other forms of cognition. Given your general background belief system (which was all justified by your own appearances), it is likely that many other people are as reliable as you (or more so) in many areas. Sometimes you should defer to them. This is compatible with a maximally agent-centered view of the foundations.
Notice how this is different from your attitude to your own appearances: You do not need to first gather evidence that your own appearances are reliable indicators of reality, before forming beliefs based on those appearances. You can tell that you don’t need to do that, because if you did, then you could never have any justified beliefs. You’d have an infinite regress, because the only way to gather that “evidence” would be via other appearances of yours.
You start by trusting yourself; others have to earn your trust.
4. Disagreement
Notice the implications for disagreement: It is in principle possible for two people to have intractable, justified disagreements, even when each person knows all of the other person’s evidence. I could know that it seems to you that P and that you have no defeaters for P, while at the same time it does not seem to me that P. If I don’t have any preexisting belief in your reliability (nor any evidence that you’re reliable), then I could not justifiably adopt the belief that P myself, even though I could know perfectly well that you are justified in believing P. You could also know that this is the situation. So we could both know that the other person is justified, yet continue to disagree.
Maybe this sort of thing sometimes happens. E.g., maybe we lack justification for thinking that other people's philosophical intuitions are reliable, and thus many of our disagreements resulting from differing philosophical intuitions are justified on both sides.
I don’t think this happens very often, though. I think most of the time, we have good reason to trust other people’s judgments, but we dogmatically refuse to do so, often because we have emotional attachments to our own prior beliefs. (See http://www.owl232.net/papers/irrationality.htm.)