Probability & Coherence Justification
Here I’ll tell you one of my problems with the coherence theory of justification.*
I. Coherentism vs. Foundationalism
The Coherence Theory of Justification (“Coherentism”) holds that beliefs are ultimately justified by the ‘coherence’ of one’s belief system – i.e., a belief is justified because it is supported by other beliefs in your system, and the overall system fits together well, with lots of mutually-supporting beliefs and few anomalies.
This is in opposition to Foundationalism, which holds that some beliefs are justified in some way that doesn’t require reasons (the “foundational beliefs”, with “foundational justification”), and all other beliefs depend on those starting beliefs for their justification. (Example foundational beliefs: “I exist”, “I’m in pain now”, “2>1”. Maybe “there’s a hand in front of me now.”)
Note: I take the Coherentist view to be that coherence alone suffices to justify our beliefs, without the need for any degree of foundational justification. This is important.
What about “Foundherentism”, the view that you can have some degree of foundational justification, but then this justification gets amplified by coherence? This is supposed to be a hybrid of Coherentism & Foundationalism. But I think it is really just foundationalism. No foundationalist has a problem with foundationally justified beliefs supporting each other and thereby increasing each other’s justification.
II. The Probabilistic Argument for Coherence
Why would one think that coherence alone could justify beliefs? Isn’t this like endorsing circular reasoning?
Imagine that you’re a detective interviewing witnesses to a bank robbery. You interview them all separately, before they’ve had a chance to talk to each other. If multiple witnesses agree on certain specific details about the crime (e.g., the robber had spiked green hair), you’ll conclude that those details are very probably correct, since it’s otherwise a bizarre coincidence that they should agree. If their stories fit together pretty well in general, you’ll conclude that the general picture you get from them is by and large correct. This is true even if you antecedently had no opinion about how reliable the witnesses were.
This sort of example is used by coherentists (e.g. Laurence BonJour, Catherine Elgin) as an analogy for how you should conclude that your belief system is by and large accurate if your many beliefs from different sources fit together well.
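To make the detective intuition concrete, here is a toy Bayesian model (my own construction; nothing this specific appears in BonJour or Elgin): there are K possible descriptions of the robber, each witness is either reliable (reports the truth) or a pure guesser, and the reliability rate r is itself unknown, so we spread our credence evenly over a grid of candidate values.

```python
from fractions import Fraction

K = 10                                        # possible descriptions
rs = [Fraction(i, 10) for i in range(11)]     # candidate reliability rates r

def posterior(n):
    """P(the reported detail is true | n witnesses independently agree on it)."""
    # Reports are independent given r and the truth; we average over r.
    like_true  = sum((r + (1 - r) / K) ** n for r in rs) / len(rs)
    like_false = sum(((1 - r) / K) ** n for r in rs) / len(rs)
    prior = Fraction(1, K)
    return prior * like_true / (prior * like_true + (1 - prior) * like_false)

for n in [1, 2, 3]:
    print(n, float(posterior(n)))   # 0.55, ~0.92, ~0.99: agreement confirms
```

Note, though, that this model already treats a single witness's word as evidence: one report moves the probability from 0.10 to 0.55. That hidden assumption is exactly where the objection below digs in.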
III. Objection
My basic objection: The probabilistic argument only works if you have at least some foundational justification. So it doesn’t succeed in motivating coherentism as against foundationalism.
Explanation: In terms of the witness scenario, what would it mean to have at least some foundational justification? I think it would mean this: that when you interview a single witness, and the witness says A, the probability of A, for you, goes up. So P(A|W_A) > P(A) (i.e., the probability of A, given that a witness asserts A, is greater than the initial probability of A). That's the analog of saying that some of our beliefs have non-coherence-based justification.
So to illustrate coherentism, we should stipulate that, in the witness scenario, P(A|W_A) = P(A), for any given witness. But in that case, even if multiple witnesses all assert A, that gives you no more reason for concluding that A is true than for concluding that A is false. Given that P(A|W_A) = P(A), we must be saying that we initially regard the witnesses as at best random guessers: since the witness's assertion gives us no information about the truth value of A, the witness must be no more likely to assert A if A is true than if it is false. In that case, even if multiple witnesses assert A, that makes it no more likely that A is actually true. The hypothesis that A is true would not explain why the witnesses asserted A any better than the hypothesis that A is false. (You can do a probability calculation to verify this for a simple model; a sketch follows below.)
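Here is that calculation, in a toy formalization of my own (not from the paper): suppose there are K possible answers, and every witness is a pure random guesser, picking uniformly and independently of the truth. That is the cleanest way to get P(A|W_A) = P(A), and however many witnesses concur on A, the posterior never moves:

```python
from fractions import Fraction

K = 10                  # possible answers; prior P(A) = 1/K for each
prior = Fraction(1, K)

def posterior_after_agreement(n):
    """P(A | n witnesses independently assert A), with guessers only."""
    # A guesser asserts A with probability 1/K whether A is true or false,
    # so unanimous assertion has the same likelihood on both hypotheses.
    like_true = Fraction(1, K) ** n
    like_false = Fraction(1, K) ** n
    return prior * like_true / (prior * like_true + (1 - prior) * like_false)

for n in [1, 2, 5, 100]:
    print(n, posterior_after_agreement(n))   # always 1/10: no confirmation
```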
Reply #1
You might say: At the start, we weren’t sure if the witnesses were reliable, but we weren’t completely convinced that they were unreliable either. So if they wind up agreeing with each other, shouldn’t we raise our credence that they’re reliable, and thence raise our credence in what they assert?
But in order to have P(A|W_A) = P(A), we have to have the possibility of reliability equally balanced by the possibility of anti-reliability. I.e., in order to avoid attaching any 'foundational' credibility to each witness's testimony, we need to start out regarding them as equally likely to be reliable truth-reporters as to be reliable liars (i.e., people who systematically report the opposite of the truth). So when they agree with each other, you should lower your credence that they're random guessers, but increase your credence that they are either reliable or anti-reliable. The anti-reliability hypothesis explains their agreement just as well as the reliability hypothesis does.
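A sketch of this situation (my formalization; I use a binary claim, since then anti-reliable witnesses automatically converge on the same falsehood, and give the three witness types equal priors): however many witnesses agree, the probability of A stays put, while credence shifts toward the type profiles "all reliable" and "all anti-reliable", which remain exactly tied.

```python
from fractions import Fraction
from itertools import product

P_A = Fraction(1, 2)   # prior on the binary claim A
TYPE_PROBS = {"rel": Fraction(1, 3), "anti": Fraction(1, 3),
              "guess": Fraction(1, 3)}

def p_says_A(t, a_true):
    """Probability that a witness of type t asserts A, given A's truth value."""
    if t == "rel":
        return Fraction(1) if a_true else Fraction(0)
    if t == "anti":
        return Fraction(0) if a_true else Fraction(1)
    return Fraction(1, 2)          # guessers say A half the time either way

def joint(profile, a_true):
    """P(these witness types & all of them assert A & A has this truth value)."""
    p = P_A if a_true else 1 - P_A
    for t in profile:
        p *= TYPE_PROBS[t] * p_says_A(t, a_true)
    return p

n = 3                              # three witnesses, all asserting A
profiles = list(product(TYPE_PROBS, repeat=n))
evidence = sum(joint(pr, v) for pr in profiles for v in (True, False))

post_A = sum(joint(pr, True) for pr in profiles) / evidence
print(post_A)                      # 1/2: unanimous testimony never confirms A

uniform = (joint(("rel",) * n, True) + joint(("rel",) * n, False)
           + joint(("anti",) * n, True) + joint(("anti",) * n, False))
print(uniform / evidence)          # 8/27, up from a prior of 2/27: agreement
                                   # favors "all reliable or all anti-reliable",
                                   # but those two hypotheses stay exactly tied
```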
Reply #2
You might think: But even if you’re anti-reliable, there are many different false claims that you could make, whereas if you’re reliable, there is only one thing you can say, the one truth. So if the witnesses are anti-reliable, they should generally not agree; if they’re reliable, they should generally agree. So if you find them agreeing, then – even if you initially considered reliability and anti-reliability equally likely – you should conclude that they’re reliable.
Problem: this view gives you the analog of foundational justification, i.e., it implies that P(A|W_A) > P(A). Suppose we start out with P(reliable) = P(anti-reliable) = P(random guesser) = 1/3. And let’s suppose there are 10 possible ways for the world to be, call them A1, A2, …, A10. Then the initial probability of, say, A1, should be 1/10. But the probability of A1 given that a single witness has asserted A1 is:
P(A1|W_A1) = P(reliable|W_A1)*P(A1|W_A1 & reliable) + P(anti-reliable|W_A1)*P(A1|W_A1 & anti-reliable) + P(random|W_A1)*P(A1|W_A1 & random)

= (1/3)(1) + (1/3)(0) + (1/3)(1/10) ≈ 0.367

(Why the 1/3 weights survive conditioning on the testimony: assuming an anti-reliable witness picks one of the 9 false answers at random, each of the three hypotheses makes the assertion of A1 exactly 1/10 likely, so the testimony doesn't shift credence among them.)

This is greater than the initial probability of A1, 0.1. So that's like having a degree of foundational justification.
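The arithmetic checks out, and the same model shows how quickly agreement then amplifies that initial credibility. A sketch (again assuming an anti-reliable witness picks one of the 9 false answers at random):

```python
from fractions import Fraction

K = 10
p_rel = p_anti = p_guess = Fraction(1, 3)

def p_says_A1(a1_true):
    """Probability that one witness asserts A1, given whether A1 is true."""
    if a1_true:
        return p_rel + p_guess / K           # reliable, or a lucky guess
    return p_anti / (K - 1) + p_guess / K    # anti picks A1 1 time in 9, or guess

def posterior_A1(n):
    """P(A1 | n witnesses independently assert A1)."""
    num = Fraction(1, K) * p_says_A1(True) ** n
    den = num + Fraction(K - 1, K) * p_says_A1(False) ** n
    return num / den

for n in [1, 3, 5]:
    print(n, float(posterior_A1(n)))  # 0.3667, ~0.94, ~0.998: testimony starts
                                      # out evidential, and agreement amplifies
```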
Possible fix: To avoid having the equivalent of "foundational justification" in the story, we have to say that each way of being anti-reliable (i.e., each alternative, false answer that you could be biased toward) is just as likely as the reliability hypothesis. But of course, this also gives us, again, the result that it doesn't matter how many witnesses agree on A1; A1 still won't be confirmed, because the hypothesis that A1 is true would not explain why they agreed at all (they'd be equally likely to agree on A1 if A1 were false). The sketch below checks this.
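Formalizing the fixed-up model (my construction): given the true state, a witness reports the truth with probability p, reports each particular false state with that same probability p, and otherwise guesses at random. Now testimony is genuinely uninformative, and no amount of agreement confirms anything:

```python
from fractions import Fraction

K = 10
p = Fraction(1, 20)      # weight of reliability = weight of each of the
g = 1 - K * p            # K-1 bias hypotheses; the rest is random guessing

def p_says_A1(a1_true):
    # Either way, exactly one hypothesis of weight p produces the assertion
    # "A1" (reliability if A1 is true, the bias toward A1 if it is false),
    # plus a possible lucky guess, so the assertion carries no information.
    return p + g / K

def posterior_A1(n):
    """P(A1 | n witnesses independently assert A1)."""
    num = Fraction(1, K) * p_says_A1(True) ** n
    den = num + Fraction(K - 1, K) * p_says_A1(False) ** n
    return num / den

for n in [1, 5, 50]:
    print(n, posterior_A1(n))   # always 1/10: unanimity confirms A1 not at all
```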
IV. Conclusion
Coherence justification is parasitic on foundational justification. So coherentism isn’t a viable alternative to foundationalism.
[*See “Probability & Coherence Justification,” Southern Journal of Philosophy 35 (1997): 463-72.]