The Puzzle of Metacoherence
Here, I address a puzzle about coherence between beliefs and meta-beliefs.*
[ * Based on: “The Puzzle of Metacoherence,” Philosophy and Phenomenological Research 82 (2011): 1-21. ]
1. The Metacoherence Requirement
Suppose you have a firm belief — say, you’re convinced that God exists. You reflect on whether you know this to be true. There are three outcomes: either you conclude that yes, you do know it; or you conclude that you don’t know whether it’s true; or you suspend judgment about whether you know it.
In the first case, you’re doing fine as far as coherence goes. In the second case, though, you have an incoherence between your first-order belief,
(1) “God exists”
and your meta-belief,
(2) “I don’t know whether God exists”
I call this meta-incoherence. Propositions (1) and (2) are perfectly consistent with each other; it’s entirely possible for God to exist but for you personally to fail to know this. Still, it seems irrational for you to think those two things. If you accept (2), it seems that you have to stop believing (1). I call this the Metacoherence Requirement (MR).
What about the case where, on reflection, you can’t tell whether you know, so you withhold judgment about whether your belief in God constitutes knowledge? This is less clear, but I think that here, too, you’d have to stop believing in God.
(For more, see: https://fakenous.substack.com/p/moores-paradox-and-the-norm-of-belief.)
Qualifications:
You don’t have to reflect. If you haven’t reflected on your belief that it’s raining, you may have no opinion about whether the belief counts as knowledge. That’s fine.
I’m using a strong sense of “belief” here. When epistemologists say that knowing P requires believing P, we mean “belief” in a strong sense (“categorical belief”): you can’t just tentatively believe P. Roughly, you must believe it without doubts; you must treat the possibility of ~P as ignorable. (This does not mean that you would say the probability of P is 100%, since contingent external-world propositions never or almost never have 100% probability.)
The meta-belief need not be so strong. When you reflect on your belief that P, you must (to maintain metacoherence) conclude that you know P. (If you can’t do that, you have to give up believing P.) However, I think you don’t have to categorically believe that you know P; the meta-belief can be a weaker sort of belief. (Why do I say this? Basically, to avoid a regress with an attendant threat of skepticism.)
Why accept the Metacoherence Requirement? Basically, intuition. It just seems somehow incoherent and irrational for me to categorically believe some proposition but also doubt that I know it.
2. The Puzzle
On the face of it, this is odd: “I know P” is in general a much stronger proposition than “P”. Therefore, it seems that it should be possible for me to have sufficient justification for [P] while lacking sufficient justification for [I know P]. By the meaning of “justification”, it seems that in such a case, I should believe P while not believing that I know P.
Does this analytically refute the Metacoherence Requirement? Not really. Essentially, MR treats doubts about [I know P] as defeaters for [P]. I.e., once you come to doubt that you know P, you are no longer justified in categorically believing P. This is coherent.
However, this looks like an amazing tool for a skeptic. All a skeptic need do is raise questions about whether you know P, and suddenly the requirements for justifiedly believing P go up. This would explain why entertaining skepticism often causes people to weaken their beliefs.
But can skeptics regularly deprive us of justified belief in this way? We would like to say not. It seems that most of our beliefs should survive a little reflection. But this means that, at least in most cases, when we have justification for P we somehow also happen to have justification for [we know P]. Why would this be true?
I see three main responses to this puzzle (other than rejecting Metacoherence):
Skepticism: Maybe none or almost none of our beliefs are justified.
Bootstrapping: Maybe you can somehow use your justified belief that P to get to the justified belief [I know P].
Happy Coincidence: When we’re justified in believing P, we usually just happen to have an independent justification for [I know P].
Ultimately, I think it’s going to have to be #3.
3. Skepticism
Many traditional skeptical arguments can be viewed as invoking MR. Today, we usually represent skepticism as a specific philosophical thesis (say, the thesis that we know nothing; that we know no contingent, external-world propositions; etc.). But some ancient skeptics apparently thought of skepticism as a habit of suspending judgment, and the skeptical arguments as tools for inducing this suspense of judgment, rather than merely as tools to establish a specific thesis. Descartes’ skeptical arguments in the Meditations were also supposed to be tools for inducing suspense of judgment.
How do you get suspense of judgment out of arguments for the conclusion that you don’t know anything? Via the Metacoherence Requirement: Once you conclude that you don’t know that P, you must (according to MR) give up the belief that P.
I don’t have anything new to say here about what’s wrong with skepticism. I just assume (with the rest of the field) that skepticism is to be avoided.
4. Bootstrapping
The bootstrapping approach would be to use one’s belief that P (or the justification for that belief) to support the proposition that one knows that P. How this would work depends on your theory of knowledge. Let’s consider two theories.
4.1. Defeasibility
On the defeasibility theory, you know P if you believe it, it’s true, it’s justified, and you have no (genuine) defeaters for the belief that P. (A defeater for P is a true proposition that, when added to your beliefs, would leave you no longer justified in believing P. There is a distinction between “misleading” and “genuine” defeaters, but that is too much complication to go into now.)
The main question regarding metacoherence would be: Given that you’re justified in believing P, how would you normally be justified in believing that there are no defeaters for P?
Perhaps you could use your justified belief that P to reject any rebutting defeaters for P (defeaters that would support ~P). But what about undercutting defeaters (defeaters that don’t support ~P but that call into question the reliability of your belief-forming method)?
There is no obvious way in which P would support [there are no undercutting defeaters for P]. Example: let’s say that if an object is illuminated by red light, this makes your perception of its color unreliable. So [that table is illuminated by red light] is an undercutting defeater for [that table is red].
You cannot use [that table is red] as a basis to reject [that table is illuminated by red light], because there’s no obvious reason why red tables would be any less likely than any other tables to be illuminated by red light.
So even if you like bootstrapping in general, there is no apparent way of bootstrapping from belief in P to belief in [I know P], on the defeasibility account of knowledge.
4.2. Reliability
According to reliabilism, you know P provided that you correctly believe it and your belief was produced by a reliable faculty. On this view, the main question regarding metacoherence would be: Given that you justifiedly believe P, why would you normally be justified in believing that your belief was reliably formed?
You could try bootstrapping: You form a series of beliefs using faculty F. On each occasion, you believe P_i, and you also introspectively believe that you formed that belief using faculty F. After doing this, say, 100 times, you note that you have 100 instances in which faculty F gave you the truth, and you know of no instances where it failed. Hence, you infer that F is reliable.
Most people find it highly counterintuitive to suppose that this might be good reasoning. What’s wrong with it?
Basically, this form of argument should never increase your estimate of the level of reliability of F. For each belief that you form using F, your confidence in that belief should be about equal (at least on average) to your estimate of the reliability of F. Then, after forming many beliefs by this method, your estimate of how many of them are correct should be about equal to that same value. So obviously, you can’t use that to increase your expectation of how reliable F is.
(What if you start out just being completely confident that F is completely reliable? Then you’ll estimate that 100% of your beliefs formed by F are true. But that still wouldn’t result in increasing your estimate of how reliable F is, since that estimate was already 100%. So there’s just no case in which you increase the estimate. So the bootstrapping reasoning isn’t a real argument for reliability.)
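The point can be put in Bayesian terms with a toy calculation (my own sketch, not from the original text): since each “check” of faculty F’s outputs is performed using F itself, the evidence “all 100 checks passed” has probability 1 no matter how reliable F actually is. A flat likelihood leaves the prior untouched, so the bootstrapper’s estimate of F’s reliability cannot rise.

```python
import numpy as np

# Hypothetical prior: the agent spreads credence over three possible
# reliability levels for faculty F.
reliabilities = np.array([0.5, 0.7, 0.9])
prior = np.array([0.2, 0.5, 0.3])

# Bootstrapping "evidence": 100 beliefs formed by F, each checked
# using F itself, so every check passes regardless of F's reliability.
# Hence P(all checks pass | r) = 1 for every hypothesis r.
likelihood = np.ones_like(reliabilities)

# Bayes' rule with a flat likelihood returns the prior unchanged.
posterior = prior * likelihood
posterior /= posterior.sum()

# The expected reliability of F before and after the "evidence":
prior_estimate = (reliabilities * prior).sum()
posterior_estimate = (reliabilities * posterior).sum()
```

Because the likelihood assigns the same value to every reliability hypothesis, `posterior_estimate` equals `prior_estimate`: the track-record argument is evidentially inert, just as the text argues.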
5. Happy Coincidence
The happy coincidence view is, again, that when we justifiedly believe P, we usually also have some independent justification for the further claim that our belief that P is warranted (i.e., it has whatever else a true belief needs to qualify as knowledge). We needn’t always have this independent justification; when we fail to have it, we will just be susceptible to a skeptical doubt that would make us have to give up our initial belief.
5.1. Example: Reliabilism
Think of what a reliabilist might say. When you justifiedly believe that there is a squirrel in front of you, this is because you have a reliable perceptual faculty that detected the squirrel. When you consider whether you know that there is a squirrel in front of you, you use some additional faculties, perhaps including your conceptual understanding, reasoning capacities, and philosophical intuitions. Luckily, those faculties are also reliable, and they tell you that you know there is a squirrel. So you also get justification for believing that you know there is a squirrel. Hence, you satisfy the Metacoherence Requirement without giving in to skepticism.
It’s entirely plausible that reliabilists would think all this. So that’s a reasonable solution to the metacoherence puzzle, to the extent that reliabilism in general is reasonable.
5.2. Example: Phenomenal Conservatism
Phenomenal conservatives think that when it seems to you that P, that (defeasibly) gives you some justification to believe P. So consider the squirrel sighting again. When you see the squirrel, it visually appears to you that there is a squirrel, which you have no reason to doubt, and this is how you justifiedly believe that there is a squirrel.
Then, when you think about whether you know that there is a squirrel, it also seems to you that yes, this does count as knowledge. You have no specific grounds for doubting that, so you’re also prima facie justified in believing that you know. So again, you satisfy the metacoherence requirement while avoiding skepticism.
Again, it’s entirely plausible that phenomenal conservatives would think all this. So this is a reasonable solution to the puzzle, to the extent that PC in general is reasonable.
Often, when you tell people about skeptical scenarios, people have the reaction that those scenarios are ridiculous. This can be thought of as a kind of epistemological intuition: roughly, we intuit that those scenarios are rationally dismissible. This isn’t trivial; it could have been that, even though we initially seemed to see squirrels, tables, and so on, when we considered skeptical scenarios the scenarios would strike us as plausible possibilities. In that case, we would have a meta-incoherence, which would rationally force us to give up our categorical beliefs about the squirrels, tables, etc. Luckily, our intuitive sense of the ridiculousness of skeptical scenarios enables us to form justified meta-beliefs that cohere with our ordinary beliefs about the world.