The Problem of Defeasible Justification
Here, I explain the problem of defeasible justification.* This is a philosophical puzzle that applies to almost all our knowledge and justified beliefs (all those that have fallible justification); Hume’s problem of induction and the problem of external-world skepticism are special cases. (Caution: what follows is pretty abstract-logicky.)
Defeasible justification is the kind of justification that is capable of being defeated, i.e., it leaves open the possibility that you will acquire some further information such that, when that new information is added, you’ll no longer be justified in believing what you’re now justified in believing. This is basically equivalent to fallible justification or justification in which the evidence doesn’t deductively entail the conclusion.
1. Some Skeptical Problems
Skeptics like to attack defeasible justification, trying to show that the things you think are defeasibly justified are not justified at all. Most epistemologists take their task to be to figure out how the skeptics are wrong. So here are some skeptical puzzles.
1.1. The Brain in the Vat
How do you know that you’re not a brain in a vat? It seems that you have no evidence against the BIV hypothesis, since all your evidence about the external world consists of your sensory experiences, and the BIV would have the same sensory experiences. Since you have no evidence against the BIV scenario, you have no justification for rejecting it, so you have no justification for believing that anything you perceive is real.
1.2. The Grue Problem
Nelson Goodman introduced the predicate “grue”, where
X is grue iff either (i) X was first observed before 2100 A.D. and X is green or (ii) X was not first observed before 2100 A.D. and X is blue.
For example, all the emeralds that human beings have ever found have been grue, since they were all observed before 2100, and they were all green. The new emeralds that we dig out of the ground after 2100 A.D., however, will presumably not be grue (they won’t be observed before 2100, so they’ll fail condition (i), and they also won’t be blue, so they’ll fail condition (ii)). Or so one would assume.
Normal people assume that, based on our past observations of emeralds, we should infer that all emeralds are green, and not that all emeralds are grue. Goodman introduced the concept “grue” to make the point that, when you observe many A’s that are F, you don’t necessarily have evidence for “All A’s are F”; no matter how many grue emeralds you observe before 2100, you have no evidence that all emeralds are grue.
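To see concretely that the two hypotheses fit all past observations equally well, here is a minimal sketch of Goodman's predicate (the sample emeralds and observation years are illustrative assumptions, not from the text):

```python
GRUE_CUTOFF = 2100  # the year in Goodman's definition

def is_grue(color, year_first_observed):
    # Goodman's predicate: grue iff (i) first observed before 2100 and green,
    # or (ii) not first observed before 2100 and blue.
    if year_first_observed < GRUE_CUTOFF:
        return color == "green"
    return color == "blue"

# Every emerald observed so far is both green and grue, so past
# observations cannot discriminate between the two hypotheses:
past_emeralds = [("green", 1900), ("green", 1990), ("green", 2024)]
assert all(color == "green" for color, _ in past_emeralds)
assert all(is_grue(color, year) for color, year in past_emeralds)

# The hypotheses diverge only for emeralds first observed after 2100:
assert is_grue("blue", 2150)       # "all emeralds are grue" predicts blue
assert not is_grue("green", 2150)  # a green post-2100 emerald is not grue
```

The final two assertions make the skeptic's point vivid: the hypotheses agree on every observation made before 2100 and disagree only about cases we have not yet seen.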
Goodman was not trying to make a skeptical point. Nevertheless, you could imagine a skeptic showing up and seizing the opportunity. This skeptic says: “We have two hypotheses, ‘All emeralds are green’ and ‘All emeralds are grue’. Both hypotheses fit our evidence equally well. Therefore, there’s no reason to prefer the first hypothesis over the second. Therefore, there’s no reason to think that emeralds discovered after 2100 A.D. will be green.”
If the grue-skeptic’s argument works, then notice that it applies to all inductive inferences. For any inductive conclusion you like, you could introduce a grue-like predicate, then formulate a hypothesis that exactly fits all our evidence but predicts what we would intuitively describe as very different things happening in the future from what has happened in the past. The skeptic can then argue that we have no reason to reject this alternative hypothesis, so we have no reason to accept the normal inductive conclusion. This isn’t how the problem of induction is usually formulated, but I think it’s a recognizable formulation of the problem of induction.
Note that none of this involves introducing a super-strong notion of justification. The skeptic is not merely saying that we lack absolutely conclusive reasons for our ordinary beliefs. The skeptic is saying that we have no reason whatsoever for rejecting (a) the BIV hypothesis, or (b) the grue hypothesis. It’s not that we’ve got really strong justification that just falls short of absolute certainty; it’s that we have no evidence at all against those hypotheses.
2. The General Problem
So here is the general, formal description of the problem. Let’s say you have some evidence e and hypothesis h, where e would normally be thought of as defeasibly justifying h. Then there will exist an alternative hypothesis h’, which entails e but is incompatible with h.
The BIV scenario is an example of this, where e is a description of all your sensory experiences, h is the hypothesis that you’re perceiving the real world normally, and h’ is the BIV hypothesis. The grue hypothesis is another example, where e is the fact that all observed emeralds thus far have been green, h is the hypothesis that all emeralds are green, and h’ is the hypothesis that all emeralds are grue.
Given that e only (at most) defeasibly supports h, there must exist such an h’. One way to think of it: by stipulation, e doesn’t entail h. That means that there are possible worlds in which e holds but h is false. Let h’ be a hypothesis describing such a world. (The cheapest way of generating such an h’ is to stipulate h’=(e & ~h).)
Okay, so there is always a competing h’. What’s the problem with that?
Premise 1: If you have two incompatible hypotheses, then you’re justified in believing one of them only if you have some reason (independent of the first hypothesis) for rejecting the other hypothesis.
Premise 2: If a hypothesis entails some evidence, then that evidence isn’t a reason to reject that hypothesis.
Comment: How could the correctness of a theory’s prediction be evidence that the theory isn’t true? If anything, it should be evidence for the theory. There’s also an obvious probabilistic argument: if H entails E (and P(E) > 0), then P(H|E) ≥ P(H).
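The probabilistic argument can be spelled out in two steps, using only the definition of conditional probability and the fact that entailment gives $P(H \wedge E) = P(H)$:

```latex
\begin{align*}
P(H \mid E) &= \frac{P(H \wedge E)}{P(E)}
  && \text{(definition of conditional probability, assuming } P(E) > 0\text{)} \\
&= \frac{P(H)}{P(E)}
  && \text{(since } H \text{ entails } E\text{, } P(H \wedge E) = P(H)\text{)} \\
&\geq P(H)
  && \text{(since } P(E) \leq 1\text{)}.
\end{align*}
```

So conditioning on evidence that a hypothesis entails can never lower that hypothesis's probability, which is just what Premise 2 says.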
So we have evidence e, from which we want to (defeasibly) infer h. According to Premise 1, we need a reason for rejecting h’. According to Premise 2, e itself does not provide any such reason.
So we would need some other evidence, call it e’, that gives us a reason to reject h’. Now, the combined evidence, (e & e’), either entails h or it doesn’t. If it does, then we have indefeasible justification for h. If it doesn’t, then (by the reasoning given above), there must exist a hypothesis h’’, such that h’’ entails (e & e’) yet h’’ is incompatible with h.
So we’ll now need a reason to reject h’’. And you can just repeat all the above reasoning. This leads us on an infinite regress.
So now it looks like there are only two options: we actually have indefeasible justification for h, or we have no justification for h, since we can’t complete the infinite regress. Conclusion: defeasible justification is impossible.
For a simpler argument: Just start out by stipulating that E is a proposition describing all the evidence you have. If E doesn’t entail h, then there exists an h’ that entails both E and ~h. By Premise 1, you need a reason to reject h’. By Premise 2, E doesn’t furnish any such reason. Since E includes all your evidence, you have no evidence that provides a reason to reject h’. So you have no reason to reject h’. So you’re not justified in believing h.
3. Possible Solutions
3.1. Only Explanations Count for Premise 1
First, you might think that we moved too fast in claiming that, when e fails to entail h, there is always an alternative theory, h’, that entails both e and ~h. The easiest way to “establish” that claim is to let h’ = (e & ~h). But you might think “e & ~h” is a pretty bogus example of a “theory”, perhaps because “e & ~h” doesn’t genuinely explain why e is the case, as h typically does explain why e is the case. (Ex.: The Theory of Gravity explains why we see things fall to the ground, but “things fall to the ground and the Theory of Gravity is false” does not explain why things fall to the ground.)
And perhaps Premise 1 only applies to competing explanations of the evidence. I.e., if h and h’ are competing explanations of the evidence, then you need an independent reason to reject h’ before you accept h. But if h’ isn’t an explanation of the evidence at all, perhaps you don’t need this; perhaps you can reject h’ on the basis of h itself. So I could say that my reason for rejecting (e & ~h) is that its second conjunct is false, and my reason for thinking so is h itself.
In response, the skeptic can generally think of better examples of h’ than just “e & ~h”. E.g., the brain-in-a-vat scenario really does seem like a candidate explanation of our evidence. (Note: but only if you regard our evidence as consisting of facts about experiences; see https://fakenous.net/?p=2911 for discussion.) Likewise, if you think that “all emeralds are green” explains our seeing many green emeralds in the past, then it’s unclear why “all emeralds are grue” wouldn’t also be a candidate explanation of that. And in general, skeptics can probably think of an alternative explanation of the evidence almost any time you have a non-deductive inference.
3.2. Foundational Rejection
A second response would be analogous to the foundationalist’s response to the infinite regress argument in epistemology. Foundationalists say that you don’t need an infinite series of reasons for your beliefs, because some things are just intrinsically credible; you’re justified in believing them without a reason.
The application to the problem of defeasible justification would be to say: Premise 1 is false; you don’t always need a reason for rejecting h’. Instead, sometimes you can “foundationally” reject h’, i.e., you just find h’ intrinsically implausible. A Bayesian might say that you just assign a low prior probability to certain hypotheses. Since it’s a prior probability, that means there’s no evidence supporting that assignment; you just start out like that. E.g., you just start out assigning really low probability to the brain-in-a-vat hypothesis.
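The Bayesian version of this response can be sketched in a toy calculation. The point is that when two hypotheses predict the evidence equally well, conditioning on the evidence leaves their probability ratio untouched, so whatever improbability the prior assigned to the skeptical hypothesis simply carries over. (The specific numbers below are illustrative assumptions, not anything from the text.)

```python
# Toy Bayesian model of "foundational rejection": the low probability of
# the BIV hypothesis comes entirely from the prior, not from evidence.
prior_normal = 1 - 1e-9   # "I perceive the real world normally"
prior_biv = 1e-9          # brain-in-a-vat hypothesis

# Both hypotheses predict exactly the same sensory evidence e:
likelihood_e_given_normal = 1.0
likelihood_e_given_biv = 1.0

# Bayes' theorem: posterior = prior * likelihood / P(e)
p_e = (prior_normal * likelihood_e_given_normal
       + prior_biv * likelihood_e_given_biv)
posterior_biv = prior_biv * likelihood_e_given_biv / p_e

# Since the likelihoods are equal, the evidence shifts nothing:
# the BIV hypothesis stays exactly as improbable as its prior made it.
assert abs(posterior_biv - prior_biv) < 1e-15
```

This is why the response has to treat the prior assignment as foundational: no amount of (shared) evidence will ever do the work of lowering the skeptical hypothesis's probability.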
3.3. Necessary Reasons
The third solution I can think of would start by modifying Premise 2. Again, Premise 2 said that if H entails E, then E can’t be a reason to reject H. The modified Premise 2 would be: If H entails E and H and E are both contingent, then E can’t be a reason to reject H.
Why add this qualifier? Recall two quirks about entailment that you learned in intro logic: first, that if x is contradictory, then x automatically entails y, no matter what y is; second, if y is a necessary truth, then x automatically entails y, no matter what x is. (Anything follows from a contradiction; anything entails a necessary truth.)
These two quirks mean that Premise 2 (as originally formulated) is false. Example: (P & ~P) entails P. Yet P is still a reason to reject (P & ~P), since P is a reason to reject the second conjunct. Another example: the Law of Excluded Middle is a reason to reject the Copenhagen Interpretation of Quantum Mechanics; yet the Copenhagen Interpretation entails the LEM (since LEM is a necessary truth, and anything entails a necessary truth).
So say we accept the qualification: Premise 2 only holds when the evidence & hypothesis are contingent. Now the solution to the problem of defeasible justification would claim that, when e defeasibly justifies h, we always have some necessary truths that provide reasons for rejecting h’. What might these necessary truths be? Well, for example, the proposition [h is simpler than h’] will, if true, be a necessary truth, and we might think it’s a reason to prefer h over h’. Or perhaps [h has a higher a priori logical probability than h’]: that’s also a necessary truth (if true at all) and a reason to prefer h over h’.
This sounds fine to me. Note an interesting implication: we have a priori, necessary evidence for certain contingent propositions. E.g., “I’m not a BIV” would traditionally be considered a contingent, empirical truth, yet we (on the current solution) have a priori necessary grounds to believe it.
I like both of the last two solutions, but a lot more work remains to be done on them.
*Based on “The Problem of Defeasible Justification,” Erkenntnis 54 (2001): 375-97.