Here, I discuss how we know stuff and why skepticism is wrong.*
[ *Based on: “We Can Know” and “Response to Lammenranta,” pp. 26-38 and 56-60 in Problems in Epistemology and Metaphysics, ed. Steven Cowan (Bloomsbury, 2020). This was a debate about skepticism with Markus Lammenranta. ]
1. How We Know
Here is an approximate definition of knowledge: to know P, (i) you must believe P, (ii) P must be true, (iii) your belief must be justified, and (iv) there must be no further facts out there that, if you were made aware of them, would defeat your justification for believing P. This definition has some counterexamples in unusual circumstances, but it is good enough for our purposes.
Usually, when someone asks “how you know” something, they are interested in (iii), i.e., how you are justified in believing the thing.
The core of the answer to that is that justification derives from appearances: one is justified in believing that things are the way they appear, provided that one has no specific reasons for doubting that. (Reasons for doubt would themselves have to come from other appearances.) I call this view “Phenomenal Conservatism” (PC).
Appearances are a type of mental state, which you report when you say “so-and-so seems true” or “it appears that such-and-such”. This is not a belief but an experience that typically causes beliefs. There are several species of appearances, including (a) sensory experiences, (b) memory experiences, (c) introspective appearances, (d) intuitions, and (e) inferential appearances.
All beliefs that are prima facie candidates for rational belief are based on appearances, i.e., at some point a person must believe P because P seems correct to them. This includes beliefs based on arguments; arguments are not an alternative to appearances but just another example of appearance-based belief. For a person to be persuaded by an argument, the starting premises must seem true to them, and the conclusion must seem to them to be correct in the light of those premises (i.e., they must have an inferential appearance).
This also includes epistemological beliefs, such as beliefs about what makes beliefs justified. For this reason, it would be self-defeating to believe that appearance-based beliefs aren’t justified. If appearances aren’t a valid basis for beliefs, then they also aren’t a valid basis for the belief that appearances aren’t a valid basis for belief. But people who hold that belief hold it only because it seems to them that appearances aren’t a valid basis for belief. So either they are wrong, or their belief has no valid basis.
2. The Regress Argument
The Argument
The Regress Argument for global skepticism goes something like this:
(1) A belief is justified only if the believer has a reason for it, and this reason must also be justifiedly believed.
(2) No one can have an infinite series of reasons.
(3) No belief can be justified by circular reasoning.
Therefore, no belief is justified.
A series of reasons must either start with something that there are no reasons for, or go on infinitely, or go in a circle. The first alternative is ruled out by (1), the second is ruled out by (2), and the third is ruled out by (3); i.e., none of these alternatives can provide justification. So no belief is justified.
This Isn’t a Serious Argument
Notice how radical this conclusion is. It applies to every belief whatsoever, and it denies that they can have any degree of justification whatsoever.
You should also notice, therefore, that the conclusion is self-defeating. If it is correct, then there is no justification for believing it, nor for believing any of the premises of the argument, nor for thinking that the premises support the conclusion, etc. Nor does the argument provide any reason for believing skepticism, so it’s unclear what the point of saying it would be.
You should also notice that the conclusion is absurd. E.g., it implies that there is no more reason to think that there are people living in Denver than there is to think that the Earth rests on the back of a giant turtle. That’s just ridiculous, and I’m not going to pretend to take that seriously, even if there are adults who pretend to believe it.
As G.E. Moore argued, in order for any argument to rationally persuade anyone of its conclusion, (the conjunction of) the premises of that argument must be more initially plausible than the denial of the conclusion. Otherwise, it will be more rational to simply reject one of the premises than to accept them all and endorse the conclusion. (One cannot rationally reject something one has more confidence in on the basis of something one has less confidence in.) But the negation of skepticism is more or less maximally initially plausible, so it is almost always going to be more plausible to reject one of the premises of a skeptic’s argument than to accept the conclusion.
What Went Wrong?
The standard view is that the mistake is (1). Foundationalism holds (correctly) that some beliefs are justified in a way that doesn’t require reasons for them. E.g., if I’m in pain, I don’t need any reasons to believe I’m in pain; I just know immediately that I am.
More generally, it is rational to start out by believing whatever seems true to you, as long as you have no reasons for doubting it. You don’t need a reason to believe the appearances; you need a reason to doubt them.
Notice that this view avoids dogmatism: it allows you to revise your starting beliefs if you acquire reasons for doubting them. At the same time, it does not impose an impossible requirement of producing an infinite chain of reasoning before acquiring a belief.
By the way, (1) is really on its face quite a bizarre requirement to start with. It is to me a mystery why anyone has ever believed (1), other than that they maybe just didn’t spend much time thinking about cases. If someone is in pain, it is just totally bizarre to assert that the person can’t know they are in pain unless they make some kind of inference for that conclusion. Yet skeptics commonly just assert (1) as self-evident (which is also self-defeating, in addition to being bizarre).
3. Cartesian Skepticism
The BIV Argument
Maybe you’re a brain in a vat who is being stimulated in such a way as to produce an illusion of having a body and living in the real world. How can you know this isn’t the case? According to the Brain-in-a-Vat Argument:
Your beliefs about the world around you are justified only if you have justification for believing that you are not a brain in a vat.
You have no justification for believing that you are not a brain in a vat.
Therefore, you have no justified beliefs about the world around you.
Notice that the claim is not merely that we aren’t absolutely certain that our beliefs about the external world are true. The claim is that we have no justification whatsoever for thinking they are. Why would that be the claim? Because you have no justification whatsoever for rejecting the BIV scenario. It’s not that you have a lot of evidence that just isn’t quite conclusive. No, you have no evidence at all, not one single piece of evidence, against the BIV scenario. (All your evidence comes from your experiences, but whatever experiences you’ve had could just as easily be produced by the scientists stimulating your brain as by the external world directly.)
The PC Response
The first problem with the BIV argument is that it has the burden of proof backwards. It assumes that we need positive evidence for rejecting skeptical scenarios in order to be justified in believing in the real world. But given Phenomenal Conservatism (see sec. 1), the Real World Hypothesis is the rational default: we rationally start by assuming that things are the way they appear, unless and until we get reasons for doubting that. The way things appear is that we’re living in normal bodies, moving around the real world. So we’re justified in thinking that until we acquire reasons for thinking that we are brains in vats. But there are no such reasons.
BIV Is a Bad Theory
The other problem with the BIV argument is that the BIV hypothesis is just a terrible theory. It is terrible, basically, because it is unfalsifiable. No matter what experiences you had, you could always hypothesize that they were produced by the BIV-stimulating apparatus, which means the theory can’t be tested. And what’s bad about that is that it means that there can’t be any evidence for the theory, which means that we in fact have no reason to believe it.
The Real World Hypothesis (RWH), by contrast, is much more testable, because it predicts that you should have a basically coherent series of experiences that could be interpreted as representing real physical objects. That is actually a fairly specific prediction, because the overwhelming majority of sets of experiences would not be like that (they would just be random noise). So the RWH is falsifiable, where the BIVH is not.
4. Why Discuss Skepticism
The task of an epistemologist, in my view, should be to explain the differences between theories like “the Earth rests on the back of a giant turtle” and theories like “the Earth was formed by accretion from a solar nebula”—not to declare that there is no difference. The epistemologist should help resolve reasonable disputes, such as whether string theory is justified, by providing realistic criteria for justifying a theory—not carry on insincere disputes about such things as whether I know how many fingers are on my left hand.
Why, then, discuss skepticism? Because it provides a test for epistemological theories: If you have a good, comprehensive theory of justification, it should be able to explain what’s wrong with skeptical arguments.
5. Semantic Disputes
Many self-described skeptics basically agree with everyone else about all the substantive matters — e.g., about what you should believe and why — but they have a semantic dispute with non-skeptics: namely, the skeptics think the word “know”, in English, has a much stricter meaning than other epistemologists think it has. I described my interpretation of “know” in sec. 1. These skeptics, however, think that to “know” something, in the ordinary English sense, requires that one have evidence that rules out, with 100% absolute certainty, every logically possible alternative to what you believe. And basically all epistemologists agree that you virtually never have such evidence.
So, what’s wrong with the skeptics’ semantic theory? Basically, I think the meanings of words are determined by their usage. Most words are used to group together certain things in the world that strike us as similar, and to distinguish them from other things that strike us as different. In the case of “knowledge”, we’re trying to group together cases like, say, my belief that I have 2 hands and mathematicians’ belief in the Pythagorean Theorem after a proof of it was discovered — and to distinguish them from cases like, say, the theory that the Earth rests on the back of a giant turtle, or the proposition that the number of atoms in the universe is even.
So the correct definition of “knowledge” should be something that captures the belief that I have 2 hands and the belief in the Pythagorean theorem, and does not include the belief that the Earth rests on the back of a turtle.
The skeptic’s definition of “knowledge” just completely and utterly fails to do that. I.e., it obviously fails to capture how “knowledge” is in fact used in English.
“it obviously fails to capture how ‘knowledge’ is in fact used in English.”
Is that really the goal, to capture ordinary usage? Isn’t ordinary usage floppy to the point of uselessness?
Maybe it isn’t knowledge that needs defining, but justification.
I used to think that to know something is to be willing to take actions that depend on it for success. But we can act on the basis of a guess, too, and a guess is pretty much the opposite of knowledge. There are probably also cases where people are not willing to act on the basis of what they claim to know, though a good example does not come to mind.
I disagree that the skeptical argument is “unserious.” I can see why someone might believe that all of our beliefs need reasons: it is almost always seen as silly to believe something for no reason (and for good reason). The only cases where believing something for no reason makes sense are these esoteric debates in philosophy, so, with some epistemological empathy, I can see why someone might make this mistake.
It’s a creative argument, and it takes creative counterexamples to respond to. The idea that pain is foundational isn’t one I would have known about had I not read you (unless I had one day read it from some other philosopher).