Here, I resolve Moore’s Paradox and identify the norm of belief.*
[* Based on: “Moore’s Paradox and the Norm of Belief” in Themes from G. E. Moore, ed. Susana Nuccetelli and Gary Seay (Oxford University Press, 2007), pp. 142-157.]
1. Moore’s Paradox
Suppose you ask me what the weather is like. I respond, “It is raining, but I don’t believe that it is raining.”
And just to be clear, I’m not using hyperbole or a figure of speech; I am not merely saying “I’m very surprised that it is raining.” I literally mean that I do not have the belief about rain at all; nevertheless, I am also asserting that in fact, it is raining.
There would be something wrong with my statement. It sounds contradictory, although it is not in fact contradictory — there is a possible world in which it is raining and at the same time I fail to believe that it is raining. Notice also that there would be nothing wrong with someone else saying, “It is raining, but Mike doesn’t believe that it is.” Yet this other person would be asserting the same proposition that I’m asserting with my “It is raining, but I don’t believe it.”
Given that the statement is logically consistent, what is wrong with it, and why does it sound contradictory?
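The consistency point can be put in simple doxastic-logic notation (my gloss, not the author's notation; \(p\) = "it is raining", \(B\) = "I believe that"):

```latex
% Moore's sentence asserts a consistent conjunction:
p \land \lnot Bp
% It is true at any world where it rains and I lack the belief --
% unlike a genuine contradiction, which no world satisfies:
p \land \lnot p
```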
This minor puzzle is “Moore’s Paradox” (so named by Wittgenstein, who thought this was G.E. Moore’s greatest contribution to philosophy).
Extensions
Note that the puzzle is not merely about assertion. It would also be absurd to think to yourself, silently, that it is in fact raining but that you do not believe that that is the case.
Also notice that there are many similarly paradoxical statements where you substitute something else for “I don’t believe it”. All of the following are defective (though only #2 is contradictory):
1. It is raining, but I don’t believe that.
2. It is raining, but that’s not true.
3. It is raining, but there’s no reason to think that.
4. It is raining, but I based that belief on a false premise.
5. It is raining, but my way of finding that out is unreliable.
6. It is raining, but my justification for that is defeated.
7. It is raining, but I don’t know that.
These sorts of statements are called “Moore-paradoxical statements”.
Epistemologists will notice something interesting about that list: The second conjunct in each entails that the speaker does not know the first conjunct. #1-6 just identify different ways of not knowing.
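Schematically (my notation, not the author's): if C is any necessary condition on knowledge, then denying that C holds entails not knowing, by contraposition:

```latex
% A denied necessary condition entails ignorance:
(Kp \to C(p)) \;\vdash\; (\lnot C(p) \to \lnot Kp)
% Instances of C(p): Bp (#1), p (#2), Jp (#3), no false premise (#4),
% reliable method (#5), undefeated justification (#6).
% #7 denies Kp outright.
```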
Failed Solutions
(a)
Wittgenstein thought the answer was that “believe” has a different meaning when used with the first-person pronoun: “I believe P” is really just a tentative assertion of P. Similarly, I guess, “I don’t believe P” is sort of a tentative denial of P (?). So the original Moore-paradoxical sentence really is contradictory: It asserts P but then tentatively denies P.
Problems: (i) This doesn’t explain the irrationality of Moore-paradoxical thoughts, (ii) it only explains #1 above, not #2-7, (iii) we could just invent a term, say “schmelieve”, which by stipulation is used to ascribe a belief to oneself, then imagine someone saying, “It is raining, but I do not schmelieve that”. This would still raise the original puzzle.
(b)
Some people say there is a linguistic rule that you’re not allowed to assert things you don’t know to be true. Sentences 1-7 violate this norm: either the speaker fails to know the first conjunct, or the second conjunct is false. Either way, the conjunction is not known.
Problem: This fails to explain the irrationality of Moore-paradoxical thoughts.
(c)
Some people say the problem is that beliefs (or at least, the kind of beliefs that are aptly expressed by assertions) are inherently self-conscious. Therefore, it would be impossible for the speaker to believe the first half of sentence (1) and also believe the second half. If you believe that it’s raining, you’ll also know that you believe this. Therefore, the sentence could not be sincerely asserted.
Problem: This doesn’t explain #2-7, since none of those other conditions are inherently self-intimating.
(d)
G.E. Moore addressed his own puzzle by hypothesizing that in asserting that P, one is generally implying that one knows that P. Thus, in sentence (1), the first half implies something that the second half denies.
Problem: This sounds right, but it remains to be explained why, in asserting that P, one must always be implying that one knows it. Why can’t one just cancel the implication by saying, “I don’t mean to imply that I know that”?
2. The Norm of Belief
Now to explain the idea of “the norm of belief”. But first,
The Norm of Assertion
Some social practices have constitutive norms. These are norms that you have to acknowledge in order to be engaged in the practice. Example: The constitutive norms of chess include that bishops can only move diagonally, that white and black take alternate turns, etc. If you don’t acknowledge those, then you’re just not playing chess. (Contrast rules of strategy like, “You should try to take control of the center of the board.” If you fail to acknowledge those, then you can still be playing chess; you’re just not very good at it.)
Notice, btw, that the existence of constitutive norms does not mean that no one ever violates them. You can violate the constitutive norms of chess and still be playing chess; you’re just cheating. But if you don’t even accept them as the rules applicable to your behavior, then you’re not playing chess.
We also have a social practice of asserting things. Maybe this also has constitutive norms. Some think it has the following constitutive norm: You’re only supposed to assert things that you know. This leads to the confusing slogan (which sounds like a category error), “Knowledge is the norm of assertion.”
(Btw, note that the norm is not “only assert things that you think you know”. It is “only assert things that you actually know”. So if you think you know P, but you’re wrong, and you assert P, then you violated the norm.)
If you reject this rule as applicable to your utterances, then you’re not making assertions. This might happen if, e.g., you’re telling a (fiction) story or reciting lines for a play.
The Norm of Belief
Maybe there is a somewhat similar kind of norm applicable to believing. Believing isn’t a social practice per se, but maybe there is nevertheless a norm built into believing, which is part of what makes a given mental state count as a genuine belief.
Maybe the norm is, “Believe P only if you know that P” (or, believe P only if that belief would count as knowledge). That could explain the knowledge norm for assertion: assertion is the conventional expression of belief; since belief is governed by a knowledge norm, so is assertion.
So that’s basically my idea. More precisely, I endorse
The Metacoherence Norm: Categorically believing that P rationally commits you, on reflection, to epistemically endorsing that belief.
The Endorsement Theory of Knowledge: Knowledge ascription is the most comprehensive epistemic endorsement.
To explain: “Categorical” belief is a particularly strong form of belief (as opposed to mere tentative belief); the kind of belief you have to have in order to count as knowing something. It’s also the kind of belief that is conventionally expressed by making an outright assertion. If you have such a belief, and you think about it, then you must endorse that belief; otherwise, you rationally have to give it up. (I inserted “epistemically” because you don’t have to endorse the belief prudentially or morally.)
About the endorsement theory: When you say that someone “knows P”, what you’re doing is endorsing their belief that P in all epistemic respects. I.e., you’re saying the belief is epistemically good and not in any way epistemically defective. (But only “defects” that would justify giving up the belief count.)
3. Implications
Resolving Moore’s Paradox
The above theory gives a satisfying explanation of Moore’s paradox. If you categorically believe that P, then on reflection you are committed to endorsing that belief as knowledge; but the second conjunct of each of #1-7 denies some necessary condition of knowledge. So you cannot coherently believe, or sincerely assert, the conjunction. The explanation applies to both Moore-paradoxical thoughts and assertions, and it covers all of #1-7.
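The resolution can be sketched schematically (again my notation, offered as a gloss on the two theses):

```latex
% Metacoherence: categorical belief commits you, on reflection,
% to epistemically endorsing that belief:
B_{\mathrm{cat}}\,p \;\Rightarrow\; \text{endorse}(Bp)
% Endorsement Theory: full epistemic endorsement = knowledge ascription:
\text{endorse}(Bp) \;=\; \text{ascribe } Kp
% Hence: believing p outright commits you to taking yourself to know p,
% while the second conjunct of each Moore-paradoxical sentence entails
% \lnot Kp -- an incoherence that reflection exposes.
```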
Understanding Knowledge
This account explains the deep importance of knowledge, since all categorical beliefs are subject to the knowledge norm. You can also use it to evaluate candidate accounts of knowledge: if someone says that condition C is required for knowledge, you can ask whether, if a person outright believes P, they rationally must take their belief to satisfy C. Iff the answer is yes, C is a condition on knowledge.
The Certainty Puzzle
One thing that’s puzzling to me: certainty. It sounds weird to say,
8. It is raining, but it is not certain that it is raining.
And of course, many intuit that knowledge requires certainty, in which case (8) would entail (7), “It is raining, but I do not know that”, which we have said is unassertable & unbelievable.
But this is puzzling for fallibilists (who believe that you can count as knowing stuff despite being fallible about that stuff). Why would believing P commit you to taking that belief to be certain? Maybe if you believe P with maximal confidence, then you have to take P to be certain. But surely we have justified beliefs (of the sort that are apt for assertion and knowledge) in cases where P is less than 100% certain.
E.g., I believe the Earth is round. I also know that the Earth is round, and it’s fine for me to assert, “The Earth is round”. But, at least according to most epistemologists, it is not 100% certain that the Earth is round. There is some extremely tiny but nonzero probability that the Earth is really flat, or a cube, or doesn’t really exist, etc.
My best try at resolving this: Maybe there are two notions of certainty.
a. Maybe, in ordinary life, things are considered to be “certain” as long as they have good enough justification that it makes sense to disregard the alternatives.
b. But in some philosophical contexts, we have a stronger notion of certainty, where what is certain has literally the maximal possible justification, probability 1.
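One probabilistic gloss on the two notions (my sketch; \(t\) is a hypothetical, context-set threshold):

```latex
% (a) ordinary certainty: justification clears a contextual threshold
\mathrm{certain}_a(p) \;\approx\; \Pr(p) \ge t, \quad t < 1
% (b) philosophical certainty: maximal justification, probability 1
\mathrm{certain}_b(p) \;\iff\; \Pr(p) = 1
```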
Maybe (8) is unassertable because it sounds like it denies certainty in sense (a). If something is uncertain in sense (a), you really should not categorically believe it. But if something is uncertain in sense (b), it’s often fine to believe it.
For instance, obviously, the Earth is round, and I know that, even though technically, there is a nonzero probability that the Earth is a cube. I think what I just said is fine and not Moore-paradoxical.
Regarding "P, but I don't believe P": maybe it's relevant to consider some more complex examples.
To me it seems equally weird to say "P and Q, but I don't believe P" or "P or (P and Q), but I don't believe P". However, if we increase the complexity of the left side enough, it no longer seems weird; e.g., "The Riemann hypothesis implies P, but I don't believe P" seems fine even if it turns out that the Riemann hypothesis is a logical truth. (Yes, I know some people think mathematical truths aren't logical, but, if necessary, we can make the sentence say: if the Riemann hypothesis is provable in ZFC, then P.)
I feel like this suggests there is at least an element of consideration of the complexity of the inference involved (and I think the same trick works on the belief side; e.g., "P, but I don't believe P or Q" is still weird).
Regarding the last puzzle: does it raise a problem for your solution that "it's raining, but there is only a 90% chance that's true" also seems weird?
My analysis is that the problem comes from the implied tension. To my ear it seems fine to assert, "It's raining, and I'm only 90% sure." I think the difference is that we make the background assumption that there is some implied level of certainty required to make assertions in this context; so if you don't think the claim reached that level, why did you assert it, and if you do, why did you imply a tension?
I'd like to extend this to the certainty case by suggesting that saying "I'm not certain" in this context carries the Gricean implicature that there is enough reason for doubt that you should take notice. But if that's so, then asserting the claim in the first place violated the norm of helpfulness (if you knew I needed to know with probability at least 99% that it was raining, why would you say it's raining when you only had probability 90%?).