Here I explain when and why simplicity is a theoretical virtue.*
[*Based on: “When Is Parsimony a Virtue?” Philosophical Quarterly 59 (2009): 216-36.]
1. Two Questions About Simplicity
Almost everyone in both science and philosophy agrees that “simplicity is a theoretical virtue”: other things being equal, simpler theories should be preferred to more complex theories. But very few of these people have any idea why this might be true. On the face of it, it is a puzzling idea: Is the assumption that the world is intrinsically more likely to be simple rather than complex? Why would that be? If anything, the world seems more likely to be complex. If the world isn’t more likely to be simple, then why should we prefer simpler theories over more complex ones?
Related to that question is a second question: When is simplicity a theoretical virtue, i.e., for which theories, and for which kinds of simplicity? Are philosophical theories just like scientific theories in this respect? For example, I have been told that I should believe that abstract objects don’t exist because this is “simpler” than the belief that they do exist. I.e., one theory asserts the existence of one fewer type of entity than the other. No one seems to know why this might matter, but apparently some people find it persuasive. My intuitive reaction has always been that this is completely irrelevant, as if someone had said that one should believe nominalism because the name of the theory starts with the letter “n”.
Perhaps if we knew why simplicity mattered, then we would know when it mattered.
2. Theories of Simplicity
2.1. The Empiricist View
Argument: Well, scientists have been using the criterion of simplicity for a while, and science seems to be doing pretty well. So probably, simplicity is somehow indicative of truth.
Sure, but this isn’t very helpful since it doesn’t tell us why simplicity might be indicative of truth. Let’s look for a more explanatory theory.
2.2. Boundary Asymmetry
Here’s a theory: There is a lower limit to how complex a theory can be, but no upper limit (you can always get more complex). In general, if you have a set of alternatives that is unbounded in one direction, the probabilities have to decrease in that direction, in order for the total probability to be finite. E.g., the series 1/2 + 1/4 + 1/8 + … adds up to 1. If you try to make the numbers all the same, or if they increase as the series goes on, then the sum is infinite. Therefore, higher and higher levels of complexity must in general have lower and lower probabilities.
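The point about decreasing probabilities can be checked with a quick numerical sketch (a toy assignment of my own, assuming complexity comes in discrete levels n = 1, 2, 3, …):

```python
# Toy model: complexity levels n = 1, 2, 3, ... with P(level n) = (1/2)**n.
# The partial sums approach 1, so this is a legitimate probability
# distribution even though there are infinitely many levels.
partial = sum((1 / 2) ** n for n in range(1, 51))
print(partial)  # very close to 1

# By contrast, assigning every level the same probability p > 0 fails:
# the partial sums grow without bound, so they can never total 1.
p = 0.001
print(sum(p for _ in range(2000)))  # keeps growing as levels are added
```

The same failure occurs, only faster, if the probabilities increase with complexity; decreasing probabilities are the only way to keep the total finite.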
2.3. The Numerousness Account
Another theory: There are typically more complex theories than simple theories that fit a given set of evidence. Therefore, even if the world is initially equally likely to be complex as to be simple, a randomly chosen simple theory is going to be more probable than a randomly chosen complex theory.
2.4. The Likelihood Account
This is the best theory:
1 Simpler theories tend to have fewer adjustable parameters. These are quantities whose hypothesized values can be adjusted to try to accommodate the evidence. E.g., you can adjust your estimate of the gravitational constant to fit data about objects falling, planetary orbits, etc.
Note: You can generalize this notion from numbers to any assumption that can be varied (consistent with still having basically the same theory) to account for data. E.g., if a detective has a theory about a crime, the hypothesized motivation of the criminal could be considered an ‘adjustable parameter’.
(1) is true because, typically, the more entities you postulate, the more causally or explanatorily relevant properties they have, and each such property is a potential adjustable parameter.
2 Usually, the more adjustable parameters you have, the more possible patterns of data you can accommodate. E.g., if two people cooperated to commit a crime, there is more evidence that they might have left behind than if only one person was responsible.
3 The wider the range of data that a theory can accommodate, the lower the likelihood of the theory for any given set of data that it accommodates.
Here, “likelihood” refers to P(e|h), the probability of evidence e given theory h.
If you add up the likelihoods for each possible set of evidence, they have to sum to 1. (I.e., P(e1|h) + P(e2|h) + … + P(en|h) = 1, where e1, e2, … en are all the possible ways our evidence could have turned out compatible with h.)
That’s why, if h1 is compatible with a smaller number of possible evidence sets than h2 is, then on average, P(e|h1) must be > P(e|h2), for e’s that are compatible with both h1 and h2.
4 Therefore, other things being equal, simpler theories tend to have higher values of P(e|h), where h is the theory and e is some evidence that h accommodates.
5 Therefore, other things being equal, simpler theories tend to be better supported by evidence that they accommodate. (From 4.)
This follows from Bayes’ Theorem, which tells us P(h|e) = P(h)*P(e|h) / P(e).
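Steps 3–5 can be sketched numerically (the numbers here are invented for illustration, not drawn from any real case): suppose the simpler theory h1 is compatible with 4 possible evidence sets and the more complex h2 with 20, each spreading its likelihood evenly over the evidence sets it accommodates, and the two start with equal priors.

```python
# Toy numbers: h1 accommodates 4 possible evidence sets, h2 accommodates 20,
# and each theory spreads its likelihood evenly over the sets it accommodates.
p_e_given_h1 = 1 / 4    # P(e|h1) for each e compatible with h1
p_e_given_h2 = 1 / 20   # P(e|h2) for each e compatible with h2

# Each theory's likelihoods sum to 1 across its accommodated evidence sets.
assert abs(4 * p_e_given_h1 - 1) < 1e-12
assert abs(20 * p_e_given_h2 - 1) < 1e-12

# Equal priors: neither theory is favored before the evidence comes in.
prior_h1 = prior_h2 = 0.5

# We observe some evidence e that BOTH theories accommodate.
# P(e) by the law of total probability (pretending h1 and h2 exhaust the options):
p_e = prior_h1 * p_e_given_h1 + prior_h2 * p_e_given_h2

# Bayes' Theorem: P(h|e) = P(h) * P(e|h) / P(e).
posterior_h1 = prior_h1 * p_e_given_h1 / p_e
posterior_h2 = prior_h2 * p_e_given_h2 / p_e

print(round(posterior_h1, 3), round(posterior_h2, 3))  # 0.833 0.167
```

Despite equal priors, and evidence that both theories fit equally well, the theory that could have accommodated fewer possible outcomes ends up far better supported.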
Important note: This theory does not say that simpler theories are a priori more probable than more complex theories. (They could be more probable for some other reason, but this theory doesn’t imply that.) It only says that simpler theories tend to be more easily supported, provided that they accommodate our evidence. There’s no advantage for the simpler theory if it doesn’t accommodate the evidence.
2.5. Example: Copernicus vs. Ptolemy
In Copernican astronomy, the sun is at the center of the cosmos, with the Earth and planets orbiting it. So for each planet, you have the radius and speed of the orbit as adjustable parameters. (Later, Kepler turned the orbits into ellipses, which gives you another parameter for each orbit.)
By contrast, in Ptolemaic astronomy, the sun and planets orbit the Earth. However, the Earth was a little off-center (not at the center of the planetary orbits). Also, each of the planets was moving on a smaller circle, the “epicycle”, while moving in the bigger circle around the Earth. This gives you the radius and speed of each orbit, plus the radius and speed of the epicycle, plus the distance of the Earth from the center of the orbit, as adjustable parameters. So Ptolemy had more adjustable parameters and was more complex.
At the time people were debating these two theories, they did roughly equally well at accommodating the data about where you would see planets in the night sky. (https://inference-review.com/article/ptolemy-versus-copernicus)
But this predictive accuracy is more impressive for Copernicus, because he does it with fewer adjustable parameters. Ptolemy’s system, with its extra parameters, could in principle accommodate a much wider range of possible observations (i.e., we could have seen a wider range of possible patterns of planetary positions in the sky and still been consistent with Ptolemy’s basic framework). This means that Copernicus predicted our particular observations more strongly than Ptolemy, which means that our observations provided better support for Copernicus than Ptolemy.
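The link between adjustable parameters and the range of accommodatable data can be made vivid by brute-force enumeration (a toy model of my own, not the astronomical case): compare a model with one adjustable parameter to one with a parameter per observation, over three observations that each come out 0 or 1.

```python
from itertools import product

SLOTS = 3  # three observations, each coming out 0 or 1

def patterns(model, param_space):
    """All distinct data patterns a model can be adjusted to produce."""
    return {tuple(model(params, slot) for slot in range(SLOTS))
            for params in param_space}

# Simple theory: one adjustable parameter c; predicts c at every slot.
simple = patterns(lambda params, slot: params[0],
                  list(product([0, 1], repeat=1)))

# Complex theory: one adjustable parameter per slot.
complex_ = patterns(lambda params, slot: params[slot],
                    list(product([0, 1], repeat=SLOTS)))

print(len(simple), len(complex_))  # 2 8
```

The complex model can be tuned to fit any of the 8 possible data patterns, so a fit tells us little; the simple model fits only 2 of them, so when it does fit, that is much stronger evidence in its favor. The same logic applies, with bigger numbers, to Ptolemy’s extra epicycle parameters.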
3. The Failure of Simplicity in Philosophy
Great, now let’s think about how all this applies to simplicity of philosophical theories.
3.1. Nothingism
Nothingism is the view that nothing exists. On this view, there are neither physical nor mental things, neither abstract nor concrete things, neither actual nor potential things, etc. There is nothing whatsoever, not even this statement.
Obviously, nothingism is absurd. Nevertheless, is it true that there is at least some reason to believe it, namely, that it is the simplest metaphysical theory (even if this reason is outweighed by other reasons against)? Or is there in fact no reason at all to believe nothingism?
My sense is that there is no reason whatever to believe nothingism. This fits with the above idea that simplicity only matters given that a theory accommodates the data.
But now let’s turn to views that some philosophers actually hold.
3.2. Nominalism vs. Realism
Basically, “realists” think that abstract objects (e.g., the number 2, or the property of being red, considered as distinct from any particular red things) exist, whereas “nominalists” think that abstract objects don’t exist (whatever that means). Perhaps the single most popular argument for nominalism is that it is simpler than realism, as it posits fewer kinds of existing things.
My own intuitive reaction is that nominalism is absurd and its ‘simplicity’ constitutes precisely zero evidence for it. But let’s think in terms of the above theories of simplicity:
1 The Empiricist Account: I’m afraid philosophy does not have the kind of track record of success that science has. So, even if philosophers have been using the criterion of simplicity, we don’t have a great reason for thinking that criterion is truth-conducive in philosophy.
2 The Boundary Asymmetry Account: With respect to the question, “Do abstract objects exist?”, there are just two answers, “yes” and “no”. Either there are only concrete objects, or there are concrete objects plus abstract objects. There doesn’t seem to be any unbounded set of increasingly complex theories relevant to this issue that we need to assign probabilities to.
3 The Numerousness Account: Again, there just seem to be two relevant alternatives to consider. You could divide realism into sub-alternatives, say, immanent (Aristotelian) and transcendent (Platonic) realism. But you could also divide nominalism into several alternatives (to use David Armstrong’s taxonomy: predicate nominalism, concept nominalism, class nominalism, resemblance nominalism, and ostrich nominalism). Then it’s far from clear that there are more realist theories than nominalist theories. Anyway, that wouldn’t help us say whether realism in general was more or less likely than nominalism in general.
4 The Likelihood Account: Okay, this is the interesting one. First, what’s the “evidence” that the two theories are supposed to account for? I guess it’s common sense judgments using abstract terms, e.g., that lemons and the sun are both yellow; that 2 is more than 1; that some colors go together better than others do; etc. Realists claim that these propositions entail such things as “there is something that lemons and the sun have in common”, “2 exists”, and “colors exist”.
Now, what about the nominalist? Well, the core debate is actually about whether the nominalist can accommodate the above common sense judgments. Either they can, or they can’t.
a) If they can’t accommodate the common sense judgments, then nominalism just conflicts with the data, and therefore it is not supported at all by simplicity considerations, since simplicity only matters if you accommodate the data. (Recall the “Important Note” in sec. 2.4 above.)
b) If the nominalist can accommodate the common sense judgments, then they can accommodate any possible facts. E.g., if nominalism is somehow compatible with the fact that there are infinitely many prime numbers, then I don’t know what possible data it wouldn’t be compatible with. So simplicity is again completely irrelevant, since simplicity only matters because simpler theories tend to accommodate a smaller range of data. If you’re in an unusual case in which the simpler theory accommodates the same range of data as the more complex theory, then simplicity is irrelevant.
3.3. Other Philosophical Theories
That should give a sense of why I think appeals to simplicity are usually completely worthless in philosophy. But I’m not saying they must always be worthless. It depends on whether you can give a likelihood argument as discussed in 2.4 above. E.g., I think it’s an advantage for Rule Utilitarianism in ethics that it accommodates our ethical intuitions in a wide range of cases in a pretty simple way (if indeed this is true). That’s an advantage in comparison with, say, a pluralist deontological theory that posits 17 different, irreducible duties, each of which has its own degree of stringency in each different circumstance.
The case of dualism vs. physicalism is interesting, but it’s too complicated to discuss right now. I’ll leave that for you to think about.
I’m way out of my depth here. What would it mean for abstract objects to be real or unreal? Is unreal the same as imaginary?
If something is real, we can make propositions describing it, and be correct or incorrect. We can apply categories and logic to it.
But we can be correct or incorrect about imaginary things, too. “Harry Potter has a scar on his forehead” is true, but it isn’t about a real person, it is about a character from fiction. “Socrates had a big nose” could be cashed out in observations about a historical figure, while the description of Harry Potter gets cashed out by reading Rowling's canonical fiction. Harry Potter the person isn’t real, but Harry Potter the fictional character is real. Santa Claus is not a historical figure, but a cultural ... construction? Product? Entity? A character from a story?
What follows from assuming x is real, or x is unreal? What sort of evidence will give us clues?
Physical objects seem real because we can manipulate them. Samuel Johnson famously “refuted” Berkeley by kicking a rock.
A few things seem true because their logical negations lead to contradictions. Are they real because we can’t manipulate them?
Is an accusation of unreality just a disguise for questions about the referents of propositions, like “The present king of France is bald”? No such person exists, he isn’t real, so the statement fails to express a proposition.
Is the English language real? I don’t really know what that means.
Re: Nominalism vs Realism
Is there a prima facie reason to exclude the possibility that concrete things do not exist, but abstract things do? Every time we look closely at something that appears to be concrete, it always seems to turn out to be an abstraction -- an illusion created by a confluence of events happening at a much smaller scale.
Even when you get down to atoms and quantum fields, all we can really talk about is the statistical patterns we observe when we arrange events in certain ways.
Is it inconceivable that it's abstractions all the way down? Need there be a concrete base layer?