Vagueness: The Sorites Paradox

Imagine we have 0 grains of sand. Do we have a heap of sand? Of course not! Well, what if we add one grain? We obviously still do not have a heap. Okay, what if we add one more? One more after that?

No matter how many grains of sand we have, adding just one more will never turn a non-heap into a heap. This is called the “tolerance principle,” and it is the defining feature of vague properties. It says that a small enough change can never alter the applicability of a vague property.

Say you have a red shirt. Change the frequency of the light it reflects by an imperceptible amount. Obviously, the shirt is still red. Take someone who is sober. One ml of beer will not make that person drunk.

A problem appears when we compound these small increments. Here’s a version of the argument:

1) 0 grains of sand is not a heap (premise)

2) 1 grain of sand is not a heap (by #1 & tolerance principle)

3) 2 grains of sand is not a heap (by #2 & tolerance principle)

…

10001) 10000 grains of sand is not a heap (by #10000 & tolerance principle)
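
To see how mechanically the argument grinds forward, here is a minimal Python sketch of the progression. Everything in it is illustrative: the loop just applies the tolerance principle ten thousand times to the premise that 0 grains is not a heap.

```python
def sorites(steps: int) -> bool:
    """Whether `steps` grains count as a heap, per the argument above."""
    is_heap = False        # premise: 0 grains is not a heap
    for _ in range(steps):
        is_heap = is_heap  # tolerance: one more grain changes nothing
    return is_heap

print(sorites(10000))      # False -- "10000 grains is not a heap"
```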

Welcome to the sorites paradox (from the Greek sōros, “heap”), the argument that allows us to prove that a 90-year-old woman is a child, a blade of grass is red, and Danny DeVito is tall. It was invented by Eubulides sometime in the 4th century BCE, around the same time he invented the Liar and a few other paradoxes. Though it was largely ignored by ancient, medieval, and early modern philosophers, it has seen a huge surge of interest since the mid-1970s. It is arguably the most difficult paradox facing contemporary logic. There’s nothing even close to a consensus on how to deal with it.

Acknowledging that vague properties are vague does not solve the problem

It’s tempting to say something like, “well, the properties in question are vague, that’s why this is happening. Logic is supposed to deal in precise terms.”

This is only partially correct. Logical systems are precise mechanisms designed to avoid any vagueness and ambiguity. That doesn’t mean that the objects of logical analysis must be precise. Consider the following logical rule: if p&q is true, then p is true. That is a perfectly precise rule. You can substitute whatever you like for p and q – vague objects, fictional objects, whatever.

Of course, we can create an idealized logic that does not allow vague properties. In such a logic, we couldn’t say “the apple is red,” because “red” is vague. We could make up a precise version of “red” with a precise range of frequencies that qualify. Let’s call this red1. Red1 doesn’t create a sorites problem because the tolerance principle is not true of it. There is some precise frequency such that by changing it a tiny amount, the resulting frequency is not red1. The sorites progression gets blocked along with the paradoxical result.
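
Here is a hedged sketch of how a precisified predicate like red1 might look in code. The frequency band (400–480 THz) is stipulated purely for illustration; the point is that any sharp band falsifies the tolerance principle.

```python
def is_red1(freq_thz: float) -> bool:
    """True iff the frequency falls inside the stipulated red1 band."""
    return 400.0 <= freq_thz <= 480.0

print(is_red1(480.0))    # True
print(is_red1(480.001))  # False: an imperceptible change flips the verdict,
                         # so the tolerance principle fails for red1
```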

The problem with this is that “red1” is not what we mean when we say “red.” If the difference between two colors is imperceptible to the eye, they are the same color. That’s an important feature of the word “red.” The vagueness in our language is not an accident or deficiency. It’s necessary.

The issue is compounded when we consider that most of our properties are vague. Being old, being a person, being alive, being a plane, being fat, being liquid, being myself – for any of these, a version of tolerance applies. If logic is going to represent validity as we actually use it and understand it, it can’t disallow vague predicates.

The solutions

There are many solutions to the paradox. I can’t mention them all here, and the ones I do mention, I can’t go into in any detail. Nonetheless, I can give a quick overview of the more significant attempts.

Supervaluationism: A “sharpening” is any acceptable precisification of a vague property. For example, “is 5 ft. 10 in. or taller” is a sharpening of “is tall (for a human).” “Contains 213 grains of sand or more” is a sharpening of “is a heap.” Now, some precisifications are acceptable and some aren’t. “Contains 2 grains of sand or more” is not a precisification anyone would accept, and so does not count as a sharpening of “is a heap.”

Supervaluationism says: if the property applies under all acceptable sharpenings, then the application of the property is “supertrue.” Consider person A (7 ft. 4 in. tall) and person B (5 ft. 11 in. tall). “Person A is tall” is supertrue. There is no reasonable sharpening under which it is false. However, “person B is tall” is not supertrue. There are reasonable sharpenings (e.g., “6 ft. or taller”) of “is tall” under which person B does not qualify.

For supervaluationists, truth is supertruth. In other words, if the property applies under all sharpenings, then it applies; if it doesn’t, it doesn’t.
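
As a toy model, supertruth can be computed by quantifying over a set of sharpenings. The cutoffs below (in centimeters) are hand-picked for illustration; nothing about supervaluationism fixes them.

```python
SHARPENINGS_CM = [178.0, 180.0, 183.0, 185.0]  # assumed "acceptable" cutoffs for "is tall"

def supertrue_tall(height_cm: float) -> bool:
    """'x is tall' is supertrue iff it holds under every acceptable sharpening."""
    return all(height_cm >= cutoff for cutoff in SHARPENINGS_CM)

print(supertrue_tall(223.5))  # True: person A (7 ft. 4 in.) is tall on every sharpening
print(supertrue_tall(180.3))  # False: person B (5 ft. 11 in.) fails some sharpenings
```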

Three-valued logic: We normally think of propositions as being either true or false, with nothing in between. But what if there were a third, intermediate value? Then we could say that “1 grain of sand makes up a heap” is false, “10000 grains of sand make up a heap” is true, and “80 grains of sand make up a heap” has an intermediate truth value.
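
A sketch of what a three-valued verdict might look like, with the band boundaries (50 and 500 grains) stipulated purely for illustration:

```python
def heap_3val(n: int) -> str:
    if n < 50:
        return "false"
    if n < 500:
        return "indeterminate"  # the third, intermediate truth value
    return "true"

print(heap_3val(1))      # false
print(heap_3val(80))     # indeterminate
print(heap_3val(10000))  # true
```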

Fuzzy logic: Why stop at three values? Fuzzy logic posits a whole spectrum of degrees of truth, represented by the real numbers from 0 to 1. Total falsity is represented by a truth value of 0, total truth by 1, and the real numbers in between represent an infinite spectrum of intermediate degrees of truth. So, we might say: “Immanuel Kant is a philosopher” has a value of 1, “Max Weber is a philosopher” has a value of 0.76, “Mike Tyson is a philosopher” has a value of 0.12108, and “my toaster is a philosopher” has a value of 0.
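
A sketch of a fuzzy membership function for “is a heap.” The linear ramp and the anchor points (50 and 500 grains) are my own illustrative choices, not something fuzzy logic dictates:

```python
def heap_degree(n: int) -> float:
    if n <= 50:
        return 0.0
    if n >= 500:
        return 1.0
    return (n - 50) / (500 - 50)  # ramp from 0 to 1 between the anchors

print(heap_degree(1))      # 0.0
print(heap_degree(275))    # 0.5: exactly half true
print(heap_degree(10000))  # 1.0
```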

Epistemicism: The tolerance principle is false. For any vague property, there is a precise cutoff at which the property applies or doesn’t. We just can’t know where the precise cutoff is. In other words, there is some exact number of grains of sand, n, such that n grains constitute a heap, but n-1 grains do not. Maybe that number is 65. Maybe it’s 112. It’s an objective and precise number that we can never discover.

Dialetheism: Dialetheism is the view that some contradictions are true. Applied to vagueness, this might look like “1 grain of sand makes up a heap” is false (only), “10000 grains of sand make up a heap” is true (only), and “80 grains of sand make up a heap” is both true and false at the same time.
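
A sketch of a dialetheist assignment, in the spirit of paraconsistent logics like LP: each claim gets a non-empty set of truth values, and borderline cases get both. The grain boundaries are again stipulated for illustration.

```python
def heap_lp(n: int) -> set:
    if n < 50:
        return {"false"}          # false only
    if n < 500:
        return {"true", "false"}  # a true contradiction: both at once
    return {"true"}               # true only

print(heap_lp(1))      # {'false'}
print(heap_lp(80))     # {'true', 'false'}
print(heap_lp(10000))  # {'true'}
```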

Contextualism: This view is best illustrated via the forced march sorites, which is just a sorites progression conducted by a sadistic interlocutor:

Imagine a strip of little patches laid out in front of you. The leftmost is unquestionably red, the rightmost is unquestionably yellow. In between, each patch is a little yellower than the one before as you move from left to right. The color shift between adjacent patches is so small that you can’t notice the difference between one patch and the next.

Now imagine you’re asked the color of the first patch. You say “red.” Next one? “Red.” Next one? “Red.” At some point, however, you’ll have to change your answer. You may say “uhh, kinda red?” or “orangeish red” or whatever. Your interlocutor responds: “But you said the previous patch was red. Don’t the two look exactly the same?” You’ll have to say “yes.” Perhaps you’ll change your answer and say the previous one was actually “orangeish red” as well. What’s going on here?

The contextualist says that we always evaluate vague predicates under some context. A context is similar to a sharpening. One difference is that contexts never distinguish between adjacent members of a sorites progression. Unconscious context shifts explain why we must change our answers even though two adjacent patches always look the same.

This is how it plays out: you evaluate the patches using the same context until patch 38, and you call all of them “red.” By the time we get to 39, you’ve stretched out your context. Without realizing it, you switch to a new context, under which patch 39 is “orangeish red.” Under this new context, patch 38 is also “orangeish red.” In fact, under this new context, the patches don’t look “red” again until you go all the way back to, say, patch 26.

So, contexts are like temporary vague property evaluation schemes that shift without us realizing as the objects we’re evaluating change.
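
A toy forced march makes the idea concrete, assuming contexts behave as described: a context anchors on a reference patch, never distinguishes adjacent patches, and silently re-anchors once the current patch has drifted too far. The patch count, drift limit, and label list are all illustrative.

```python
LABELS = ["red", "orangeish red", "orange", "orangeish yellow", "yellow"]

def forced_march(num_patches: int = 60, drift_limit: int = 12) -> None:
    anchor, idx = 0, 0
    for patch in range(num_patches):
        if patch - anchor > drift_limit and idx < len(LABELS) - 1:
            anchor, idx = patch, idx + 1  # the unconscious context shift
        print(f"patch {patch}: {LABELS[idx]}")

forced_march()
```

Notice that even in this toy model the re-anchoring happens at a precise patch, which foreshadows the problem discussed next.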

Precise cutoffs

All of the above solutions work, to some extent: each one stops the sorites progression. There are many debates about how much each revises classical logic, what the pros and cons of those revisions are, and how justified they are.

Whatever their various virtues, these solutions all share a major problem: they all involve precise cutoffs.

What’s so counterintuitive about denying the principle of tolerance is the existence of a precise cutoff. If we deny tolerance for “is a heap,” then we have to accept that there’s some specific number of grains of sand that turns something from a non-heap to a heap. That’s insanely counterintuitive. It’s just not how the concept “heap” works.

All the solutions move and change the cutoff in different ways, but none manages to get rid of it. Supervaluationists have to accept that there’s a precise number of grains of sand, n, such that “n is a heap” is supertrue, but “n-1 is a heap” is true under most, but not all, sharpenings, and so is not supertrue. Three-valued logic has a precise cutoff between “n is a heap” being true and “n-1 is a heap” having the intermediate truth value. Fuzzy logic has a precise cutoff between “n is a heap” having truth value 1 and “n-1 is a heap” having a truth value less than 1 (maybe 0.9999). Epistemicism’s whole point is that there is a precise cutoff. Dialetheism features a cutoff between “n is a heap” being true (only) and “n-1 is a heap” being both true and false. Finally, contextualism features precise points at which contexts shift.

For this reason, none of these solutions quite solves the paradox. The whole reason the paradox is so difficult to deal with is that it’s so counterintuitive to accept precise cutoffs for vague predicates.

Yet, if we think about the forced march sorites, it seems impossible to avoid cutoffs. We’re asked the same question at each point – what color is this patch? At some precise point, we have to change what we say. Otherwise, we’ll be forced to say that the final (yellow) patch is red, which is absurd. That precise point is a cutoff.

Perhaps there are solutions yet to be discovered that can do away with precise cutoffs. Perhaps the key is rethinking how we approach logic, language, or evaluation. I’ll write more about this paradox in the future and do a fuller survey of each of the solutions. For now, I’ll just admit I have no idea what the best solution is.

 
