The “Simulation Argument” was first proposed by Nick Bostrom in his 2003 paper “Are You Living in a Computer Simulation?”. While typically taken to show that we are, in fact, living in a simulation, Bostrom’s argument actually holds that one of the following three possibilities obtains (quoting from Bostrom’s paper):
1 – “The human species is very likely to go extinct before reaching a ‘posthuman’ stage.” (Elsewhere, Bostrom refers to the “posthuman stage” as “technological maturity”, which is the term I’ll use here.)
2 – “Any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof).” (Bostrom later refers to these as “ancestor-simulations”.)
3 – “We are almost certainly living in a computer simulation.”
The three possibilities suggest the rough outline of the argument. If we don’t go extinct before reaching technological maturity (ruling out option 1), then we will almost certainly reach technological maturity, i.e., a stage at which we’re able to run ancestor-simulations. At that point, it’s possible there’s some reason we likely won’t run these simulations despite being able to (option 2).
But if there isn’t, and there’s no reason to think it unlikely for a species like ours to reach this point, then we have to consider the possibility that this has happened before and that we’re a simulation. The remaining step is to realize that any given species might create thousands upon thousands of these ancestor-simulations and that each of those may create thousands of their own. So it’s not just that we might be a simulation, but rather, that statistically speaking, we almost certainly are (option 3).
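The statistical step above can be sketched with a toy calculation. The numbers below are purely illustrative assumptions (they aren’t figures from Bostrom’s paper), but they show why even modest parameters make simulated observers vastly outnumber non-simulated ones:

```python
def fraction_simulated(real_civilizations: int,
                       sims_per_civilization: int,
                       generations: int) -> float:
    """Fraction of civilizations that are simulations, assuming each
    civilization (real or simulated) runs the same number of
    ancestor-simulations, nested `generations` levels deep."""
    simulated = 0
    current_layer = real_civilizations
    for _ in range(generations):
        # Each civilization at this layer spawns its own simulations.
        current_layer *= sims_per_civilization
        simulated += current_layer
    return simulated / (real_civilizations + simulated)

# One real civilization, 1,000 simulations each, nested two levels deep:
# over 99.9999% of all civilizations are simulations.
print(fraction_simulated(1, 1000, 2))
```

On these (made-up) numbers, a randomly chosen civilization is almost certainly a simulated one, which is exactly the move from “we might be simulated” to “statistically, we almost certainly are.”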
It’s a powerful argument. And, to Bostrom’s credit, he doesn’t overstate the case for option 3. He doesn’t lean heavily on any of the options. Even so, there are quite a few important possibilities that Bostrom fails to take (sufficiently) into account.
A version of Option 1 is way more likely than it seems
The way Bostrom sets up Option 1 is, upon reflection, rather odd. The high-level structure of his trilemma is:
Option 1: We’ll never be able to do it.
Option 2: Even if we can, we won’t do it.
Option 3: It’s been done.
But there are reasons we may never be able to do it other than going extinct – Bostrom unnecessarily weakens his argument by making Option 1 much narrower than it need be. Most obviously, it may well turn out that it’s impossible to simulate consciousness.
Bostrom does touch on this consideration in the original paper:
A common assumption in the philosophy of mind is that of substrate-independence…. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.
Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall here take it as a given.
So Bostrom does state the assumption upfront; the question is whether he’s justified in treating it as practically a given. To be blunt, “although it is not entirely uncontroversial” is a pretty big understatement. While I’m not terribly moved by John Searle’s Chinese Room Argument against substrate-independence, there are those who are, and there’s certainly no consensus on precisely how to refute it.
But more importantly, there’s no consensus on whether consciousness is a property of any kind of matter whatsoever. For better or worse, ever since Chalmers’ formulation of the “hard problem”, there’s been a major resurgence of dualism, not to mention panpsychism. If either of these turns out to be the case, it becomes a lot less obvious whether consciousness can be recreated in the way the simulation hypothesis requires.
There’s also Plantinga’s evolutionary argument against naturalism, and related arguments from epistemic self-defeat, which cause major difficulties for the naturalism that Bostrom offhandedly presumes.
The fact is, we simply aren’t at a point where there’s any consensus whatsoever that consciousness is the sort of thing that can be recreated via a computer. This makes the likelihood that we’ll never be able to do so much higher than the likelihood of the narrower option that we’ll go extinct before being able to.
Option 4: the argument is wrong
As far as I can tell, quibbles aside, Bostrom’s argument is sound. But maybe I’m missing something. Consider the following possibilities:
- There is a fairly simple mistake in the argument that the philosophical community has, for some reason, failed to notice.
- Our reasoning abilities are insufficiently advanced to notice the mistake in the simulation argument.
- Our reasoning abilities are insufficiently advanced to notice why Bostrom’s entire conceptualization (as well as that of all or most current philosophy) is utterly misguided.
- There’s an important metaphysical or physical fact we haven’t discovered that, if we knew it, would make it very clear why Bostrom’s argument is wrong or missing something crucial.
- Some version of good old-fashioned radical skepticism is true, we don’t know anything, and Bostrom’s argument is a collection of senseless markings by a creature under the illusion of “rationality” (and, yes, so is this post, kjhskhja).
You should object at this point: well, yes, we can’t utterly disprove any number of radically skeptical hypotheses, but that can’t be counted against an argument. Should every argument include a qualification that the skeptical hypotheses remain a possibility?
The principle of proportional epistemic humility
Obviously, no, we needn’t always stop to point out “or maybe we don’t know anything”. But consider the following scenario:
Imagine you’re a mathematician. While working on whatever it is mathematicians work on, you stumble upon a proof that 2+2=5. You obviously check, double check, triple check, quintuple check. You bring your findings to colleagues, then to the mathematical community at large. No one can find the error. What’s going on?
Well, maybe 2 + 2 really does equal 5. But you’d be kinda crazy not to at least consider some other possibilities: that everyone is missing your error; that you may have lost your mind; that you’re hallucinating; that this is a dream; that someone is playing a trick on you; that there are fundamental and deep errors at the very foundations of our assumptions about proof and mathematical truth; etc.
In fact, we don’t need to imagine such a silly hypothesis. Many of us have really had the experience of correctly inferring that, since what’s happening is so ridiculous, we must be dreaming.
The point is this: it is always possible that we’re wrong. The skeptical doubts – ranging from “what if I’ve made a mistake I’ve failed to notice?” to “what if Pyrrho himself is controlling my mind from beyond the grave?” – can never be fully taken off the table. They’re mostly remote enough that we need not think about them much. But, whenever we seem to discover strong evidence for a conclusion that seems wildly counterintuitive, it’s natural to consult at least the more plausible of the skeptical scenarios.
There’s a difference between acknowledging that an argument is strong and actually believing the conclusion of that argument. The gap sometimes lies in how, well, believable the argument is.
It’s worth adding that the skeptical route is especially tempting for a statistical argument like Bostrom’s. After all, compared to the likelihood of the options he lays out, how unlikely is it that his argument is wrong and we’re failing to see it? It’s not an unprecedented state of affairs, to say the least.
The simulation argument, as far as I can tell, is very strong. But it only tells us that one of three possibilities is true. The first of them, taken broadly, is actually not all that unlikely. But even if the first two are, strictly speaking, wrong, I’m not convinced the third option follows. Why? Well, it’s really, really wacky. And when we get that wacky, I can’t help but start considering other wacky possibilities as well.