Nick Bostrom is an Oxford philosopher known for work on ‘anthropic reasoning’, warnings about the dangers of superintelligent AI, and the simulation argument (see my thoughts on the latter). He recently released a new working paper, ‘The Vulnerable World Hypothesis’, which makes a strong argument for strengthening global state power. Anarchists and libertarians of all stripes should consider the argument and address it, as it constitutes a serious challenge to their program.
In the paper, Bostrom argues as follows: think of human technological development as an urn filled with balls. Most balls are white: these are mostly beneficial, or at least harmless, technological developments. A few are gray: they’re dangerous and have potentially catastrophic consequences, but either act on a long enough timeline that it’s possible to prevent those consequences, or are otherwise containable (fossil fuels and nuclear weapons might both fall under this category). Presumably, there are also some black balls: technologies whose discovery would make it almost certain that humanity suffers a catastrophic, possibly species-annihilating, event within a very short span of time, unless it were possible to contain them very quickly and effectively.
Bostrom illustrates the black-ball possibility vividly: we had no reason to assume in advance that something like nuclear weapons, if they were possible at all, would be difficult rather than easy to make. Had it turned out that nukes were fairly easy to build in your own basement, we might not be around right now to talk about it.
To be sure, black balls are a very small minority in the urn. But it’s also true that we’re pulling balls out with greater and greater frequency as time goes on. Assuming there are any black balls in there at all, it seems almost inevitable that we’ll pull one out sooner or later. (The most widely discussed current black-ball candidate is the superintelligent-AI apocalypse I alluded to earlier.)
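The urn intuition can be made precise with a toy probability model. Here is a minimal sketch, assuming (purely for illustration, since Bostrom assigns no numbers) that each technological draw independently has some small probability p of being a black ball; even a tiny p compounds toward near-certainty as the number of draws grows:

```python
def prob_black_ball(p: float, n: int) -> float:
    """Probability of drawing at least one black ball in n independent
    draws, each with probability p of being black: 1 - (1 - p)^n."""
    return 1 - (1 - p) ** n

# With an illustrative p = 0.001, the cumulative risk climbs steadily
# as humanity keeps drawing from the urn.
for n in (10, 100, 1000, 10000):
    print(f"{n:>5} draws: {prob_black_ball(0.001, n):.3f}")
```

The exact value of p does the real work here, and nobody knows it; the point of the sketch is only that for any fixed p > 0, the probability of eventually drawing a black ball tends to 1 as draws accumulate.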
The annihilation of the species is one of the worst things that could possibly happen. Almost any manner of temporary or limited suffering would seem to be worthwhile, if it sufficiently diminished the possibility of a terrible technology wiping us all out. From this, Bostrom argues against what he calls the ‘semi-anarchic default condition’, characterized by all three of the following (quoting from the paper):
(A) Limited capacity for preventive policing. States do not have sufficiently reliable means of real-time surveillance and interception to make it virtually impossible for any individual or small group within their territory to carry out illegal actions—particularly actions that are very strongly disfavored by >99% of the population.
(B) Limited capacity for global governance. There is no reliable mechanism for solving global coordination problems and protecting global commons—particularly in high-stakes situations where vital national security interests are involved.
(C) Diverse motivations. There is a wide and recognizably human distribution of motives represented by a large population of actors (at both the individual and state level)—in particular, there are many actors motivated, to a substantial degree, by perceived self-interest (e.g. money, power, status, comfort and convenience) and there are some actors (“the apocalyptic residual”) who would act in ways that destroy civilization even at high cost to themselves.
It’s not a stretch to see where he’s going. If we’re going to protect against the end of the species, we’re going to have to give up at least one of (A)–(C). As you might imagine, this would generally involve exploring possibilities that are the very antithesis of the anarchist and libertarian programs.
The paper is nuanced and well-argued. I won’t describe it any further here, but urge interested people—especially anarchists and libertarians—to read it for themselves.
It may be thought that this isn’t a new problem for anarchism. After all, national defense has long been considered the ‘hard problem’ among some anarchists. And anarchists are used to hearing ‘but what about nukes?’ as a challenge.
Of course, it’s called the ‘hard problem’ for a good reason: anarchists and libertarians don’t have very good answers for it. But among the plausible proposed solutions are: (1) without nation-states, there won’t be the motivation to attack a territory with devastating, resource-destroying weapons; (2) Mutually Assured Destruction will prevent the few agents with the resources to create these sorts of dangerous weapons from unleashing them; and (3) distributed, polycentric governance mechanisms are more effective than monopolistic legal enforcement.
It’s not obvious that (1)–(3) work as well for the scenarios Bostrom has in mind as they do for the question of countries attacking territories with nukes. (1) is irrelevant, since the issue here isn’t one of countries doing anything; in fact, there doesn’t have to be any agent with nefarious purposes at all. In the scenarios typically discussed around AI danger, it’s self-interested individuals or firms trying to win the AI race that inadvertently set off the catastrophic chain of events. (2) also does not apply, since part of the danger under discussion is that the feared technology will be by nature cheap and easily accessible, so deterrence aimed at a few well-resourced actors has no purchase. And while I agree with the general claim made by (3), it’s not clear that it remains true under the special circumstances in question. Bostrom makes a compelling case that, in order to prevent an easily accessible but possibly species-destroying technology from being unleashed, either mass surveillance or mass centralized governance would need to be in place.
The danger here is also larger: it isn’t just a matter of one territory or another being attacked, but of the entire species potentially being wiped out.
I’m not quite convinced that Bostrom’s argument is right. It may just be that, even for these sorts of scenarios, polycentric governance is more likely to be effective than centralized government. To his credit, Bostrom doesn’t claim to be sure either. But he makes a compelling case for the matter to be discussed, and on that I could not agree more. Calling all anarchists and libertarians: what say you?