Classifying Extinction Risks
by Lifeboat Foundation Scientific Advisory Board member Michael Anissimov.

There are a number of techno-apocalypse classification schemes in existence.
Oxford philosopher Nick Bostrom’s paper, Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards, divides extinction risks into bangs, crunches, shrieks, and whimpers. These are, respectively: sudden disasters that kill everyone on the planet; scenarios where mankind survives but its potential to develop into posthumanity is thwarted; scenarios where some form of posthumanity is attained, but only an extremely narrow band of what is possible and desirable; and dystopias where a posthuman society develops but slowly decays into something undesirable. Various examples of each are given in the paper.
The problem with Bostrom’s classification scheme is that it locks people out of the discussion, because it revolves around the controversial notion of posthumanity. While it is relevant to transhumanists, it alienates non-transhumanists. Although Bostrom’s scheme is among the most complete and useful, it is also the least marketable: it will never spread very far beyond the transhumanist community.
Another classification scheme is the eschatological taxonomy by Jamais Cascio on Open the Future. His scheme has seven categories, one with two sub-categories. These are:
0. Regional Catastrophe (examples: moderate-case global warming, minor asteroid impact, local thermonuclear war)
1. Human Die-Back (examples: extreme-case global warming, moderate asteroid impact, global thermonuclear war)
2. Civilization Extinction (examples: worst-case global warming, significant asteroid impact, early-era molecular nanotech warfare)
3a. Human Extinction: Engineered (examples: targeted nano-plague, engineered sterility absent radical life extension)
3b. Human Extinction: Natural (examples: major asteroid impact, methane clathrates melt)
4. Biosphere Extinction (examples: massive asteroid impact, “Iceball Earth” reemergence, late-era molecular nanotech warfare)
5. Planetary Extinction (examples: dwarf-planet-scale asteroid impact, nearby gamma-ray burst)
X. Planetary Elimination (example: post-Singularity beings disassemble planet to make computronium)
This classification scheme received popular attention, appearing on BoingBoing. I can see how it would be friendlier and more interesting to people beyond transhumanists.
My one objection, from a PR perspective, is that it contains slightly too many words and classifications to be passed along in concise form. For the public to really become aware of the new extinction risks, the risks will have to be boiled down and passed around as a sound bite, or nearly so.
One thing I do like about Jamais’ list is that it is sufficiently specific at each level that the risks are named precisely, rather than in the abstract. For instance, instead of saying “risks from nanotechnology, AI, and robotics” a la Bill Joy, Jamais says “engineered sterility absent radical life extension”, “late-era molecular nanotech warfare”, and so on. This precision is useful, because without it, most people have no clue what you’re talking about. When someone hears of “risks from advanced biotechnology”, they may be picturing the biotech monsters from Resident Evil. Unless you say exactly what you mean, it’s liable to be misconstrued.
To condense Jamais’ list, I propose tossing out all the natural risks: asteroid impacts and gamma-ray bursts, basically. Natural risks belong only in lists meant to be comprehensive, or in lists that make a point of including low-probability risks alongside high-probability ones. For example, the Lifeboat Foundation’s programs page includes entries for almost every risk imaginable, because the focus there is comprehensiveness. On a risk shortlist, though, the natural risks should go. Why? Because the probability of these risks occurring is minuscule. An asteroid capable of killing everyone on Earth is only expected to come around every few dozen or hundred million years. We know that intense gamma-ray bursts are rare from our observations of them occurring elsewhere, plus evidence from the fossil record. If nature were tossing extinction disasters at life on a regular basis, life wouldn’t be here.
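To put “minuscule” into rough numbers, here is a minimal back-of-the-envelope sketch of the arithmetic (my own illustration, not part of either classification scheme, and it assumes such impacts arrive as a simple Poisson process):

# Back-of-the-envelope sketch (assumes a Poisson arrival process): convert a
# mean recurrence interval for extinction-class asteroid impacts into a rough
# probability of at least one such impact occurring in any given century.
import math

def per_century_probability(mean_recurrence_years: float) -> float:
    """Probability of at least one event in a 100-year window."""
    rate_per_year = 1.0 / mean_recurrence_years
    return 1.0 - math.exp(-rate_per_year * 100)

# "Every few dozen or hundred million years" -- try both ends of that range.
for interval_years in (50e6, 100e6):
    print(f"{interval_years:.0e}-year recurrence -> "
          f"~{per_century_probability(interval_years):.1e} chance per century")

Either way, it works out to something on the order of one chance in a million per century, which is what “minuscule” means here.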
The risks emanating from nature have been around for billions of years, and we have no reason to expect they’ll become more probable in the next few hundred years. Meanwhile, as technology improves radically over that time frame, the risks of man-made apocalypse will increase. Because the man-made risks are so much more likely to begin with, and their likelihood and intensity are increasing with time, existential risk dialogue and preventive strategies should focus on the artificial risks.
Although Jamais’ classification scheme didn’t include any forms of terrorism among its examples, I’d like to point out here that terrorism is not an existential risk. This was well articulated by Tom McCabe a few days ago.
Looking at Jamais’ scheme again, I feel safe throwing out the first two categories, because they aren’t real risks to the species as a whole. Mentioning thermonuclear war is important for getting the gears turning in people’s heads, but ultimately, there is very little evidence that it could kill all 6-billion-plus people on Earth.
I think the class 2 risks (Civilization Extinction) are worth looking at, because most forces capable of wiping out civilization would probably also be capable of wiping out the species as a whole. I am very skeptical that even worst-case global warming would threaten civilization in its entirety, and even if it could, this cause already receives tens or hundreds of thousands of times more dollars and hours toward its mitigation than any of the other risks, so directing attention toward it has less marginal utility than the possible alternatives.
In class 2, Jamais mentions early-era nano-warfare. This is definitely a risk to civilization, although perhaps not to the entire species. However, it’s more radical and harder to swallow than a few other risks: genetically engineered viruses, genetically engineered bacteria, and synthetic biology. (These three domains blur into one another.) Wrapping up all of this so far, we have the following possible class 2 risks:
1. man-made viruses
2. man-made bacteria
3. life with nonbiological components or nonstandard genetics (synthetic biology)
4. nano-warfare conducted with weapons made by Drexlerian-style nanomachines
Notice how specifically I chose to state the last one… this is to make it clear that weapons built using non-Drexlerian forms of nanotechnology are not at all in the same class as those built using Drexlerian nanomachines. As long as Drexlerian nanotechnology is not developed, we can safely say that #4 is not a risk right now, although preparation for the possible emergence of this risk could hardly hurt.
What else is there besides the above four? Four more come to mind, and I have a feeling that they’re inherently less friendly to being understood by the public and policymakers as things stand today:
5. any runaway self-replicating machine that is indigestible (likely to be based on Drexlerian nanotechnology)
6. recursive self-enhancement explosion by intelligence-enhanced human being
7. recursive self-enhancement explosion by mind upload
8. recursive self-enhancement explosion by artificial general intelligence
The reasons why recursively self-enhancing intelligences are likely to be a threat to humanity if left unchecked were addressed well by Steve Omohundro at the recent Singularity Summit. Basically, acquiring resources is a convergent subgoal for arbitrary agents: even if you don’t explicitly program it in, agents will start to display this behavior, because it provably provides positive utility in the absence of specific injunctions against it. For the sake of completeness, I’m including these recursive self-enhancement scenarios even though I don’t have space here to defend them at length. You can ignore them if you’d like.
Now, in an attempt to squish these eight risks down even further, here are some short, snappy titles I’ve come up with.
The Easier-to-Explain Existential Risks (remember, an existential risk is something that can set humanity way back, not necessarily kill everyone):
1. neoviruses
2. neobacteria
3. cybernetic biota
4. Drexlerian nanoweapons
The hardest to explain is probably #4. My proposal here is that, if someone has never heard of the concept of existential risk, it’s easier to focus on these first four before even daring to mention the latter ones. But here they are anyway:
5. runaway self-replicating machines (“grey goo” is not recommended because it’s too narrow a term)
6. destructive takeoff initiated by an intelligence-amplified human
7. destructive takeoff initiated by a mind upload
8. destructive takeoff initiated by an artificial intelligence
I know these last ones are not as snappy as the first four, but I’m putting forward these alternative titles in case they prove helpful.
So, this is a model I propose for informing people about existential risks: four easier-to-explain ones, and four harder-to-explain ones. Is it useful and sufficiently comprehensive?