I want to talk about the very long term future of the human race. Most people only think about what the future might be like for themselves and their children’s generation. But the fact of the matter is that, if things go well for humanity over the next one or two centuries, there isn’t really anything stopping our civilization from thriving for billions of years into the future.
Yes, in a few billion years our Sun will swell into a red giant, likely engulfing the Earth before it finally dies out. But before then we will have plenty of time to figure out interstellar travel and find a new home for Earth’s inhabitants. Even after the last star dies out, our distant descendants could power their civilization using black holes. If there is any chance at all that we could build a happy, thriving civilization that takes advantage of these vast, cosmological expanses of time, we had better start figuring out how.
The emergence of existential risk
Now that I’ve got you thinking about the far future, it’s time to consider how human civilization could go wrong in ways that stop us from reaching our potential. The philosopher Nick Bostrom introduced the concept of existential risk to refer to any threat that would either lead to the extinction of humanity, or would permanently and drastically curtail our potential as a species. While we’re not used to thinking about it, existential risk is really, really important. Almost by definition, it is by far the most important thing anyone could ever worry about.
For most of human history, we didn’t have to worry about existential risk. Of course, there has always been some small chance that a natural event (like an asteroid impact) could cause humanity to go extinct. But until very recently, we simply didn’t have the technology necessary to destroy ourselves as a species. And our communication, coercion, and surveillance technology wasn’t good enough to allow any person or group to enforce a dystopian social system on the rest of humanity indefinitely.
As our technology has improved, however, we have had to face the specter of human-caused existential disaster. We can never put this genie back in its bottle. For every year that our civilization continues to exist, there is some small but nonzero probability that we will destroy ourselves one way or another. Over a decade or a century, that annual risk compounds: even a tiny yearly chance of catastrophe adds up to a substantial cumulative risk. If we want our civilization to survive for billions of years, we will have to make the probability of catastrophe vanishingly small, and keep it that way.
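To see how quickly a constant annual risk compounds, here is a minimal sketch. The 0.1% annual risk figure is purely an illustrative assumption, not an estimate from this essay:

```python
# Cumulative survival probability under a constant annual risk of
# self-inflicted catastrophe. The risk figure below is an illustrative
# assumption, not an actual estimate.

def survival_probability(annual_risk: float, years: int) -> float:
    """Probability of surviving `years` consecutive years,
    assuming each year's risk is independent."""
    return (1.0 - annual_risk) ** years

# Even a modest 0.1% annual risk compounds dramatically over long spans.
for years in (10, 100, 1000, 10000):
    p = survival_probability(0.001, years)
    print(f"{years:>6} years: survival probability {p:.4f}")
```

Under that assumed risk level, a civilization is more likely than not to survive a century, but over ten thousand years its survival probability falls essentially to zero, which is why the essay argues the annual risk must be driven vanishingly low.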
Scientists and policymakers first began to worry about human extinction with the advent of nuclear weapons. Soon after July 1945, when the United States detonated the first nuclear weapon, scientists raised serious concerns that this technology would enable destruction and death on a scale never before seen in human history. When the USSR carried out its first nuclear test in 1949, the risk became very real: there were now two hostile powers on Earth, each with the capacity to initiate nuclear war. So far, humanity has been very lucky. We’ve narrowly escaped catastrophe on several occasions. During the Cuban Missile Crisis, for example, President Kennedy reckoned that the probability of war was “between 1 in 3 and even.”
While tensions between nuclear powers aren’t nearly as high now as they were during the Cold War, nuclear war remains a real possibility as long as there are multiple competing states with large nuclear arsenals. The only real long-term solution is to concentrate all the world’s nuclear weapons in the hands of a transparent, democratic global institution like the United Nations, eliminating the incentive for arms races.
While the results of nuclear war would be truly catastrophic, it’s not clear that the most likely outcome of such a war would be human extinction. There are many ways in which small numbers of humans could survive a nuclear winter and gradually re-establish civilization. Synthetic biology, on the other hand, presents a more plausible route to the total annihilation of humanity.
Using genetic engineering techniques, governments as well as terrorist groups will be able to design ultra-deadly, highly communicable viruses and release them into the ecosystem, starting a global pandemic. This scenario is all the more worrying because the expertise and equipment needed to design such a virus are likely to be modest. Already, you can buy all the equipment needed for CRISPR-Cas9 gene editing online for $159, and middle schoolers are using the technology in their science classrooms. It will be impossible for governments to keep these technologies away from bad actors.
The solution here is to fight fire with fire. States will need to develop rapid-response systems that can develop and distribute vaccines to protect against synthetic pathogens in a matter of days. Since it’s virtually impossible to stop the spread of pathogens across national borders, these protective measures will be much more effective if they are implemented at the global, rather than national level.
An even more serious threat than synthetic biology is nanotechnology: the ability to precisely manipulate atoms and molecules in order to build nano-sized machines on a mass scale. Nanomachines have enormous promise: they could ultimately be used to clean up the environment, roam our bloodstreams to protect against disease, create ultra-powerful computers, and much more. But they are also incredibly dangerous, especially if they become self-replicating. Governments could deploy swarms of self-replicating nanomachines as deadly weapons, capable of killing millions and wreaking havoc on ecosystems and infrastructure. As nanotechnology becomes cheaper and more widely available, rogue actors could also use it to inflict tremendous harm. As the nanotechnologist Eric Drexler writes:
“Early assembler-based replicators could beat the most advanced modern organisms. ‘Plants’ with ‘leaves’ no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough, omnivorous ‘bacteria’ could out-compete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop — at least if we made no preparation. We have trouble enough controlling viruses and fruit flies.”
Self-replicating nanotechnology therefore qualifies as an existential risk— the nightmare scenario would lead to the annihilation of the human race. The solution is to develop a rapid-response “nanotechnological immune system” that could use satellites to detect swarms of dangerous nanomachines and neutralize them before they become too powerful. This immune system would need to be global in order to be effective— if any region isn’t sufficiently protected, the problem could get out of control before other governments can react. It’s also important to concentrate offensive nanotechnological capabilities in the hands of a global institution because, without this, there will be a very strong incentive for states to engage in deadly arms races that could lead to war.
The dangers of artificial intelligence have been getting a lot more media attention in the past few years, and for good reason. In a recent survey of AI researchers, most experts in the field agreed that there is at least a 70% chance that artificial intelligences will exceed human abilities in all domains before the year 2100. This means that superintelligences— AIs that dramatically outperform humans in all domains— could very well become a serious threat during this century. The problem is that, once these systems become sufficiently powerful, they will effectively replace human beings as the dominant life form on Earth. It will be immensely important to ensure that these superintelligences have value systems that are aligned with our own. This is referred to as the goal alignment problem.
The goal alignment problem is all the more worrying when you consider that a superintelligence would, by definition, be better than human beings at AI development. This could lead to a recursive self-improvement loop, where the system modifies itself to become more intelligent, which in turn makes it more capable of improving itself further, and so on, many times over. This scenario is referred to as an intelligence explosion. If this seems like an implausible idea, consider that an AGI would be able to copy itself onto millions of computers over the Internet, thereby increasing its raw computational power by several orders of magnitude in a matter of days. Such a system would have all of human knowledge at its disposal, and it would have the processing power to understand it all, find patterns in the chaos, and make plans based on its findings. It would be nearly unstoppable.
Whichever organization kicks off an intelligence explosion first would quickly open up a very large lead over rival research teams. The resulting superintelligence would have no peer competitors to keep it in check. It could use manipulation, coercion, and advanced technologies to shape the future of humanity in accordance with its preferences, which may or may not match those of its designers. If it becomes widely known that artificial general intelligence is just around the corner, corporations and states might engage in an arms race to be the first to start an intelligence explosion. The stakes would be astronomically large: indefinite world domination. Such an arms race could even lead to pre-emptive war aimed at delaying the research progress of rivals.
The solution here is global political integration and public oversight of artificial intelligence research. Governments should start investing public research funds into the problem of AI goal alignment. Ideally, public research into AI should be done at a global level, to reduce the incentive for arms races. AI experts disagree about the likelihood of an intelligence explosion, but we had better be prepared for the worst case scenario. If such an explosion does occur, it needs to happen under the careful oversight of a transparent, democratic, and benevolent international organization. That way, we can ensure that the immense benefits of superintelligence are shared with all of humanity.
The climate crisis will almost certainly not lead to complete human extinction, but it is nevertheless a very serious problem, and one that requires a coordinated global response. This will be especially true if geoengineering— the deliberate engineering of the environment to counteract climate change— becomes necessary. Governments might unilaterally embark on their own efforts to change the composition of the atmosphere, starting feuds that could quickly lead to war. Global political integration would allow for binding international emissions regulations, and coordinated global investment in renewable energies. It seems unlikely that the climate crisis can be solved without much more political integration than we now have.
How to avoid catastrophe: a political solution
We’ve seen that every kind of existential risk we face could be mitigated much more effectively with global political integration. What we need is a democratic United Nations with real teeth; a world state that could put an end to arms races and take steps to protect all humanity. As long as there is fragmentation and anarchy at the international level, our species will not be able to survive for the long term. Humanity needs to be united; it needs a single voice.
But we will have to avoid the pitfalls that have plagued regional attempts at political integration, like the European Union. Europe is in severe crisis right now because it attempted economic integration (free trade and a single currency) before implementing political integration (a central government with the power to tax and spend). This model can only lead to a race to the bottom, and it won’t do anything to address the very real existential risks that our species will face this century. Neoliberal free trade deals are not what we need— we need a democratic world state, empowered to take bold action on the most pressing issues of our time.
Integration will be a gradual process, and it will require bold political leadership in the rich countries. Nations will need to be prepared to sacrifice some of their sovereignty in exchange for security. As the effects of climate change continue to compound, we can hope for some movement in this direction. None of this will happen automatically, however. The political Left in particular has a duty to make global political integration one of its long-term priorities. We should begin to argue for integration on security grounds: climate change, nuclear weapons, and emerging technologies are all serious threats to public safety, and they can only be tackled at the international level. Once established, the world state could implement worker-friendly policies and set global labor standards, since corporations will have nowhere else to go. Unlike individual nation-states, it will not have to implement austerity in order to achieve “competitiveness.”
The specter of totalitarianism
Critics will argue that, by ending the competition between states, global political integration would open the door for a global totalitarianism. The concern is that if a power-hungry demagogue were ever elected as the global head of state, they could quickly consolidate power, ending democratic elections and establishing a global autocracy from which there would be no appeal. This is clearly a concern that should not be taken lightly.
The problem is that totalitarianism will increasingly become a threat in the future, with or without a world state. Currently, elected leaders in parliamentary democracies don’t usually become dictators because they know that the bureaucracy, the police, and the military won’t follow orders that are clearly unconstitutional or illegal. But as more and more of the military is automated and replaced with autonomous weapons, there is a real risk that power-hungry leaders could ignore the rule of law and use their totally obedient “droid army” to coerce everyone into following their commands. If autonomous weapons and modern surveillance technology were used to enforce a global, indefinitely stable totalitarianism, this itself would qualify as an existential catastrophe, arguably no better than extinction.
There are technical and institutional solutions to this problem, but we will have to be proactive in implementing the proper security protocols. Autonomous weapons systems should be designed to require the approval of many different state officials in order to be fully deployed, so as to ensure that one president or rogue general couldn’t use them to carry out a one-man coup d’état. Once we develop the right security protocols, we will be able to use them to protect against despotism both at the national and international levels. Global political integration won’t make the risk any more serious than it already is. In fact, a world state could actually be our greatest defense against regional totalitarianism, allowing us to ensure that civil liberties and democratic elections are protected in all member states.
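The multi-official approval requirement described above amounts to a quorum rule. Here is a minimal sketch of such a check, where deployment proceeds only if a quorum of distinct, authorized officials sign off; the official titles and quorum size are hypothetical, chosen purely for illustration:

```python
# Hypothetical k-of-n authorization check for deploying an autonomous
# weapons system. The roles and quorum size are illustrative assumptions.

AUTHORIZED_OFFICIALS = {"president", "defense_secretary",
                        "chief_of_staff", "oversight_chair"}
QUORUM = 3  # minimum number of distinct approvals required

def deployment_authorized(approvals: set[str], quorum: int = QUORUM) -> bool:
    """True only if at least `quorum` distinct authorized officials approve."""
    valid = approvals & AUTHORIZED_OFFICIALS  # discard unrecognized approvers
    return len(valid) >= quorum

# A single official, however senior, cannot deploy alone:
print(deployment_authorized({"president"}))                        # False
print(deployment_authorized({"president", "defense_secretary",
                             "oversight_chair"}))                  # True
```

In a real system the approvals would of course be cryptographically signed rather than simple names, but the design point is the same: no single president or rogue general can clear the threshold alone.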
Grow or die: the need for space colonization
Once we’ve established a well-intentioned, democratic world state, we can start planning to hunker down for the long haul. We will need to reduce the risk of species-wide catastrophe to negligible levels— and the best way to do that is to become a multi-planetary species.
Right now, if a catastrophe occurs on Earth, there is no other world that humanity can turn to. Since the catastrophes we’ve discussed are likely to strike suddenly, without advance warning, there would be no time to establish a self-sufficient colony on the Moon or Mars once disaster is underway. This is why it’s imperative for our species to establish a self-sufficient presence on another world ahead of time: it would give us a back-up if anything goes horribly wrong on Earth. And our first destination should be Mars.
While there has been much fanfare in recent years about Elon Musk’s successful forays into private space travel, it is very important that the first colonies on Mars are established by governments, not corporations. This is the only way to ensure that Mars is a new frontier open to all humanity, not a playground for billionaires. And to the greatest extent possible, Mars colonization should be undertaken by global coalitions of governments, not individual states. We will need to minimize the tendency for nation-states to fight over Martian territory and resources.
Over time, it is inevitable that Martian society will start to assert its political independence from Earth— the communication and travel delays are simply too great to maintain a strong centralized state encompassing both Earth and Mars. But this need not be a bad thing. As long as Earth and Mars each have strong, democratic, planetary governments that can keep advanced technologies under control in their own jurisdictions, there will be little to worry about. The long distances between planets (let alone between star systems) will strongly discourage war. And if war does break out, no one planet will be strong enough to annihilate or conquer all the others. Once humanity spreads out across the galaxy, the species will truly be secure. The vast distances of space will ensure that humanity will once again be unable to destroy itself— even if it wanted to.