How to Avoid Human Extinction

I want to talk about the very long-term future of the human race. Most people only think about what the future might be like for themselves and their children’s generation. But the fact of the matter is that, if things go well for humanity over the next one or two centuries, there isn’t really anything stopping our civilization from thriving for billions of years into the future.

Yes, in a few billion years our Sun will swell and engulf the Earth before burning out, but before then we will have plenty of time to figure out interstellar travel and find a new home for Earth’s inhabitants. Even after the last star dies out, our distant descendants could power their civilization using black holes. If there is any chance at all that we could build a happy, thriving civilization that could take advantage of these vast, cosmological expanses of time, we had better start figuring out how.

The emergence of existential risk

Now that I’ve got you thinking about the far future, it’s time to think about how human civilization could go wrong in ways that stop us from reaching our potential. The philosopher Nick Bostrom has introduced the concept of existential risk to refer to any threat that would either lead to the extinction of humanity, or would permanently and drastically curtail our potential as a species. While we’re not used to thinking about it, existential risk is really, really important. Actually, almost by definition, it’s by far the most important thing anyone could ever worry about.

For most of human history, we didn’t have to worry about existential risk. Of course, there has always been some small risk that a natural event (like an asteroid impact) could cause humanity to go extinct. But for most of human history, we simply didn’t have the technology necessary to destroy ourselves as a species. And our communication, coercion, and surveillance technology wasn’t good enough to allow any person or group to enforce a dystopian social system on the rest of humanity indefinitely.

As our technology has improved, however, we have had to face the specter of human-caused existential disaster. We can never put this genie back in its bottle. For every year that our civilization continues to exist, there is some small but nonzero probability that we will destroy ourselves one way or another. Over a decade or a century, that annual risk compounds ten or a hundred times over. If we want our civilization to survive for billions of years, we will have to make the probability of catastrophe vanishingly small, and keep it that way.
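
To see how unforgiving this compounding is, here is a minimal back-of-the-envelope sketch in Python. The 0.1% annual risk is purely an illustrative assumption, not an estimate of the actual risk.

```python
# Chance that civilization survives n years, assuming a constant
# (hypothetical) annual probability p of self-inflicted catastrophe.
def survival_probability(p_annual: float, years: int) -> float:
    return (1 - p_annual) ** years

# Illustrative only: even a 0.1% annual risk is ruinous at scale.
for years in (100, 1_000, 10_000):
    odds = survival_probability(0.001, years)
    print(f"{years:>6} years: {odds:.1%} chance of survival")
# Prints roughly 90.5%, 36.8%, and 0.0% respectively.
```

A risk that looks survivable on the scale of a century becomes near-certain doom on the scale of millennia, unless the annual probability is driven toward zero.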

Nuclear holocaust

Scientists and policymakers first began to worry about human extinction with the advent of nuclear weapons. Soon after July 1945, when the United States Army detonated its first nuclear weapon, scientists raised serious concerns that this technology would enable wars of destruction and death on a scale never before seen in human history. And when the USSR carried out its first nuclear test in 1949, this risk became very real. There were now two hostile powers on Earth, each with the capacity to initiate nuclear war. So far, humanity has been very lucky. We’ve narrowly escaped catastrophe on several occasions: during the Cuban Missile Crisis, for example, President Kennedy reckoned that the probability of war was “between 1 in 3 and even.”

While tensions between nuclear powers aren’t nearly as high now as they were during the Cold War, nuclear war remains a real possibility as long as there are multiple competing states with large nuclear arsenals. The only real long-term solution is to concentrate all the world’s nuclear weapons in the hands of a transparent, democratic global institution like the United Nations. This would eliminate the incentive for arms races.

Synthetic biology

While the results of nuclear war would be truly catastrophic, it’s not actually clear that the most likely outcome of such a war would be human extinction. There are many ways in which small numbers of humans could survive a nuclear winter and gradually re-establish civilization. Synthetic biology, on the other hand, presents a more plausible route to the total annihilation of humanity.

Using genetic engineering techniques, governments as well as terrorist groups will be able to design ultra-deadly, highly communicable viruses and release them into the ecosystem, starting a global pandemic. This scenario is all the more worrying because the expertise and equipment needed to design such a virus will likely be modest. Already, you can buy all the equipment needed for CRISPR-Cas9 gene editing online for $159, and middle schoolers are using the technology in their science classrooms. It will be impossible for governments to keep these technologies out of the hands of bad actors.

The solution here is to fight fire with fire. States will need to develop rapid-response systems that can develop and distribute vaccines to protect against synthetic pathogens in a matter of days. Since it’s virtually impossible to stop the spread of pathogens across national borders, these protective measures will be much more effective if they are implemented at the global, rather than national level.

Nanotechnology

An even more serious threat than synthetic biology is nanotechnology, the ability to precisely manipulate atoms and molecules in order to build nano-sized machines on a mass scale. Nanomachines have enormous promise: they could ultimately be used to clean up the environment, roam our bloodstreams to protect against disease, create ultra-powerful computers, and much more. But they are also incredibly dangerous, especially if they become self-replicating. Governments could deploy swarms of self-replicating nanomachines as deadly weapons, capable of killing millions and wreaking havoc on ecosystems and infrastructure. As nanotechnology becomes cheaper and more widely available, rogue actors could also use it to inflict tremendous harm. As the nanotechnologist Eric Drexler writes:

“Early assembler-based replicators could beat the most advanced modern organisms. ‘Plants’ with ‘leaves’ no more efficient than today’s solar cells could out-compete real plants, crowding the biosphere with an inedible foliage. Tough, omnivorous ‘bacteria’ could out-compete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop — at least if we made no preparation. We have trouble enough controlling viruses and fruit flies.”
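
The arithmetic behind “a matter of days” is easy to sketch. The one-hour doubling time below is a purely hypothetical figure chosen for illustration:

```python
# Exponential growth of an unchecked self-replicator. The one-hour
# doubling time is a hypothetical illustration, not a measured figure.
DOUBLING_TIME_HOURS = 1.0

def replicator_count(hours: float, initial: int = 1) -> float:
    """Population after `hours`, doubling every DOUBLING_TIME_HOURS."""
    return initial * 2 ** (hours / DOUBLING_TIME_HOURS)

print(f"after 1 day:  {replicator_count(24):.2e}")  # ~1.7e7 machines
print(f"after 3 days: {replicator_count(72):.2e}")  # ~4.7e21 machines
```

Whatever the true doubling time turns out to be, the lesson is the same: exponential replication leaves almost no time to react once it begins.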

Self-replicating nanotechnology therefore qualifies as an existential risk— the nightmare scenario would lead to the annihilation of the human race. The solution is to develop a rapid-response “nanotechnological immune system” that could use satellites to detect swarms of dangerous nanomachines and neutralize them before they become too powerful. This immune system would need to be global in order to be effective— if any region isn’t sufficiently protected, the problem could get out of control before other governments can react. It’s also important to concentrate offensive nanotechnological capabilities in the hands of a global institution because, without this, there will be a very strong incentive for states to engage in deadly arms races that could lead to war.

Artificial intelligence

The dangers of artificial intelligence have been getting a lot more media attention in the past few years, and for good reason. A recent survey of AI researchers found that most experts in the field believe there is at least a 70% chance that artificial intelligences will exceed human abilities in all domains before the year 2100. This means that superintelligences, AIs that dramatically outperform humans in all domains, could very well become a serious threat during this century. The problem with superintelligence is that, once these systems become sufficiently powerful, they will effectively replace human beings as the dominant life form on Earth. It will be immensely important to ensure that these superintelligences have value systems that are aligned with our own. This is referred to as the goal alignment problem.

The goal alignment problem is all the more worrying when you consider that a superintelligence would, by definition, be better than human beings at AI development. This could lead to a recursive self-improvement loop, where the system modifies itself to become more intelligent, which in turn makes it better at improving itself further, and so on, many times over. This scenario is referred to as an intelligence explosion. If this seems implausible, consider that an AGI would be able to copy itself onto millions of computers over the Internet, thereby increasing its raw computational power by several orders of magnitude in a matter of days. Such a system would have all of human knowledge at its disposal, and it would have the processing power to understand it all, find patterns in the chaos, and make plans based on its findings. It would be nearly unstoppable.
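
A toy model makes this feedback dynamic concrete. To be clear, this is an illustration of the concept, not a prediction; every parameter below is invented.

```python
# Toy model of recursive self-improvement: each step, the system's
# ability to improve itself scales superlinearly with its current
# capability. All parameters are invented for illustration.
def self_improvement(capability=1.0, rate=0.1, feedback=1.1, steps=60):
    for step in range(1, steps + 1):
        capability += rate * capability ** feedback
        if step % 20 == 0:
            print(f"step {step:>3}: capability {capability:,.1f}")
    return capability

self_improvement()
# With feedback > 1, growth is faster than exponential: progress that
# took twenty steps at the start later happens in a fraction of one.
```

With the feedback exponent above 1, each unit of capability buys more than one unit of future progress, which is precisely the runaway regime the term “intelligence explosion” describes.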

The first organization to kick off an intelligence explosion would quickly open up a very large lead over rival research teams. The resulting superintelligence would therefore have no peer competitors to keep it in check. It could use manipulation, coercion, and advanced technologies to shape the future of humanity in accordance with its preferences, which may or may not match those of its designers. If it becomes widely known that artificial general intelligence is just around the corner, corporations and states might be motivated to engage in an arms race to become the first organization to start an intelligence explosion. The stakes involved would be astronomically large: indefinite world domination. Such an arms race could also lead to pre-emptive war, launched in an effort to delay the research progress of rivals.

The solution here is global political integration and public oversight of artificial intelligence research. Governments should start investing public research funds in the problem of AI goal alignment. Ideally, public AI research should be done at the global level, to reduce the incentive for arms races. AI experts disagree about the likelihood of an intelligence explosion, but we had better be prepared for the worst-case scenario. If such an explosion does occur, it needs to happen under the careful oversight of a transparent, democratic, and benevolent international organization. That way, we can ensure that the immense benefits of superintelligence are shared with all of humanity.

Climate change

The climate crisis will almost certainly not lead to complete human extinction, but it is nevertheless a very serious problem, and one that requires a coordinated global response. This will be especially true if geoengineering— the deliberate engineering of the environment to counteract climate change— becomes necessary. Governments might unilaterally embark on their own efforts to change the composition of the atmosphere, starting feuds that could quickly lead to war. Global political integration would allow for binding international emissions regulations, and coordinated global investment in renewable energies. It seems unlikely that the climate crisis can be solved without much more political integration than we now have.

How to avoid catastrophe: a political solution

We’ve seen that every kind of existential risk we face could be mitigated much more effectively with global political integration. What we need is a democratic United Nations with real teeth: a world state that could put an end to arms races and take steps to protect all humanity. As long as there is fragmentation and anarchy at the international level, our species will not be able to survive for the long term. Humanity needs to be united; it needs a single voice.

But we will have to avoid the pitfalls that have plagued regional attempts at political integration, like the European Union. Europe is in severe crisis right now because it attempted economic integration (free trade and a single currency) before implementing political integration (a central government with the power to tax and spend). This model can only lead to a race to the bottom, and it won’t do anything to address the very real existential risks that our species will face this century. Neoliberal free trade deals are not what we need— we need a democratic world state, empowered to take bold action on the most pressing issues of our time.

Integration will be a gradual process, and it will require bold political leadership in the rich countries to ensure it happens. Nations will need to be prepared to sacrifice some of their sovereignty in exchange for security. As the effects of climate change continue to compound, we can hope for some movement in this direction. None of this will happen automatically, however. The political Left in particular has a duty to make global political integration one of its long-term priorities. We should begin to argue for integration on security grounds: climate change, nuclear weapons, and emerging technologies are all serious threats to public safety, and they can only be tackled at the international level. Once established, the world state could implement worker-friendly policies and set global labor standards, since corporations will have nowhere else to go. Unlike individual nation-states, it will not have to implement austerity in order to achieve “competitiveness.”

The specter of totalitarianism

Critics will argue that, by ending competition between states, global political integration would open the door to global totalitarianism. The concern is that if a power-hungry demagogue were ever elected as the global head of state, they could quickly consolidate power, ending democratic elections and establishing a global autocracy from which there would be no appeal. This concern clearly should not be taken lightly.

The problem is that totalitarianism will increasingly become a threat in the future, with or without a world state. Currently, elected leaders in parliamentary democracies don’t usually become dictators because they know that the bureaucracy, the police, and the military won’t follow orders that are clearly unconstitutional or illegal. But as more and more of the military is automated and replaced with autonomous weapons, there is a real risk that power-hungry leaders could ignore the rule of law and use their totally obedient “droid army” to coerce everyone into following their commands. If autonomous weapons and modern surveillance technology were used to enforce a global, indefinitely stable totalitarianism, this itself would qualify as an existential catastrophe, arguably no better than extinction.

There are technical and institutional solutions to this problem, but we will have to be proactive in implementing the proper security protocols. Autonomous weapons systems should be designed to require the approval of many different state officials in order to be fully deployed, so as to ensure that one president or rogue general couldn’t use them to carry out a one-man coup d’état. Once we develop the right security protocols, we will be able to use them to protect against despotism both at the national and international levels. Global political integration won’t make the risk any more serious than it already is. In fact, a world state could actually be our greatest defense against regional totalitarianism, allowing us to ensure that civil liberties and democratic elections are protected in all member states.
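
Here is a minimal sketch of what such a protocol might look like in software. The official titles and the three-of-four threshold are invented for illustration, and a real system would rest on threshold cryptography and tamper-resistant hardware rather than a simple in-memory check.

```python
# Hypothetical k-of-n authorization gate for deploying an autonomous
# weapons system: no single official can authorize deployment alone.
from dataclasses import dataclass, field

@dataclass
class DeploymentGate:
    officials: frozenset[str]        # the n officials allowed to approve
    quorum: int                      # approvals required (k of n)
    approvals: set[str] = field(default_factory=set)

    def approve(self, official: str) -> None:
        if official not in self.officials:
            raise PermissionError(f"{official!r} may not approve")
        self.approvals.add(official)  # re-approval by one person is a no-op

    def may_deploy(self) -> bool:
        return len(self.approvals) >= self.quorum

gate = DeploymentGate(
    frozenset({"head_of_state", "defense_minister",
               "legislative_chair", "chief_justice"}),
    quorum=3,
)
gate.approve("head_of_state")
print(gate.may_deploy())  # False: one would-be autocrat is not enough
```

The design point is that authority is distributed: compromising any single office, even the highest one, is not enough to field the weapons.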

Grow or die: the need for space colonization

Once we’ve established a well-intentioned, democratic world state, we can start planning to hunker down for the long haul. We will need to reduce the risk of species-wide catastrophe to negligible levels— and the best way to do that is to become a multi-planetary species.

Right now, if a catastrophe occurs on Earth, there is no other world that humanity can turn to. And since the catastrophes we’ve discussed are likely to strike suddenly, without advance warning, there would be no time to establish a self-sufficient colony on the Moon or Mars once disaster is already underway. This is why it’s imperative for our species to establish a self-sufficient presence on another world in advance: it would give us a back-up if anything goes horribly wrong on Earth. And our first destination should be Mars.

While there has been much fanfare in recent years about Elon Musk’s successful forays into private space travel, it is very important that the first colonies on Mars are established by governments, not corporations. This is the only way to ensure that Mars is a new frontier open to all humanity, not a playground for billionaires. And to the greatest extent possible, Mars colonization should be undertaken by global coalitions of governments, not individual states. We will need to minimize the tendency for nation-states to fight over Martian territory and resources.

Over time, it is inevitable that Martian society will start to assert its political independence from Earth— the communication and travel delays are simply too great to maintain a strong centralized state encompassing both Earth and Mars. But this need not be a bad thing. As long as Earth and Mars each have strong, democratic, planetary governments that can keep advanced technologies under control in their own jurisdictions, there will be little to worry about. The long distances between planets (let alone between star systems) will strongly discourage war. And if war does break out, no one planet will be strong enough to annihilate or conquer all the others. Once humanity spreads out across the galaxy, the species will truly be secure. The vast distances of space will ensure that humanity will once again be unable to destroy itself— even if it wanted to.
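
The claim about communication delays is easy to check against well-known figures. A quick sketch, using the commonly cited extremes of the Earth–Mars distance:

```python
# One-way light delay between Earth and Mars. Distances are the
# commonly cited approximate extremes of the Earth-Mars range.
SPEED_OF_LIGHT_KM_PER_S = 299_792.458
CLOSEST_KM = 54.6e6    # ~54.6 million km at a close approach
FARTHEST_KM = 401e6    # ~401 million km near superior conjunction

for label, distance_km in (("closest", CLOSEST_KM),
                           ("farthest", FARTHEST_KM)):
    minutes = distance_km / SPEED_OF_LIGHT_KM_PER_S / 60
    print(f"{label}: ~{minutes:.0f} minutes one-way")
# closest: ~3 minutes, farthest: ~22 minutes
```

A round-trip exchange therefore takes between roughly six and forty-five minutes, which rules out governing one planet from the other in real time.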

Why Automation Will Kill Capitalism Forever

In the past few years, journalists, scientists, and tech CEOs alike have begun to sound the alarm about the disruptive effects that upcoming advances in robotics and artificial intelligence will have on the job market. The introduction of self-driving cars alone will result in over 4 million job losses over the next two decades, as truckers and bus drivers are replaced by autonomous vehicles. Robots and computer kiosks are already replacing jobs in food service and retail, and machine learning algorithms are even starting to replace skilled white-collar workers, like accountants, middle managers, and programmers. When it comes to automation, there really is no place to hide.

Automation in historical context

Of course, automation isn’t a new phenomenon— technology has led to dramatic job losses before, particularly in industries like agriculture and manufacturing. We can roughly group the history of automation into three major “waves.”

First, the Industrial Revolution led to a precipitous decline in the share of the population working on farms. For all of world history up until the 18th century, well over half the workforce was directly employed in food production. But the introduction of machinery into agriculture dramatically increased productivity, freeing up farm laborers to work in manufacturing.

The next wave came after World War II. Technological advances greatly increased the productivity of factories, which freed up industrial workers to work in new service jobs. These service jobs largely consist of mental labor, such as reading, writing, and planning; interpersonal labor, such as interacting with customers; and light physical labor that requires dexterity, such as preparing food.

The problem is that services can be automated, too. Contemporary advances in robotics and artificial intelligence are taking aim at exactly the skills that service jobs require: planning and pattern recognition, interacting with humans, and manipulating objects in complex and changing environments. This really will be the final wave of automation. Once machines come to dominate the service sector, humans simply won’t have any useful skills left that can’t be performed more efficiently and more cheaply by machines.

Now, many reasonable people want to hold onto the idea that humans are irreplaceable. Can machines really become as creative, intelligent, and sophisticated as human beings?

The answer to this question is yes. Science tells us that at the end of the day, humans are machines. We’re immensely complex, fleshy, biological machines, but we are machines nonetheless. There’s nothing a human can do that can’t ultimately be replicated by a machine, given enough engineering and research effort. It’s precisely the profound intelligence and creativity of human beings that allows us to understand the secrets behind our own capabilities, and design machines that can surpass us in many ways.

Peak automation

While machines will likely become more capable than humans in all domains by the end of this century, we won’t have to wait that long to see immense disruptions in the job market and society as a whole due to automation. So far, whenever automation has led to job losses in one sector, markets have adjusted by introducing new jobs in another sector. The problem is that we know this pattern cannot continue indefinitely. There will be a point at which the further introduction of automation technology will result in long-term net job losses for the economy as a whole.

We can call this point “peak automation.” Firms will lay off workers, and many of these workers will simply find that there is no employment to be had for them. All available job openings will require skills that they do not possess, and cannot afford to acquire. The long-term unemployed population will gradually increase, and this will in turn lead to a reduction in consumption spending and aggregate demand. Declining demand will prompt further layoffs, leading to further reductions in demand, in a downward spiral. Investor confidence will collapse, and a deep recession or depression will result.
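
To make the feedback loop explicit, here is a toy simulation of the spiral. Every parameter is invented for illustration; this is a sketch of the mechanism, not a calibrated economic model.

```python
# Toy model of the automation-driven downward spiral: layoffs cut
# demand, and weak demand triggers further layoffs. All parameters
# are invented for illustration; this is not a forecast.
def demand_spiral(employment=0.95, automation_shock=0.05, rounds=5):
    employment -= automation_shock           # the initial wave of layoffs
    for round_number in range(1, rounds + 1):
        demand = employment                  # spending tracks employment
        # Firms shed jobs in proportion to the demand shortfall.
        employment -= 0.5 * (1.0 - demand) * employment
        print(f"round {round_number}: employment at {employment:.1%}")

demand_spiral()
```

However the parameters are chosen, the structure is the same: each round of layoffs weakens the demand that justified the remaining jobs.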

As always, states will find that the best way to get the economy up and running again is Keynesian deficit spending. But fiscal stimulus alone will not solve this crisis. Unacceptably high levels of unemployment will be a recalcitrant feature of the new economy, because unskilled laborers simply will not be needed in large numbers, and there will be diminishing returns on productivity gains from adding additional skilled workers. Bringing the economy to full capacity will require more substantial state interventions in the market, including state-mandated reductions in the working week, expanded social programs to prop up demand, and tuition-free higher education and job training programs.

Of course, many states will be reluctant to take such left-wing measures to address the crisis— the wealthy will certainly lobby strongly against them. But countries that take a more left-wing approach will tend to economically outperform those that do nothing. And pressure from the electorate will become intense as more and more workers are laid off.

The unemployable population

As automation continues, states will face competitive pressure to keep as much of their population as possible gainfully employed in the jobs that remain: research scientists, engineers, and managers. The demands of efficiency will favor the nationalization of the most thoroughly automated industries. Higher education will take up a very large portion of GDP as governments try to funnel as many workers as possible into STEM-related fields. But we must recognize that not everyone is cut out for, or interested in, becoming a scientist, an engineer, or a manager. The state will have to figure out what to do with the rest of the population: the people who no longer need to work.

Of course, as decent, reasonable people, we would all like to make sure that the unemployable population has all of its basic needs taken care of, rather than being left to starve on the streets. Luckily there will be a strong economic rationale for the state to do the right thing here, since it will be necessary to prop up consumer demand. We can imagine, however, that some states might opt for a much darker solution to this problem.

There is an uncomfortable truth here, though. As long as it’s necessary for some skilled workers to be employed, these workers will need to be given privileges or advantages over the unemployable population in order to incentivize them to work. We cannot count on pure altruism or a sense of national duty to motivate the scientists, engineers, and managers of the future. This means that a new kind of class division might emerge, between those who work and are given privileges for doing so, and those who live off of the state. A caring, left-wing government should do its best to minimize the inequality between these classes and facilitate a high degree of mobility between them, while working to accelerate progress in automation in order to hasten the end of class divisions once and for all.

Why a UBI isn’t good enough

It’s striking to note that even many of the wealthiest businesspeople in the world, like Elon Musk and Richard Branson, have recognized that a radical change in the economy will be necessary in order to adapt to the next wave of automation. The most popular policy prescription is a universal basic income (UBI), a government program that would provide a livable income to every citizen regardless of employment or financial means. Most of these pro-UBI billionaires hope that this policy would allow markets and private ownership of capital to continue indefinitely; it is effectively a band-aid solution to adapt capitalism to an increasingly jobless world.

But there are good reasons to believe that capitalism and a UBI can’t coexist for long. Such an arrangement would likely lead to a great amount of civil unrest and social instability. Class divisions would be made much more obvious and grotesque in such a scenario, and the unemployable majority would look at their trillionaire overlords with envy and disgust. It would quickly become clear to everyone that the owners of capital are not providing anything of use to society, and are simply extracting rents at the expense of the general population.

The capitalists, on the other hand, wouldn’t approve of being taxed at high rates in order to give handouts to the unemployable. They would prefer a situation in which the wealthy could simply trade amongst themselves. Even today, most of the tech entrepreneurs speaking out in favor of a UBI don’t want to fund it by raising taxes on themselves; they advocate replacing the entire existing social safety net with a meager cash payment.

But the wealthy cannot hold onto power forever. The rich may seem invincible now, but they only have power so long as the state continues to enforce their claims on property. If the institutions of parliamentary democracy and universal suffrage survive the turmoil, the masses will use them to wrest power away from big business. We will use state power to bring society’s resources and machinery into public ownership, so that they can be managed democratically to further the interests of all humanity. Everyone will be provided everything they need— not just to live, but to thrive and pursue their dreams and passions in a world of freedom and abundance. This utopian, Star Trek-like future is called democratic socialism. It is the stage in history when humanity will finally grow out of its infancy.