13 existential risks that have the potential to bring an end to humanity


Shutterstock/NASA

An illustration of comets and asteroids striking planet Earth.

Human extinction is something that most people probably try not to dwell too much on, but a team at Cambridge University are devoting their lives to studying the matter.

Academics, lawyers, scholars, and philosophers at the university’s Centre for the Study of Existential Risk (CSER) are specifically focusing on how best to mitigate risks that could lead to human extinction or civilisational collapse.

Haydn Belfield, an academic project manager at CSER, told Business Insider that he and his peers are looking at risks associated with artificial intelligence, biotechnology, the environment, and to a lesser degree, natural risks.

“These are ‘Global Catastrophic Risks’ — defined as those that could kill at least 10% of the global population, or cause equivalent damage,” he said.

Here’s a roundup of the areas they’re focusing on:

An asteroid hitting Earth

The Earth is constantly hit by asteroids, but most are harmless: they are small enough to break up into dust or tiny grains as they pass through the atmosphere. Every now and again, however, a larger asteroid collides with the planet.

Asteroids with a diameter of less than 1m hit the Earth every year, those with a diameter in excess of 100m strike roughly every thousand years or so, and those with a diameter in excess of 10km hit the Earth only every 100 million years, according to Wired.

Type of risk: Natural (CSER is less concerned about these)

A supervolcano

Quietly lurking beneath Yellowstone National Park in the US is a “supervolcano” that has the potential to wipe out humans if it ever erupts.

There are several other “supervolcanoes” — volcanoes capable of an eruption that produces more than 240 cubic miles of magma — around the world that scientists and volcanologists are constantly watching, including the one at Lake Toba in Indonesia.

There have been three major volcanic eruptions at Yellowstone in the last 2 million years, according to the National Park Service (NPS), and two of them can be considered supervolcano events. The NPS writes on its website that another supervolcano eruption is possible but very unlikely in the next 1,000 or even 10,000 years.

The vast quantities of lava and ash that are spewed out of the Earth during supervolcano events have the potential to cause long-lasting climate change that could trigger a life-threatening ice age or global warming.

Type of risk: Natural (CSER is less concerned about these)

Solar flares

Explosions on the surface of the Sun can result in solar flares that have an impact on Earth. Every now and again, these solar flares are big enough to trigger geomagnetic storms that disable satellites and take out terrestrial power grids.

There is a one in eight chance of a catastrophic solar flare striking Earth by the year 2020, according to Wired.

Type of risk: Natural (CSER is less concerned about these)

Catastrophic climate change

Rising global temperatures could have potentially deadly knock-on effects for humans.

Nonlinear climate change happens when complex climate systems rapidly shift into a different “mode,” faster than humanity and other natural systems can adapt.

Type of risk: Environmental

Ecosystem collapse

The Earth’s ecosystems only flourish when a number of variables come together in just the right way. When one or more of these variables are disrupted by a major event, there is potential for ecosystem collapse, which reduces an ecosystem’s ability to support life and can even result in mass extinctions. Resilient ecosystems can, however, rebound if the right conditions are present.

Type of risk: Environmental

Geoengineering accident

Geoengineering is the artificial modification of the Earth’s climate systems, often in a bid to remove carbon dioxide from the atmosphere and slow down global warming.

One way of doing this is to use large machines that remove carbon dioxide directly from the air and store it elsewhere. Humans have also looked at seeding clouds to change the amount or type of precipitation that falls.

“Playing God” and attempting to modify the world’s weather patterns could have potentially devastating impacts, according to researchers at the Helmholtz Centre for Ocean Research Kiel, Germany, who published their findings in Nature Communications.

They modelled five potential geoengineering techniques (afforestation, artificial ocean upwelling, ocean iron fertilisation, ocean alkalinisation, and solar radiation management) and concluded that geoengineering could add chaos to complex and not fully understood weather systems.

Type of risk: Environmental

Nuclear winter

A nuclear winter could occur if the world were ever to end up in an all-out nuclear war. Detonating dozens of nuclear warheads would ignite fires producing enough soot to entomb the planet in billowing, black clouds of ash, plunging it into darkness and causing global temperatures to plummet. Those who weren’t killed by the nuclear weapons themselves could end up dying during the nuclear winter that followed.

Type of risk: Environmental

Natural pandemic

A pandemic is a disease outbreak that spreads rapidly through human populations, across continents and around the world. Examples include smallpox, HIV, tuberculosis, and the Black Death, which killed over 75 million people in the mid-14th century, or between 13% and 17% of the global population at that time.

Today, the widespread availability of long distance air travel means that diseases can travel faster than ever before.

Type of risk: Biotech

Genetically engineered pandemic

A genetically engineered virus could kill tens of millions of people, billionaire Microsoft cofounder Bill Gates warned during a panel at the Munich Security Conference in February.

“I am concerned about biological tools that could be used by a bioterrorist,” Gates wrote in 2015 in the New England Journal of Medicine. He also told The Telegraph: “It would be relatively easy to engineer a new flu strain” by combining two versions: one that spreads quickly and one that kills quickly.

Gates added that genetically engineered viruses have the potential to kill more people than nuclear weapons and that they’re easier to make.

Type of risk: Biotech

Genetically engineered crop or livestock disease

Man-made diseases with the potential to wipe out humanity’s key food sources (crops and livestock) could also pose a threat to human populations. With millions of people around the world already on the brink of starvation as a result of famine, a genetically engineered disease that destroys swathes of the food supply is the last thing humanity needs.

Type of risk: Biotech

Non-beneficial ‘narrow’ AI

Narrow AI, also known as weak AI, is software designed to learn and perform one particular task. One example is the AlphaGo algorithm, developed by researchers at Google-owned DeepMind to play and master the ancient Chinese board game Go. Another example is Apple’s personal assistant Siri.

“Weak, or ‘narrow’ AI, is a present-day reality,” wrote Ryan Calo, of the Center for Internet and Society at Stanford Law School, in 2011. “Software controls many facets of daily life and, in some cases, this control presents real issues. One example is the May 2010 ‘flash crash’ that caused a temporary but enormous dip in the market.”

Type of risk: Artificial intelligence

Non-beneficial superintelligence

Highly advanced, self-directing machines of the kind described so chillingly in Nick Bostrom’s book “Superintelligence” are another cause for concern. If these superintelligent computers were to outsmart humans and decide they no longer needed us, we could be in trouble.

“Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent,” according to The Future of Life Institute.

The danger lies in competence rather than malice: a poorly programmed superintelligent machine pursuing the wrong goal could quickly develop sophisticated weapons of all shapes and sizes that could spell the end of humanity.

Type of risk: Artificial intelligence

Unknowns

There are, of course, the unknown risks that we haven’t even thought of yet. By definition, no one knows what they are.

Type of risk: Everything
