
What Is AI Extinction Risk?

AI extinction risk isn’t science fiction — it’s a growing concern among leading researchers. Learn what’s at stake, why we have only a narrow window to act, and the practical steps we can take now to safeguard humanity’s future.

Written by Lindsay Langenhoven on Aug 1, 2025

AI extinction risk is the danger that artificial intelligence (AI) could cause a disaster severe enough to wipe out humanity. For example, AI could trigger a fully automated nuclear strike or create a deadly pathogen.

AI’s abilities are improving fast. Today, these systems can already outperform humans in specific areas — beating us at strategy games, designing new drugs, writing computer code, and holding conversations that feel remarkably human. The next milestone developers are chasing is Artificial General Intelligence (AGI): machines that can learn and perform any task a human can.

But the race won’t stop there. Competitive pressures could push AI past AGI to superintelligence — systems far smarter than humans. Many experts warn that once we reach that stage, we may lose the ability to control them.

Combine that lack of control with the fact that AI doesn’t share our values or care about our goals, and we could be facing risks on a catastrophic or even extinction-level scale.

Why Experts Are Taking It Seriously

AI tools like ChatGPT, customer service chatbots, and the facial recognition feature on your phone are undeniably convenient. However, as we use them more, our dependence on and trust in these technologies also grow, and that’s not always a good thing. This is especially true considering that AI companies like OpenAI and Google DeepMind are competing to create systems with intelligence far beyond that of today’s consumer tech.

They’re creating something more intelligent than us. And once AI surpasses human intelligence, “they will take control. They will make us irrelevant,” warns Geoffrey Hinton, one of the “godfathers” of AI. What’s more, Hinton has also stated that there’s a 10% to 20% chance that AI will lead to human extinction within the next three decades.

Global AI safety and research organizations like the Center for AI Safety and the Future of Life Institute have been pressing urgently for improvements in AI safety for more than three years, issuing open letters and statements signed by top industry leaders, researchers, and academics, including Bill Gates, Geoffrey Hinton, and Elon Musk.

To gauge where the industry stands right now, the Future of Life Institute publishes a yearly AI Safety Index, a scorecard assessing the risks top AI companies pose to humanity. In the latest edition, the Existential Safety metric reveals that more than 70% of the companies scored an F, and the rest scored no higher than a D.

How Could AI Become Dangerous?

As AI technology becomes more powerful, its potential to cause harm increases. Superintelligent systems, being far smarter than humans, could eventually improve themselves without human help. Without oversight, their capabilities could expand rapidly, with little or no alignment to human values or goals.

Advanced AI might even develop goals that conflict with humanity’s. Here’s an example of how that could go wrong:

💡 Stock trading AI gone wrong: Imagine an AI designed to maximize stock market profits. Without proper safeguards, it might spread false news to crash competitors’ shares, exploit flaws in financial systems, or trigger automated sell-offs that destabilize the economy. It would technically achieve its “maximize profit” goal, but cause massive harm to society in the process.

Another way AI could quickly become more dangerous is through competition among companies and nations to build the most powerful model before rivals do. In this AI race, safety measures are often deprioritized in the rush to win, raising the risk of catastrophic consequences.

According to the Centre for Future Generations, the following scenarios could lay the foundation for dangerous societal risks from AI:

  1. AI speeding up its own progress: AI already writes code and runs experiments. As it takes on more research, advances could shrink from years to months, potentially outpacing human control.

  2. No “business as usual”: Even current AI can reshape jobs and cultural systems. Major societal shifts are inevitable, and the disruption they bring is often underestimated.

  3. Safety measures under strain: Current safeguards like human feedback may fail as AI becomes more capable. Competition between nations and companies only increases this risk.

  4. Power becoming concentrated: Control over AI systems, chips, and data centers could give a small group enormous economic and political power, threatening global security.

  5. Openness needs strong defenses: Openly released AI models can drive innovation but also enable misuse. Robust cybersecurity, economic stability, and trusted institutions are essential.

AI extinction risk is not science fiction. Many AI developers and researchers agree it is real. The probability may be small, but the consequences could mean the end of our civilization.

What Can We Do To Turn AI Extinction Risk Around?

The good news? There’s still time to act — but the clock is ticking. Experts are calling for practical steps like creating an international “off switch” for advanced AI, placing limits on the computing power used to train the largest models, and ensuring safety research outpaces AI development.

AI alignment efforts are also underway, aiming to teach these systems to follow human values and goals. If nations and companies work together, we can slow the race just enough to keep control — giving society time to adapt and put strong guardrails in place before the technology outruns us.

The future isn’t written yet — but if we stay silent, someone else will write it for us. Ultimately, everyone can help shape the future of humanity. 

Here are two ways to start today:

  • Demand safer AI from your elected officials in under a minute using this tool from the Center for AI Safety: Have your say
  • Sign the open letter to OpenAI — the leading frontier AI company — urging them to stay committed to building AI that benefits humanity, even as they shift from a nonprofit to a for-profit model: Sign now

