
Beyond the GPT Hype: Why public action is our best defense against AI extinction risk

GPT-5 sets new benchmarks, but its release highlights a bigger issue: a few companies are rapidly advancing toward AGI without adequate safety measures or public oversight.

Written by
Lindsay Langenhoven
on
Aug 8, 2025

Yes, OpenAI has released GPT-5, and the hype is real: live, loud, and repetitious. Its creators hail it as the “smartest, fastest, and most useful model we've ever built.” But what do its added performance and capabilities really mean for the world? Does “delivering state-of-the-art performance across coding, math, writing, health, and more, all in a single, unified experience” take us even closer to Artificial General Intelligence (AGI) or superintelligence, and to a cliff edge for our civilization?

Ultimately, it’s about balancing power and risk, and about who gets to decide how far and how fast we race toward that edge.

Some concerns hiding behind the hype
While everyday users enjoy the convenience of new AI capabilities and features, a handful of companies are quietly steering us toward systems as smart as, or even smarter than, humans.

Moving closer to AGI or superintelligence isn’t just a tech milestone; it’s a pivotal moment in the history of humankind. It could reshape jobs, economies, national security… and our entire civilization.

To date, the public has had little to no say in how these breakthroughs are developed, deployed, or governed, even though this race has the very real potential to wipe us all out.

Are we really ready? 

Rapid advances in technology carry risks we’re not ready for. The companies developing these models openly admit they could kill us, yet they keep making them stronger, with far less focus on making them safer. What’s more, humanity’s future is being negotiated without humanity in the room.

While these developments can feel overwhelming, we are not without hope. Powerlessness is a feeling—not a fact. Everybody can have a say in our future and demand safer AI. AI that aligns with our values and benefits humankind!

There have been countless moments in history when public voices shaped the rules, from environmental protections to internet privacy laws. Steering the human + AI narrative can be one of those moments too. But we need to act before it’s too late.

Here’s how you can have your say in humanity’s future today:

  • For organizations: The AI Now Institute’s People’s AI Action Plan is a strong push for AI policies shaped by people, not just profit. The list of signatories includes dozens of civil society organizations, Nobel laureates, whistleblowers, and public figures, including Geoffrey Hinton, Sir Stephen Fry, Stuart Russell, and Max Tegmark. – Add your voice
  • For individuals:
    • Demand safer AI from your elected officials in under a minute with this tool from the Center for AI Safety—before the technocracy outpaces our control. – Have your say
    • The Midas Project and Encode’s Open Letter to OpenAI demands transparency from OpenAI as it quietly restructures from a nonprofit to a for-profit model. Is it ditching its mission to build AI that benefits humanity? – Sign now

This future isn’t locked in. But the window to influence it is closing fast. We have less than 100 weeks to demand strong AI safety laws from our public leaders—before it’s too late.

Join our YouTube community and help shape the future of humanity: https://www.youtube.com/@TheAIRiskNetwork

And sign up for our newsletter to stay in the know about AI extinction risk: https://www.guardrailnow.org/#support

Featured image credit: ©metamorworks from Getty Images via Canva.com

Lindsay Langenhoven

Content Writer