Inside the AI Power Struggle: Why No One’s Actually in Control — Warning Shots #13

John Sherman, Liron Shapira, and Michael from Lethal Intelligence expose the uncomfortable truth: AI leaders admit they can’t stop what they’re building. In this episode, they dissect the incentives, moral fog, and illusion of control driving humanity toward an AI cliff.

Written by
The AI Risk Network team

Introduction

What happens when the people building superintelligent AI admit they can’t control it?

In this episode of Warning Shots, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to analyze a disturbing pattern emerging among industry leaders: open confessions that they’re creating something potentially uncontrollable — and doing it anyway.

The conversation dives deep into the psychology, incentives, and geopolitics fueling a race where everyone claims to want safety, yet no one is actually hitting the brakes.

What We Explore in This Episode

  • AI CEOs admitting they’ve lost control
  • “Safety theater” and regulatory capture in Silicon Valley
  • Why well-intentioned researchers still push forward
  • The profit and ego incentives behind the AGI race
  • Whether society can ever enforce real limits on AI
  • What happens when extinction risk becomes background noise

The Fear Behind the Curtain

Anthropic’s Jack Clark compared his own company’s AI to a “mysterious creature” — something he’s afraid of but can’t stop building.
Elon Musk’s answer, meanwhile, is to make his version of AI “less woke,” as if moral framing could solve an existential problem.

As John puts it: “When your smartest minds are scared, and your richest ones are racing, you have a coordination failure — not progress.”

Safety Theater and the Illusion of Control

Liron highlights how many “AI safety” initiatives are little more than optics management.
Companies publicly fund ethics teams and publish alignment papers while privately scaling capabilities that move us closer to uncontrollable systems.

It’s not hypocrisy; it’s incentives. Shareholders reward speed, not reflection.
The trio calls it “safety theater”: perform concern in public, optimize profits in private.

Why Good People Keep Building Dangerous AI

Michael argues that most researchers inside labs like OpenAI and DeepMind aren’t villains — they’re trapped.
They genuinely care about safety but believe that if they don’t build it, someone else will. It’s a moral prisoner’s dilemma playing out across the entire industry.

This resignation — “someone has to do it” — may be the most dangerous mindset of all.

Regulatory Capture and the Missing Adults

The group unpacks how attempts at regulation often get co-opted by the very companies they’re meant to restrain.
Executives testify before Congress, promising transparency, while writing the rules that protect their dominance.

Meanwhile, lawmakers struggle to keep up. John likens it to “having toddlers design the seatbelts on a rocket ship” — we’re accelerating before understanding the controls.

The Economic and Psychological Engines

Beyond ideology, two forces keep the race alive:

  • Economic pressure — whoever reaches AGI first wins trillions.
  • Ego — tech founders see themselves as history’s protagonists.

The result? A feedback loop where existential risk becomes background noise. The very idea of extinction turns into a meme instead of a red line.

What Happens If We Do Nothing

The hosts warn that, if left unchecked, frontier AI could destabilize the systems society depends on:

  • Job markets collapsing faster than they can adapt
  • Truth itself eroding under synthetic media
  • Autonomous agents making irreversible decisions

Liron sums it up bluntly: “You don’t need malicious AI to destroy the world — you just need misaligned incentives.”

Why Citizens Must Step In

John closes with a challenge: “If Congress won’t lead, the public must.”
AI risk isn’t an abstract or elite concern — it’s about keeping humans in charge of their own future.

That means demanding independent oversight, transparency, and a pause on reckless deployment. The clock is ticking, and waiting for the “experts” to save us isn’t an option.

Closing Thoughts

The episode leaves one clear takeaway: no one is steering the ship.
AI companies are moving faster than their conscience, and governments are still searching for the wheel.

Until incentives change, it will be ordinary people — parents, voters, citizens — who decide whether this technology serves humanity or replaces it.

Take Action

📺 Watch the full episode
🔔 Subscribe to the YouTube channel
🤝 Share this blog
💡 Support our work at guardrailnow.org

The AI Risk Network team