In this explosive episode, John Sherman, Liron Shapira, and Michael from Lethal Intelligence dissect shocking admissions from top AI leaders: they’re terrified of what they’re building — but won’t stop. What happens when fear and profit collide?
It’s one thing for outsiders to warn about AI risk.
It’s another when the people building it start admitting they’re afraid.
In this episode of Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence unpack a chilling new reality: AI leaders — from Anthropic’s Jack Clark to Elon Musk — are confessing they’ve lost control of the systems they’re creating.
But rather than slowing down, they’re doubling down. This episode cuts through the noise to reveal how fear, profit, and ego are now driving humanity’s most dangerous race.
Anthropic co-founder Jack Clark compared his own AI to “a mysterious creature” — something alien, growing beyond comprehension.
Elon Musk said he’s “warned the world,” yet insists on making his own AI “less woke” instead of less dangerous.
The group calls this the new paradox of power: fear without restraint.
When even the architects of superintelligence say they’re scared, it’s not a sci-fi concern — it’s a leadership collapse.
Inside top labs, AI safety teams are being sidelined, restructured, or dissolved.
The reason is simple: safety slows down product launches.
And in the race to dominate AI markets, delay equals defeat.
Liron explains that this is how “alignment theater” replaces real safety work.
Companies know they must look responsible — publish safety papers, host panels — but behind closed doors, model scaling continues unchecked.
As Michael notes: “You can’t regulate what’s happening inside a black box that prints money.”
Governments are trying to respond — but most of the rules being written come from the very companies being regulated.
That’s not safety policy; that’s corporate self-defense.
John compares it to the early days of nuclear testing: the scientists warned, the generals ignored, and only after catastrophe did public outrage force limits.
Except this time, there might not be an “after.”
The episode dives into the psychological roots of the problem.
These CEOs don’t see themselves as reckless — they see themselves as history’s chosen engineers, bringing humanity to the next phase of evolution.
The irony? They fear what they’re doing, yet believe only they can do it “safely.”
It’s not coordination — it’s delusion at scale.
Liron calls it “the hero myth meets extinction risk.”
The hosts argue that true AI regulation can't come from voluntary pledges or self-policing. Anything less is window dressing.
John makes the case that citizens are now the last guardrail.
If we wait for Silicon Valley to self-regulate, we’re spectators in our own extinction story.
The movement must come from outside the labs — parents, workers, voters, educators — demanding that governments step in before AI development crosses the irreversible threshold.
The episode’s message is stark but clear:
AI leaders admit they’re scared, but they’re still building.
That means it’s up to everyone else to act.
The race to superintelligence isn’t just a tech story — it’s a survival story.
And the ending is still unwritten.
📺 Watch the full episode
🔔 Subscribe to the YouTube channel
🤝 Share this blog
💡 Support our work at guardrailnow.org
The AI Risk Network team