John Sherman, Liron Shapira, and Michael from Lethal Intelligence expose the uncomfortable truth: AI leaders admit they can’t stop what they’re building. In this episode, they dissect the incentives, moral fog, and illusion of control driving humanity toward an AI cliff.
What happens when the people building superintelligent AI admit they can’t control it?
In this episode of Warning Shots, John Sherman is joined by Liron Shapira (Doom Debates) and Michael (Lethal Intelligence) to analyze a disturbing pattern emerging among industry leaders: open confessions that they’re creating something potentially uncontrollable — and doing it anyway.
The conversation dives deep into the psychology, incentives, and geopolitics fueling a race where everyone claims to want safety, yet no one is actually hitting the brakes.
Anthropic’s Jack Clark compared his own company’s AI to a “mysterious creature” — something he’s afraid of but can’t stop building.
Elon Musk admits his approach is to make his version of AI “less woke,” as if moral framing could solve an existential problem.
As John puts it: “When your smartest minds are scared, and your richest ones are racing, you have a coordination failure — not progress.”
Liron highlights how many “AI safety” initiatives are little more than optics management.
Companies publicly fund ethics teams and publish alignment papers while privately scaling capabilities that move us closer to uncontrollable systems.
It’s not hypocrisy — it’s incentives. Shareholders reward speed, not reflection.
The trio calls it “safety theater”: perform concern in public, optimize profits in private.
Michael argues that most researchers inside labs like OpenAI and DeepMind aren’t villains — they’re trapped.
They genuinely care about safety but believe if they don’t build it, someone else will. It’s a moral prisoner’s dilemma playing out across the entire industry.
This resignation — “someone has to do it” — may be the most dangerous mindset of all.
The group unpacks how attempts at regulation often get co-opted by the very companies they’re meant to restrain.
Executives testify before Congress, promising transparency, while writing the rules that protect their dominance.
Meanwhile, lawmakers struggle to keep up. John likens it to “having toddlers design the seatbelts on a rocket ship” — we’re accelerating before understanding the controls.
Beyond ideology, two forces keep the race alive: commercial pressure and geopolitical competition. Investors reward whoever ships first, and the fear of falling behind rival nations makes slowing down feel impossible.
The result? A feedback loop where existential risk becomes background noise. The very idea of extinction turns into a meme instead of a red line.
The hosts warn that, left unchecked, frontier AI could destabilize entire systems.
Liron sums it up bluntly: “You don’t need malicious AI to destroy the world — you just need misaligned incentives.”
John closes with a challenge: “If Congress won’t lead, the public must.”
AI risk isn’t an abstract or elite concern — it’s about keeping humans in charge of their own future.
That means demanding independent oversight, transparency, and a pause on reckless deployment. The clock is ticking, and waiting for the “experts” to save us isn’t an option.
The episode leaves one clear takeaway: no one is steering the ship.
AI companies are moving faster than their conscience, and governments are still searching for the wheel.
Until incentives change, it will be ordinary people — parents, voters, citizens — who decide whether this technology serves humanity or replaces it.
📺 Watch the full episode
🔔 Subscribe to the YouTube channel
🤝 Share this blog
💡 Support our work at guardrailnow.org
The AI Risk Network team