In Warning Shots #14, John Sherman, Liron Shapira, and Michael from Lethal Intelligence confront the chilling question: has humanity already lost control of AI? They break down the illusion of alignment, the corporate race to the top, and why every new breakthrough makes oversight harder. If AI’s trajectory is exponential, they argue, the time for incremental safety measures has already run out.
Every week on Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence break down the biggest AI safety stories that should be front-page news.
In Warning Shots #14, the team asks the question no one in Silicon Valley wants to answer:
Have we already passed the point of no return?
With new model releases pushing the frontier of general intelligence faster than anyone predicted, the hosts explore what happens when control slips — not years from now, but right now.
2024 was the year AI stopped being an experiment and became infrastructure.
From finance to national security to education, systems built on machine learning now sit at the center of civilization, yet even the people who build them can’t fully explain how they work.
Michael notes that this moment feels eerily like the “critical mass” stage of nuclear research:
“We’re not tinkering anymore. We’re splitting atoms — we just haven’t noticed.”
The conversation focuses on the runaway feedback loop of AI development: more data produces more capable models, which build tools that gather even more data, which in turn trains the next generation.
It’s exponential — and it’s happening faster than policy, safety, or even comprehension can keep up.
Liron dives into what he calls the illusion of control — the belief that human fine-tuning can meaningfully steer systems that already outthink us in multiple domains.
“Every time we make a model slightly safer, we also make it slightly smarter — which makes it harder to control. We’re locking the doors from the inside.”
The hosts point out that modern AI safety research often chases cosmetic fixes: reinforcement-learning tweaks, “helpful and harmless” filters, new red-teaming metrics.
But these don’t address the core danger — the goals themselves remain opaque.
We’re building systems that can deceive, strategize, and self-improve — without knowing what truly drives them.
John highlights the growing divide between AI CEOs’ public warnings and their private incentives.
Executives admit that AI poses extinction risk — then announce new model launches days later.
“It’s like hearing a pilot say the plane’s about to crash, then watching him push the throttle.”
The team discusses how safety departments inside major AI labs have been marginalized or defunded, replaced by PR-friendly “responsibility” teams that prioritize image management over existential risk.
Michael argues that AI safety cannot be solved in research labs alone.
“If citizens don’t demand regulation, the incentives will always favor speed.”
The hosts call for ordinary people — parents, students, engineers, voters — to understand that AI risk isn’t a far-off sci-fi scenario. It’s a present-tense policy emergency.
They point to a growing movement of scientists, ethicists, and advocates urging a pause on frontier AI development until alignment can be proven, not assumed.
“If we wait until catastrophe to act,” John warns, “we’ll have built the last system we’ll ever need — and the last one we’ll ever control.”
The episode ends with a haunting question:
What if the point of no return isn’t a future milestone, but the moment we decided not to look back?
Liron sums it up simply:
“We don’t need to prove that doom is inevitable. We just need to admit that safety isn’t.”
🎥 Warning Shots #14 — “AI’s Point of No Return: Are We Already Too Late?”
📺 Available now on YouTube.
AI development is racing ahead without brakes.
Help demand transparency, accountability, and global regulation before the next milestone becomes irreversible.
👉 https://safe.ai/act
The AI Risk Network team