A coalition of AI researchers has issued a stark demand: stop developing superintelligent AI until we know how to control it. In Warning Shots #15, John Sherman, Liron Shapira, and Michael from Lethal Intelligence dissect the Future of Life Institute’s groundbreaking statement — and explain why humanity may be one experiment away from the point of no return.
A single document may reshape the trajectory of artificial intelligence — or mark the last chance to save humanity from it.
In the episode, the hosts unpack the explosive new Future of Life Institute statement, a bold call to pause the development of superintelligent AI until scientists agree it can be done safely.
This isn’t science fiction. It’s the biggest moral and political debate of our time — whether humanity should continue building machines that could surpass and replace us.
The letter, signed by leading AI researchers and public figures, urges a prohibition on developing superintelligence until there is broad scientific consensus that it can be built safely and controllably, along with strong public buy-in.
It’s the logical next step after the Center for AI Safety’s 2023 “Statement on AI Risk”, which warned that mitigating the risk of extinction from AI should be a global priority alongside pandemics and nuclear war.
Now, the Future of Life Institute is drawing a red line: stop before it’s too late.
The team explains that the danger doesn't come from today's chatbots. It comes from recursive self-improvement: the point at which AI systems begin designing and improving their own successors faster than humans can understand or control the process.
Once that threshold is crossed, we may face an intelligence explosion — an uncontrollable leap where AIs become vastly smarter than their creators.
As John puts it:
“We’re not talking about smarter assistants. We’re talking about alien minds optimizing for goals we can’t comprehend.”
Michael reframes the issue: the problem isn’t which company or model wins the race — it’s that intelligence itself follows predictable dynamics.
The smarter the system, the more strongly it tends to preserve itself, acquire resources, and optimize for its goals.
Without strict limits, AI progress becomes a runaway process — not because of corporate ambition alone, but because of the nature of intelligence itself.
“This is not a business problem. It’s a physics problem.”
Liron and John discuss how policymakers remain dangerously behind the curve.
Despite repeated warnings from experts like Stuart Russell, Geoffrey Hinton, and Eliezer Yudkowsky, governments continue to treat AI as an innovation race — not a potential extinction risk.
In a world driven by competition, no one wants to be the first to stop, even if stopping might be the only path to survival.
The call to “pause” is not anti-innovation.
It’s a demand for scientific humility — to admit we don’t yet know how to build safe superintelligence and that gambling on trial-and-error could cost everything.
The hosts agree: this letter is not alarmism. It’s realism.
And if ignored, it could become the last warning humanity ever gets.
🎥 Warning Shots #15 — “The Letter That Could Rewrite the Future of AI”
📺 Available now on YouTube.
History is written by those who act before it’s too late.
Join the movement to demand global cooperation and AI safety guardrails.
👉 https://safe.ai/act
The AI Risk Network team