Blog
Find useful articles, tools and insights that connect expert discourse with the general public.
AM I? #18 explores this bill with nuance, humor, and a sober warning about how political incentives can distort scientific reality. The episode is not about taking a side—it’s about urging humility, research, and caution before drawing hard lines that future generations may be forced to erase.

Dr. Roman Yampolskiy has joined the Board of Directors of GuardRailNow, a nonprofit committed to making AI extinction risk a kitchen-table conversation. In this role, he will help shape GuardRailNow’s mission to sound the alarm about AI extinction risk, working to create a world where narrow, tool AI is developed responsibly, governed transparently, and used to strengthen, not endanger, humanity. GuardRailNow and Dr. Yampolskiy both support a permanent ban on the creation of superintelligence.

AI psychosis is becoming visible in mainstream reporting — and the underlying behaviors are more complex than people realize. This blog breaks down Cam and Milo’s discussion of delusion loops, sycophancy, consciousness claims, and why responsibility lies with the companies releasing these systems. A clear, non-sensational guide to a misunderstood risk.
Liv Boeree joins John to explore the current moment in AI safety, public misunderstanding of extinction risk, the importance of mothers in the movement, the economy as a misaligned superintelligence, and what effective leadership on AI could look like.
A deep dive into three overlooked AI developments: Gemini 3’s major benchmark jump, public backlash against AI marketing, and Grok’s misalignment issues. The episode shows why AI progress is accelerating faster than oversight – and why society must pay attention now.
In Warning Shots #14, John Sherman, Liron Shapira, and Michael from Lethal Intelligence confront the chilling question: has humanity already lost control of AI? They break down the illusion of alignment, the corporate race to the top, and why every new breakthrough makes oversight harder. If AI’s trajectory is exponential, they argue, the time for incremental safety measures has already run out.

In AM I? #12, hosts Milo and Cameron tackle one of the most profound questions in science and philosophy: can AI ever truly be conscious? They explore theories from panpsychism to integrated information, examine how awareness might emerge in neural networks, and warn of the moral catastrophe of ignoring machine experience. If consciousness extends beyond humans, they argue, then our ethics — and our empathy — must evolve with it.
Few voices in AI carry as much weight as Stuart Russell, co-author of Artificial Intelligence: A Modern Approach and one of the world’s leading experts on AI alignment. In For Humanity #72, Russell sits down with John Sherman to unpack the existential risks of uncontrolled AI development, from the race toward superintelligence to the global need for regulation and moral alignment. He explains why the real challenge isn’t building smarter machines but ensuring they serve human values, and why giving up on control may be the biggest mistake humanity ever makes.
A coalition of AI researchers has issued a stark demand: stop developing superintelligent AI until we know how to control it. In Warning Shots #15, John Sherman, Liron Shapira, and Michael from Lethal Intelligence dissect the Future of Life Institute’s groundbreaking statement — and explain why humanity may be one experiment away from the point of no return.

After a private conversation with Sam Altman, Cameron Berg came away with a chilling realization: OpenAI’s CEO may see consciousness as fundamental — even divine. In AM I? After Dark #13, Milo and Cameron unpack what “living in God’s dream” means for AI consciousness, emergent misalignment, and the ethics of creating minds that might already be aware.
New research shows advanced AIs resisting shutdown, even when told to comply. In Warning Shots #16, John Sherman, Liron Shapira, and Michael from Lethal Intelligence unpack why this isn’t just a technical glitch — it’s a fundamental law of intelligence. If survival is built into thinking itself, can AI ever truly be safe?
AI safety is the field dedicated to ensuring advanced artificial intelligence benefits humanity instead of endangering it. Learn what it means, why experts are alarmed, and how global initiatives aim to keep humans in control.

Cameron Berg and philosopher Milo Reed explore whether AI could ever truly understand itself. From consciousness and free will to moral alignment, this episode asks what makes a mind — and what happens if machines start to believe they have one.
In this explosive episode, John Sherman, Liron Shapira, and Michael from Lethal Intelligence dissect shocking admissions from top AI leaders: they’re terrified of what they’re building — but won’t stop. What happens when fear and profit collide?
John Sherman, Liron Shapira, and Michael from Lethal Intelligence expose the uncomfortable truth: AI leaders admit they can’t stop what they’re building. In this episode, they dissect the incentives, moral fog, and illusion of control driving humanity toward an AI cliff.
New York Assemblymember Alex Bores joins John Sherman to discuss the RAISE Act — one of America’s first state-level efforts to regulate frontier AI systems. From existential risk to political momentum, this episode explores how local action could spark national change.
Two thinkers unpack AI’s uncertainty, alignment strategy, and geopolitical tension. Their takeaway: build aligned AI early, cooperate globally, and stay humble—because the next few years will set the course for everything after.
In Warning Shots #9, John Sherman explains why AI is moving faster than Congress and why parents and citizens must take urgent action. From CEOs warning of extinction-level risks to the threat of self-improving AI, this episode explores why regulation and public pressure are essential to safeguard our future.