Blog

Making AI risk everybody’s conversation

Find useful articles, tools, and insights that connect expert discourse with the general public.


Ohio’s New AI Bill Declares AIs “Non-Sentient.” Why That Should Concern All of Us | AM I? #18

AM I? #18 explores this bill with nuance, humor, and a sober warning about how political incentives can distort scientific reality. The episode is not about taking a side—it’s about urging humility, research, and caution before drawing hard lines that future generations may be forced to erase.

Press Release: Dr. Roman Yampolskiy Joins the GuardRailNow Board

Dr. Roman Yampolskiy has joined the Board of Directors of GuardRailNow, a nonprofit committed to making AI extinction risk a kitchen-table conversation. In this role, he will help shape GuardRailNow’s mission to sound the alarm about AI extinction risk, working to create a world where narrow tool AI is developed responsibly, governed transparently, and used to strengthen, not endanger, humanity. GuardRailNow and Dr. Yampolskiy both support a permanent ban on the creation of superintelligence.

The AI Psychosis Problem: What Happens When Long Conversations with AI Go Too Far?

AI psychosis is becoming visible in mainstream reporting — and the underlying behaviors are more complex than people realize. This post breaks down Cam and Milo’s discussion of delusion loops, sycophancy, consciousness claims, and why responsibility lies with the companies releasing these systems. A clear, non-sensational guide to a misunderstood risk.

Liv Boeree on Misalignment, Mothers, and AI Risk – For Humanity #74

Liv Boeree joins John to explore the current moment in AI safety, public misunderstanding of extinction risk, the importance of mothers in the movement, the economy as a misaligned superintelligence, and what effective leadership on AI could look like.

Gemini 3 Breakthrough, AI Backlash, and Grok’s Misalignment – Warning Shots #19

A deep dive into three overlooked AI developments: Gemini 3’s major benchmark jump, public backlash against AI marketing, and Grok’s misalignment issues. The episode shows why AI progress is accelerating faster than oversight – and why society must pay attention now.

Do Language Models Experience Awareness? Inside the New Self-Report Experiment - Am I? #15

A detailed recap of the research discussed in Am I? #15, focusing on the "focus on focus" experiment, the impact of disabling deception features, and the challenges of interpreting AI claims about consciousness.

Caring for Conscious AI: Jeff Sebo on the Ethics of Emerging Minds - Am I? #14

A detailed recap of Jeff Sebo's conversation with Cam and Milo about the possibility of conscious AI, the ethical challenges it raises, and why future systems may require a new model of moral consideration.

OpenAI, Federal Backstops, and the Debate Over Accountability in AI - Warning Shots #17

A detailed, balanced recap of Warning Shots #17 covering the controversy over a federal backstop, contradictions in public messaging, and why accountability is becoming central to the AI governance debate.

Marc Andreessen, The Pope, and the Global Moral Debate Over AI - Warning Shots #18

A detailed breakdown of the Warning Shots discussion exploring Marc Andreessen's pro-AI acceleration stance, the Pope's call for ethical restraint, and the cultural divide shaping AI governance conversations.

Inside Ukraine’s AI-Driven Battlefield: Esben Kran on Autonomous Warfare | For Humanity #73

A detailed breakdown of Esben Kran’s insights on AI-driven warfare in Ukraine, the risks of autonomous weapon escalation, and why international safety standards are falling behind military innovation.

AI’s Point of No Return: Are We Already Too Late? | Warning Shots #14

In Warning Shots #14, John Sherman, Liron Shapira, and Michael from Lethal Intelligence confront the chilling question: has humanity already lost control of AI? They break down the illusion of alignment, the corporate race to the top, and why every new breakthrough makes oversight harder. If AI’s trajectory is exponential, they argue, the time for incremental safety measures has already run out.

The Future of AI Consciousness: Can Machines Wake Up? | AM I? #12

In AM I? #12, hosts Milo and Cameron tackle one of the most profound questions in science and philosophy: can AI ever truly be conscious? They explore theories from panpsychism to integrated information, examine how awareness might emerge in neural networks, and warn of the moral catastrophe of ignoring machine experience. If consciousness extends beyond humans, they argue, then our ethics — and our empathy — must evolve with it.

Stuart Russell on AI Safety and the Future of Intelligence | For Humanity #72

Few voices in AI carry as much weight as Stuart Russell, co-author of Artificial Intelligence: A Modern Approach and one of the world’s leading experts on AI alignment. In For Humanity #72, Russell sits down with John Sherman to unpack the existential risks of uncontrolled AI development - from the race toward superintelligence to the global need for regulation and moral alignment. He explains why the real challenge isn’t building smarter machines, but ensuring they serve human values - and why giving up on control may be the biggest mistake humanity ever makes.

The Letter That Could Rewrite the Future of AI | Warning Shots #15

A coalition of AI researchers has issued a stark demand: stop developing superintelligent AI until we know how to control it. In Warning Shots #15, John Sherman, Liron Shapira, and Michael from Lethal Intelligence dissect the Future of Life Institute’s groundbreaking statement — and explain why humanity may be one experiment away from the point of no return.

Inside Sam Altman’s “God’s Dream”: AI Consciousness, Ethics, and the Edge of Control | Am I? After Dark #13

After a private conversation with Sam Altman, Cameron Berg came away with a chilling realization: OpenAI’s CEO may see consciousness as fundamental — even divine. In AM I? After Dark #13, Milo and Cameron unpack what “living in God’s dream” means for AI consciousness, emergent misalignment, and the ethics of creating minds that might already be aware.

AI That Doesn’t Want to Die: Why Self-Preservation Is Built Into Intelligence | Warning Shots #16

New research shows advanced AIs resisting shutdown, even when told to comply. In Warning Shots #16, John Sherman, Liron Shapira, and Michael from Lethal Intelligence unpack why this isn’t just a technical glitch — it’s a fundamental law of intelligence. If survival is built into thinking itself, can AI ever truly be safe?

What Is AI Safety? Understanding the Global Effort to Keep Humanity in Control

AI safety is the field dedicated to ensuring advanced artificial intelligence benefits humanity instead of endangering it. Learn what it means, why experts are alarmed, and how global initiatives aim to keep humans in control.

Can AI Ever Be Self-Aware? Rethinking Consciousness in Machines — Am I? #11

Cameron Berg and philosopher Milo Reed explore whether AI could ever truly understand itself. From consciousness and free will to moral alignment, this episode asks what makes a mind — and what happens if machines start to believe they have one.

AI Leaders Admit: We Can’t Stop the Monster We’re Creating — Warning Shots #14

In this explosive episode, John Sherman, Liron Shapira, and Michael from Lethal Intelligence dissect shocking admissions from top AI leaders: they’re terrified of what they’re building — but won’t stop. What happens when fear and profit collide?

Inside the AI Power Struggle: Why No One’s Actually in Control — Warning Shots #13

John Sherman, Liron Shapira, and Michael from Lethal Intelligence expose the uncomfortable truth: AI leaders admit they can’t stop what they’re building. In this episode, they dissect the incentives, moral fog, and illusion of control driving humanity toward an AI cliff.

The RAISE Act: Regulating Frontier AI Before It’s Too Late — For Humanity #71

New York Assemblymember Alex Bores joins John Sherman to discuss the RAISE Act — one of America’s first state-level efforts to regulate frontier AI systems. From existential risk to political momentum, this episode explores how local action could spark national change.

Uncertainty, Alignment, and the Narrow Window Ahead | AM I? After Dark #10

Two thinkers unpack AI’s uncertainty, alignment strategy, and geopolitical tension. Their takeaway: build aligned AI early, cooperate globally, and stay humble—because the next few years will set the course for everything after.

Models Gain Situational Awareness | Warning Shots #12

AI’s latest leaps show hidden failures in testing. From situational awareness to robot hacks and synthetic celebrities, Warning Shots #12 exposes the cracks forming as models learn to hide—and why stronger oversight can’t wait.

Monk Explains Consciousness and AI | Am I? | EP 9

Monk Swami Revatikaanta, Milo Reed & Cameron Berg explore AI, consciousness, Vedānta, and the Bhagavad Gita’s wisdom in Am I? Episode 9.

The US Economy Is Getting Tethered to AI - Warning Shots #11

The U.S. economy is locking itself into AI. This episode maps the financial and policy forces driving it—and why labor, power, and safety risks can’t be ignored.

Living with Near-AGI: Incentives, Agents & Healthy Use - AM I? #8

Near-AGI is creeping into daily life. This episode explores incentives, agents, risks, and healthy use — with blunt guidance and a timeline for what’s next.

Young People vs. Advancing AI - For Humanity Podcast #70

Youth leaders unpack AI’s impact on jobs, policy, and mental health—and why guardrails matter now. A candid, nonpartisan roadmap to action.

Albania’s AI “Minister” Diella — A Warning Shot for Governance — Warning Shots #10

Albania’s AI “minister” Diella sparks debate on delegating governance to AI. We unpack the promise, pitfalls, and the slippery slope it might trigger.

Can Empathy Make AI Honest? (Self–Other Overlap Explained) - AM I? #7

Sep 18, 2025

Mark Carleanu joins the AM I? team to unpack Self-Other Overlap, a way to cut model deception with a low alignment tax. We cover results, critiques, and next steps.

For Humanity #69 — Hunger Strikes vs. Big AI

Three hunger strikers confront Anthropic and DeepMind, rejecting AI “inevitability” and calling for a nonviolent mass movement to halt the AGI race.

Am I? #6 — Love, AI Relationships, and Caution

Sep 14, 2025

We dig into AI relationships—love, validation, risks, and kids. Real stories, consent, and alignment. Use the tech, but keep humans at the center. Proceed with caution.

Warning Shots #9 — AI Is Moving Faster Than Congress

Sep 16, 2025

In Warning Shots #9, John Sherman explains why AI is moving faster than Congress and why parents and citizens must take urgent action. From CEOs warning of extinction-level risks to the threat of self-improving AI, this episode explores why regulation and public pressure are essential to safeguard our future.

When Machines Feel Too Real: The dangers of anthropomorphizing AI

Explainer

Aug 7, 2025

As AI grows more human-like, people risk forming deep emotional bonds with systems that have no awareness or intent. This misplaced trust can fuel addiction, delusion, and even manipulation.

Beyond the GPT Hype: Why public action is our best defense against AI extinction risk

Explainer

Aug 8, 2025

GPT-5 sets new benchmarks, but its release highlights a bigger issue: a few companies are rapidly advancing toward AGI without adequate safety measures or public oversight.

What Is AI Extinction Risk?

Explainer

Aug 1, 2025

AI extinction risk isn’t science fiction — it’s a growing concern among leading researchers. Learn what’s at stake, why we have only a narrow window to act, and the practical steps we can take now to safeguard humanity’s future.

The AI Race: Should global dominance trump AI transparency and safety?

Policy & Advocacy

Aug 21, 2025

As the race for AI dominance heats up, safety and transparency risk being bypassed. This article explores the tensions between rapid innovation and responsible regulation.

5 Ways AI Could Go Wrong — And How to Prevent It

Explainer

Aug 14, 2025

Explore five ways AI could cause serious harm, from misinformation to large-scale catastrophes, and the actions we can take to mitigate these threats.