In Warning Shots #9, John Sherman explains why AI is moving faster than Congress can act and why parents and citizens must take urgent action. From CEOs warning of extinction-level risks to the threat of self-improving AI, this episode explores why regulation and public pressure are essential to safeguard our future.
AI is advancing at breakneck speed, faster than governments, regulators, or even the public can keep pace. In this episode of Warning Shots, John Sherman joins the discussion to explain why unchecked development could put humanity at existential risk, why Congress is lagging behind, and why ordinary citizens, especially parents, must step in to demand action.
This isn’t a story about far-off science fiction. It’s about jobs, democracy, and the very survival of our children in a world racing toward superintelligence.
John Sherman begins with a striking metaphor: a young child walking dangerously along the narrow track of a monorail at Hershey Park while terrified parents scramble below, desperate to stop a tragedy. That raw panic, that instinct to act immediately to protect a child, is the mindset we need for AI. Artificial intelligence is advancing quickly and unpredictably. If we stand by passively, hoping someone else will solve the problem, the consequences could be catastrophic. Just like those parents, society must act fast: the stakes are far higher than a theme park accident; they involve the survival of every family.
👉 Watch the full episode on YouTube to see why this metaphor matters for the future of AI.
What makes this threat impossible to ignore is that even the architects of modern AI are sounding the alarm. Elon Musk estimates a 10–20% chance that AI could cause human extinction within the next decade. Dario Amodei, CEO of Anthropic, gives it 15–25%. Geoffrey Hinton, one of the “godfathers of AI,” has gone further, suggesting the odds could be 50–50. And Sam Altman, head of OpenAI, has admitted the bad case for his work could be “lights out for all of us.” These are not doomsayers on the margins—they are the leaders at the very center of AI development. When builders warn that their own creation might destroy us, it’s time to listen.
One of the most dangerous scenarios isn’t about what AI does today, but what it could do tomorrow, once it begins improving itself. Currently, humans build and refine AI models. But experts warn that when AIs start designing the next generation of AIs, the process could spiral out of control. This “recursive self-improvement” could trigger an intelligence explosion: imagine ChatGPT 9 turning into ChatGPT 9 Billion in a matter of days. At that point, humans would lose the ability to steer or even understand the systems we created. We would be sharing the planet with a new, alien-level intelligence, one that doesn’t necessarily share our values or care whether we survive.
👉 Watch the episode to dive deeper into why this “runaway AI” moment is a bright red line for humanity.
Much of the public conversation about AI focuses on the upside: new cures, productivity gains, and breakthroughs once thought impossible. Those benefits are real. But as Sherman explains, that’s only half the truth. Alongside innovation, AI threatens to eliminate jobs at massive scale, destabilize democracies through disinformation, and even blur the line between truth and fiction. By hearing only about the “opportunities,” society is lulled into a false sense of security—while the existential risks remain unspoken. To make wise decisions, we need the full picture, not just the hype.
👉 Click to watch the episode and see how Sherman separates myth from reality.
Some argue that regulating AI will slow down innovation. But as Sherman points out, innovation without guardrails is reckless. We wouldn’t allow nuclear plants to operate without safety standards or let planes fly without air traffic control. AI is far more dangerous than either. The profit incentives of big tech companies push toward faster, riskier development—not safety. Without strong, enforceable rules, the race toward superintelligence will remain a race toward disaster. Just as with nuclear arms control, global cooperation is essential—especially between the U.S. and China—to avoid an uncontrolled arms race that no one wins.
👉 Watch the episode to learn why regulation isn’t a barrier to progress—it’s the only way forward.
Sherman insists that the most important force for change won’t be tech CEOs or politicians—it will be ordinary citizens. Parents, in particular, have the strongest reason to act: the survival of their children. History shows that big shifts—civil rights, environmental protection, consumer safety—happen when the public demands it. AI safety must be no different. Calling representatives, supporting watchdog groups, raising the issue at schools and community meetings—these small actions add up. If citizens don’t demand accountability, Congress won’t act in time. The clock is ticking, and Sherman reminds us: our kids are counting on us.
👉 Click to watch the episode and see how you can be part of the movement to demand AI safety.
AI is not a far-off science fiction concern—it’s here, it’s moving faster than any government response, and it carries risks that even its creators admit could be catastrophic. As John Sherman emphasizes in this episode, the responsibility doesn’t fall only on Silicon Valley or Congress. It falls on all of us. Parents, communities, and citizens must push for safeguards that ensure our children inherit a future worth living in.
The clock is ticking. The window for action is short. But history proves that when ordinary people demand change, leaders follow.