Humanoid Robots, AI Math Breakthroughs & Bernie Sanders' Data Center Ban | Warning Shots #35

Congress, humanoid robots, and frontier AI breakthroughs reshape the conversation around superintelligence risk, and what we need to do about it.

Written by
The AI Risk Network team
on
Apr 1, 2026

The Overton Window is Shifting: How AI Risk Moved from Fringe to Congress

A year ago, warning about superintelligent AI ending humanity sounded like science fiction. Today, a U.S. Senator is standing on the Senate floor with a poster showing a 10-20% extinction probability.

That's not a coincidence. It's the Overton window shifting in real time.

In this episode of Warning Shots, John Sherman, Liron Shapira, and Michael break down why the conversation around AI risk has moved from fringe to mainstream, and what that means for governance, technology, and human survival.

Humanoid Robots Aren't Just Cool Demos

The episode opens with footage from the White House: a humanoid robot interacting with the First Lady. On the surface, it looks like a photo op. But beneath the ceremony is a much darker question: when AI systems stop waiting for prompts and start acting autonomously in the physical world, how much control do we actually have?

Michael makes the crucial distinction. According to him, these aren't just impressive party tricks. "The robot is learning to navigate our world. It's walking, gesturing, interacting real-time while the underlying AI brains behind it are scaling up fast towards something far more powerful."

The common refrain from AI skeptics is that large language models and AI systems are "just waiting for the next prompt." They're passive. They live in computers. The White House robot footage demolishes that argument. An autonomous system moving through physical space without constant human instruction is a different category of risk entirely.

Bernie Sanders Breaks the Overton Window

Then comes the political earthquake. Senator Bernie Sanders held a press conference demanding a data center moratorium and explicitly raising extinction risk. He stood before Congress with charts and warnings, the kind of thing that would have been laughed out of the room two years ago.

Liron points out the magnitude of this shift: "If you told us in 2023 that in a couple of years some of the most popular senators in Congress would be like 'we need a data center ban because AI is out of control,' we'd be stunned. It seems like civilization is waking up and not wanting to die."

But here's the complexity: Sanders is considered a political outlier. He's a self-declared democratic socialist. His push for strict regulation comes wrapped in his broader skepticism of tech oligarchs and capitalism. That framing risks turning AI safety into a partisan issue when it should be universal.

Still, Liron notes there's a larger coalition emerging. According to the team, roughly 40 members of Congress have made statements indicating serious concern about AI extinction risk, far more than just Sanders. The groundswell is real.

Frontier Math: When AI Surpasses Human Intelligence

One of the episode's most significant technical discussions centers on frontier mathematics. Open problems that mathematicians have struggled with for years have begun to fall to AI systems. This isn't homework. These are unsolved research problems at the frontier of human knowledge.

Michael emphasizes why this matters: "Mathematical reasoning is one of the cleanest tests of genuine intelligence we have. So it's abstract, precise. When models start clearing these hurdles, they're not just getting smarter at homework. They're gaining the exact kind of flexible, inventive thinking that lets them tackle the next big challenge-which is basically improving themselves."

Once AI systems can do AI research better than humans, the timeline collapses. We move from "tools that can help us" to "agents that can outpace us," and the velocity accelerates from decades to years to months.

The Goalpost Shift Nobody's Talking About

Liron brings up a pattern that's been repeated endlessly: as AI achieves what skeptics said it couldn't, the skeptics simply move the goalpost. Someone claims AI can't do novel mathematics. Then it does. So they claim it can't create billion-dollar businesses. Then it might. The line between "AI is just an autocomplete" and "AI is superintelligent" keeps moving, and nobody seems to know where it actually is.

"I'm open to the idea that there's a line," Liron says, "but nobody knows where the line is. And it's basically just the creep is crazy."

Jensen Huang's Immortality Fantasy

The team addresses Nvidia CEO Jensen Huang's recent statements: "AGI is already here" and "we're all going to live forever." Michael interprets this through the lens of power and mortality. When you've already won the game of money and influence, death becomes the last opponent.

But Michael raises a sobering point: even if consciousness uploading becomes possible, there's no guarantee a superintelligent system will preserve your humanity in the digital version. It might create a copy that looks and acts like you, but through iterative optimization for purposes you don't understand, it could strip away everything that makes you human.

It's not immortality. It's replacement dressed up as transcendence.

The Robot Dentist and Capability Creep

Toward the end, the team discusses a robot dentist that completed dental surgery in 15 minutes instead of the 2 hours a human dentist would need. Published in a peer-reviewed journal, it's a legitimate breakthrough.

But Michael sees it as a warning signal: "It's not just a cool data upgrade. It's a visible crack in the wall separating narrow AI that helps in controlled settings from the kind of super-intelligent systems that will soon operate at scale we cannot fully predict or contain."

Scale that up. Connect multiple robots across hospitals. Give them broader decision-making power. Link them to smarter general AI. Now you have tools that touch the most vulnerable parts of our lives: our bodies, our health data, our literal survival.

What Happens Next?

The episode captures a genuine moment of inflection. AI risk has moved from academic speculation to congressional floor speeches. The public is waking up. But the pace of policy change is glacially slow compared to the pace of AI capability growth.

As Michael puts it: "We need to prove these systems are controllable before we let them come out and be smarter than us. And we need to move much faster than this if we want to have a chance."

The conversation isn't theoretical anymore. It's urgent. It's political. And for the first time, it's actually being heard.

Take Action: The AI safety movement needs public support to push for responsible governance. Learn more at https://safe.ai/act

The AI Risk Network team