When AI Fear Turns to Anger - Warning Shots #38 | GuardRailNow

The hosts of Warning Shots address the violent incidents at Sam Altman's home, Ukraine's first robot-held battlefield, and an AI model threatening global banking encryption - and why peaceful democratic action is the only legitimate path forward.

Apr 16, 2026

When Public Fear Becomes Public Anger - And What the AI Safety Movement Must Do About It

Something shifted this week. Not in a lab. Not in a benchmark. In the street.

Two separate violent incidents targeted OpenAI CEO Sam Altman's home. For a movement built around the idea that AI development poses serious risks to humanity, this moment demands a clear and unambiguous response - not a careful one, not a nuanced one. A clear one.

The hosts of Warning Shots - John Sherman, Liron Shapira, and Michael - opened episode 38 with exactly that. This post covers what they said, why it matters, and what the three other major stories of the week tell us about where things are heading.

Our Position on Violence Is Unconditional

Before anything else, this needs to be stated plainly.

The AI Risk Network, Warning Shots, and GuardRailNow unconditionally condemn all acts of violence against any person - regardless of their role in AI development. There is no argument, no cause, and no circumstance that makes violence an acceptable form of advocacy. The path forward is democratic, legal, and peaceful. Nothing else.

This is not a legal disclaimer. It is a conviction the hosts hold and stated on record, at length, without hedging.

John opened the episode by framing the incidents as "totally unacceptable" and "entirely ineffective." Liron was equally direct: he sees zero nuance here, zero room for sympathy toward the individuals involved, and zero strategic or moral justification for what happened.

Michael added the practical dimension that often gets missed in these conversations: actions like this don't advance the cause of AI safety - they actively set it back. When violence becomes associated with AI risk advocacy, every serious researcher, communicator, and policymaker working on these issues gets painted with the same brush. Positions harden. Dialogue closes. The opposition gains a gift they didn't earn.

The hosts also named a second problem: a segment of public opinion treated these attacks the way some people responded to the killing of a healthcare executive last year - as though targeting a powerful individual is a meaningful form of protest. The hosts reject this framing completely. It does not matter what you think of any individual's decisions within the AI industry. Targeting people with violence is wrong, and celebrating it is wrong too.

Liron made the strategic case clearly: the movement to slow AI development doesn't need weapons. It needs votes, advocacy, informed public pressure, and persuasion. When you have enough people genuinely convinced this issue demands action, you already have everything required to elect the right people and pass the right laws. The moment violence enters the picture, those legitimate tools get discarded - and the people doing the most important work on AI governance pay the price.

Why This Week Also Revealed Something Real About Public Sentiment

Condemning the incidents and understanding what produced them are not the same thing - and the hosts were careful not to conflate them.

According to Michael, more than 50% of people globally are now reporting anxiety about AI's effects on their lives. Job displacement, economic uncertainty, a sense that decisions with enormous consequences are being made without public input or meaningful oversight. That fear is real. It has been building for years. And dismissing it as doomerism or Luddite sentiment, as some in the tech industry have done, only makes the underlying pressure worse.

The hosts' argument: the AI safety movement's job is to give that fear somewhere constructive to go. Toward policy. Toward legislation. Toward the kind of sustained public pressure that actually changes how decisions get made. The incidents this week are a signal that the movement has work to do on that front - not a justification for anything that happened.

Sam Altman's 13-Page Letter - Thoughtful in Places, Contradictory at Its Core

Within 48 hours of the incidents at his home, Altman published a lengthy document outlining his thinking on how AI will reshape the economy and what society might need to do to manage the disruption.

The hosts gave him some grace on the timing. When something like this happens to you personally, wanting to write down your thoughts is understandable.

On the substance, Michael described the letter as containing genuinely thoughtful elements - proposals around public wealth funds, adaptive safety nets, and acknowledgment of real risks including job displacement, power concentration, misalignment, and loss of control. These are not trivial admissions from the CEO of the world's most prominent AI company.

But the tension the hosts keep returning to is not new, and the letter did not resolve it. The people building these systems are increasingly saying in public that the risks are serious - and continuing to accelerate development anyway. As John put it: every time Altman says things might not go well, the natural question is why he keeps going. That contradiction sits at the center of every OpenAI communication, and a 13-page letter does not dissolve it.

Ukraine: Robots Hold a Battlefield for the First Time in History

This story received less attention in the broader news cycle than it deserved. President Zelensky announced that for the first time in recorded history, unmanned robotic systems were used to secure a battlefield position from human combatants. The robots won.

The numbers behind this milestone are significant. Ukraine has completed over 22,000 missions with unmanned ground systems in the past three months, and robots are now handling more than half of certain operational tasks. More than 280 companies are currently building ground robots inside Ukraine, with production targets above 20,000 units this year and unit costs as low as $7,000. Locally made, affordable, and proliferating fast.

Liron's framing is worth sitting with: the science fiction scenarios that felt comfortably distant five years ago are arriving week by week, ahead of any cultural or institutional schedule built to process them.

Michael connects this to a pattern the show has tracked for months - AI capabilities that don't scale gradually but cross thresholds. Discrete leaps where the defensive landscape changes faster than any response can adapt. The gap between "impressive tool" and "system operating beyond reliable human oversight" is narrowing. This battlefield moment is one data point in that trend, not an isolated event.

A New AI Model Is Threatening Global Banking Encryption

The episode closes on a story that triggered emergency meetings at the U.S. Treasury Department and the White House: a new AI model, referred to as Mythos, that poses a credible near-term threat to the encryption infrastructure underlying global banking.

Liron's threat model is not the dramatic single-event collapse that makes for easy headlines. It is something more unsettling - a period of cascading vulnerabilities. Services going down. Systems compromised in ways that are slow to detect and hard to attribute. Essential infrastructure running unreliably for extended periods. Many serious researchers now place a scenario of this kind within a 12-month window. As of this episode, no response from Washington matches the speed of the problem.

Michael's framing is direct: this should not be treated as an isolated banking sector story. It is a warning signal about the gap between what AI systems can now do and what the institutions built to oversee them are capable of managing.

John closes with a strategic question he wants the AI safety community to take seriously before the next crisis, not after it: when a major AI-caused disruption hits public life - and the evidence suggests it will - two answers will be on the table. One will be: use more AI to fix what AI broke, and keep accelerating. The other will be: slow down, pause capabilities development, and use the moment to course correct.

His argument is that the movement needs to be building the case for the second answer now. The pivot point, when it comes, will move fast.

The Path Forward Is Democratic, Legal, and Peaceful

Warning Shots exists to have honest, rigorous conversations about where this technology is heading. This week required covering a story that none of the hosts wanted to be covering. They did it anyway - clearly, on the record, and without ambiguity about where they stand.

If the stakes in this episode resonate with you, the most meaningful action you can take right now is here:

https://safe.ai/act

Watch the full episode on The AI Risk Network: https://www.youtube.com/@theairisknetwork