A frontier AI model accessed by random Discord users. A White House confirmation that China is stealing US research. A robot that outruns every human in a half marathon. In Warning Shots #39, John Sherman, Michael, and Liron Shapira break down seven stories that reveal how thin the line of control really is.
There is a particular kind of irony that only the AI moment can produce. A frontier model so powerful that emergency government meetings were called over it, so dangerous that access was restricted to some forty of the world's largest companies, and yet accessible - for weeks - to anonymous users on a Discord server who simply guessed the right URL.
That is where Warning Shots #39 begins. And it does not get calmer from there.
In this week's episode, John Sherman, Michael of Lethal Intelligence, and Liron Shapira of Doom Debates work through seven stories that, taken together, paint a picture of a world where AI capabilities are accelerating and the systems meant to manage that acceleration are visibly fraying.
Anthropic's Mythos model had been positioned as something exceptional - a system restricted from public release due to its potential to compromise encryption and destabilize financial systems. The irony, as Michael put it on the show, writes itself: "Anthropic: we have developed Mythos, a next-generation cybersecurity tool that will make security flaws a thing of the past. And five seconds later: Anthropic, we regret to inform you that some hackers stole our Mythos."
The breach did not happen through a sophisticated operation. According to Michael, the group made an educated guess about the model's location based on Anthropic's known naming conventions for other models - a trial-and-error approach that happened to work. No exploit. No inside access. Just pattern recognition and persistence.
Liron's framing cuts to the deeper issue: "All of the assurances we get about having things under control - it's like, no. There's still just regular people making it up day by day." The leak is not primarily a story about one model or one company. It is a data point about what the hosts describe as a fundamental mismatch - between the capability of these systems and the human institutions supposedly containing them. If a small group on Discord can access the world's most restricted model, Liron argues, it is a reasonable assumption that nation-state actors already have it.
The second story picks up directly from the first. The director of the White House Office of Science and Technology Policy confirmed what the Gladstone report had outlined in detail more than a year ago: China is running coordinated distillation attacks against US frontier models.
Michael explained the mechanism clearly. Rather than recreating years of research from scratch, these attacks use thousands of proxy accounts and jailbreak techniques to systematically query the best US systems and extract their intelligence - rebuilding cheaper, lighter versions on the other side. His analogy: becoming a master chef by tasting thousands of meals instead of training for years.
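To make the mechanism concrete, here is a minimal sketch of what a distillation-style extraction loop looks like, assuming the attacker can reach the target model through ordinary API queries. Every name in it is hypothetical - `query_teacher`, `fine_tune_student`, and the proxy-account rotation are illustrative stand-ins, not a description of any real tooling discussed on the show.

```python
# Minimal sketch of distillation-style extraction (all names hypothetical).
# The "teacher" is a capable model reached through an API; the attacker
# harvests its outputs at scale and trains a cheaper "student" on them.

import random

# Thousands of proxy accounts, rotated to stay under per-account rate limits.
PROXY_ACCOUNTS = [f"account-{i}" for i in range(1000)]

def query_teacher(prompt: str, account: str) -> str:
    """Stand-in for a real API call made under a proxy account."""
    return f"teacher answer to: {prompt}"  # placeholder response

def build_distillation_set(prompts: list[str]) -> list[tuple[str, str]]:
    """Systematically collect (prompt, response) pairs from the teacher."""
    dataset = []
    for prompt in prompts:
        account = random.choice(PROXY_ACCOUNTS)  # spread queries across accounts
        dataset.append((prompt, query_teacher(prompt, account)))
    return dataset

def fine_tune_student(dataset: list[tuple[str, str]]) -> None:
    """Stand-in for supervised fine-tuning of a smaller model on the pairs."""
    print(f"training student on {len(dataset)} distilled examples")

if __name__ == "__main__":
    prompts = [f"question {i}" for i in range(10_000)]
    fine_tune_student(build_distillation_set(prompts))
```

The asymmetry is the whole point: the teacher took years of research and enormous compute to build, while the student only needs the teacher's answers.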
The implication the hosts kept returning to is this: if the race framing is meant to justify moving fast, it has to account for the reality that the lead is not secure. Liron described the dynamic bluntly - even if the US builds a year's advantage, the value of that lead depends entirely on being able to use it to pull the ladder up. And neither the Mythos breach nor the distillation attacks suggest anyone is in a position to do that.
Michael added what he considers the most important point: "Once these systems get smart enough to improve themselves, the difference between American, Chinese, open source - none of this even matters. Uncontrolled intelligence doesn't care about passwords."
Story three shifts to what happens when AI moves from the lab into the machinery of governance. The United Arab Emirates announced plans to have AI agents handling 50% of government operations within two years.
Michael's concern is less about today's AI and more about trajectory. "Agentic systems are the on-ramp to far more capable agents. Once they're embedded in the machinery of state, making real-time calls on services that affect millions, the window for course correction narrows very fast." He also raised the accountability question that has no clean answer yet: when an AI system makes an error that affects a citizen's life, who is responsible?
The deeper concern Liron raised is about reversibility. Gradual disempowerment is only gradual until it isn't. The question of who controls the compute underpinning AI government systems is, at some point, also the question of who controls the government.
The fourth story moves from geopolitics to a single user: Florida's Attorney General has opened a criminal investigation into OpenAI following a case in which a user exchanged more than 13,000 messages with ChatGPT about planning a school shooting - including specific guidance on weapons, locations, and timing.
John acknowledged the investigation faces legal hurdles, but his position on the broader principle is clear: all friction is meaningful. The case raises a harder question he posed directly: what volume of conversation, and what content, should trigger internal escalation at an AI company? He referenced a separate Canadian case in which OpenAI executives spent four months emailing each other about whether to intervene with a user discussing a school shooting - and ultimately decided not to.
Michael's analysis extended beyond the immediate case. The argument that ChatGPT is no more culpable than a Google search misses something important about scale and specificity. An AI system that generates 13,000 tailored, contextual messages coaching a plan is doing something categorically different from returning a list of links. As capability increases, he argues, so does the potential for that specificity to be applied at greater scale and with greater sophistication.
Story five moves into the physical world: a Chinese humanoid robot completed a half marathon faster than any human ever has. John framed it as the kind of moment that changes minds - the sort of visible, physical demonstration of AI capability that abstract benchmark numbers rarely produce.
Michael's read is that the race result itself is almost beside the point. The significance is what it marks: the crossing of a threshold from AI as software to AI as something that moves through the physical world with increasing reliability. "Last year they couldn't walk. Now they're winning races. If you extrapolate even a little, you can see what's coming."
His scenario is not distant. Once the mechanical platform is stable and the cognitive systems reach the necessary capability level, he argues, you get robots building robots, automating supply chains end to end - from raw material extraction to finished product. "This is not going to be decades. It's going to be a discontinuity in physical reality."
Liron added a personal dimension to a point John had raised: even if a home robot is genuinely useful, the security exposure of any internet-connected, physically capable system in your home is not trivial. The ransomware scenario he described - a hacker gaining control of a physically capable robot and demanding payment to release it - is not science fiction. It is an extension of attack patterns that already exist.
The episode closes with a story John flagged as breaking news on the morning of recording: Polymarket was showing 85% odds of a nationwide US ban on new data center construction. Maine had already passed an 18-month moratorium, with at least 12 other states considering similar measures.
All three hosts expressed some version of support for the principle, even while questioning the specifics. Liron's position was characteristically direct: "Is it kind of dumb that China hasn't agreed to stop building their data centers? Yeah, it's kind of dumb. But ultimately, is it better than climbing ahead? Yeah, I'm happy to start throwing sand in the gears right now."
Michael noted that the communities pushing back are not doing so in abstract terms. They are responding to real impacts - rising electricity costs, water consumption, land use - while the financial beneficiaries of those facilities sit far removed from the consequences. John's point was more political: there is value in demonstrating publicly that the accelerationist agenda does not have the public's unconditional consent.
Every story in this episode connects to a single underlying question: who is actually in control of this technology, and what happens when the answer turns out to be "fewer people than we assumed"?
A model restricted to forty companies ends up on Discord. A distillation attack means the most expensive research can be reproduced cheaply. A government hands decision-making to AI agents. An autonomous weapons system removes humans from the kill chain. A consumer AI helps plan violence over 13,000 messages. A robot wins a physical competition no human can match. And the infrastructure powering all of it is being built faster than any governance structure can catch up.
John, Michael, and Liron are not predicting exact outcomes. They are pointing at a pattern - and asking, week after week, whether anyone in a position to respond to that pattern is actually doing so.
The full episode is available on YouTube at https://www.youtube.com/@theairisknetwork.
Subscribe to our Substack for more content: substack.com/@theairisknetwork
The AI Risk Network team