John Sherman and the Warning Shots panel discuss the urgent need for an AI data center moratorium. The episode explores the risks of massive energy scaling, the lack of democratic consent in AI development, and Meta's recent pivot away from open-source AI.
In the latest episode of Warning Shots, host John Sherman is joined by Michael of Lethal Intelligence and Liron of Doom Debates to discuss a pivotal moment in the AI safety movement: the call for a moratorium on data center construction. As AI development accelerates, the panel explores whether pausing the physical infrastructure of AI is the most pragmatic lever left for human safety.
The discussion begins with Senator Bernie Sanders’ recent call for a moratorium on data center construction. While the Senator’s primary focus is often on the economic impact and potential unemployment caused by AI, the Warning Shots panel suggests a deeper, more urgent risk.
According to the discussion, the goal of a moratorium is not just about jobs—it is about creating a necessary delay. John Sherman argues that "delay" is one of the biggest goals for AI safety, providing more time for critical safety research and for democratic institutions to catch up to the technology’s speed. This echoes the framing in Sherman’s own work, emphasizing that AI is moving faster than Congress and that citizens must lead the call for regulation.
Michael provides a technical perspective on why data center scaling is so concerning. According to the discussion, frontier AI models currently run on approximately 1 to 1.5 gigawatts (GW) of power, roughly the electricity consumption of a country like Belgium. Industry plans, however, are pushing to reach 50 GW within the next five years.
Experts warn that this 30- to 50-fold increase in energy scaling could represent a catastrophic jump in "raw intelligence".
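The "30- to 50-fold" figure follows directly from the power numbers cited above. A quick arithmetic sketch (using only the GW figures reported in the episode):

```python
# Sanity check on the scaling multiples cited in the episode:
# frontier AI reportedly draws 1 to 1.5 GW today, against a 50 GW
# industry target within five years.
current_gw = (1.0, 1.5)   # reported range of today's power draw
target_gw = 50.0          # reported five-year target

# Dividing the target by each end of the current range gives the
# fold-increase bounds.
high = target_gw / current_gw[0]  # 50 / 1.0  = 50x
low = target_gw / current_gw[1]   # 50 / 1.5 ≈ 33x

print(f"Scaling factor: roughly {low:.0f}x to {high:.0f}x")
# prints "Scaling factor: roughly 33x to 50x"
```

Both bounds fall within the "30- to 50-fold" range the panel describes.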
A core theme of the episode is the lack of public consent in the development of these systems. The panel estimates that only about 10,000 to 20,000 people are directly working on the "frontier" of AI. According to the discussion, these individuals—and the "six unelected profiteers" leading the major labs—are making decisions that affect the lives of 8 billion people without their consent.
The guests argue that a moratorium on data centers is a "pragmatic lever" because it meets people where they are. While abstract extinction risks might not draw a crowd, the construction of a massive, resource-heavy data center in a local neighborhood triggers immediate public engagement and democratic oversight.
The episode also tackles Meta’s recent shift away from its "open source" crusade. Liron suggests that Meta's original commitment to open source was less about principle and more about a business strategy known as "commoditizing your complement". By making AI technology itself a commodity, Meta sought to erode the profits of competitors whose business depends on selling it. However, as the costs of staying at the frontier escalate toward a trillion dollars, Meta is now pivoting toward proprietary models to monetize its massive investments.
Whether through international treaties or local zoning debates, the panel concludes that triggering a national discussion on an AI moratorium is the number one point of leverage available today. The risks of harm are no longer theoretical—they are real, and the window for democratic intervention is closing.
The AI Risk Network team