The Chess Cliff, Trump's AI Power Grab, and Public Resistance

In Warning Shots #22, the hosts discuss Trump’s executive order to preempt state AI laws, the "chess cliff" of AI progress, and why public rejection of AI ads matters.

Written by The AI Risk Network team

Warning Shots #22: The Chess Cliff and the Fight for Human Agency

If it’s Sunday, it’s Warning Shots. This week, John Sherman (host of For Humanity), Michael (Lethal Intelligence), and Liron (Doom Debates) tackle five critical "warning shots" that signal a rapid shift in the AI landscape. From federal power grabs to the "chess cliff" of human obsolescence, the message is clear: the transition is happening faster than we are prepared for.

1. The Federal "Power Grab" for AI Regulation

A major point of concern this week is the Trump administration’s recent executive order aimed at "eliminating state law obstruction" of national AI policy. The order authorizes the federal government to sue states that implement their own AI regulations.

The hosts argue this is a "power grab" that hands the keys of the future to unelected Silicon Valley barons. By bulldozing state-level checks, such as New York’s deepfake ban or California’s bias audits, the federal government may be creating a "single point of control" that is far easier for tech titans to lobby. As Liron notes, states were often the "best hope" for meaningful regulation in the absence of federal action.

2. The "Chess Cliff": Why Linear Thinking Fails Us

The hosts discuss a powerful visualization of progress: the chess graph. For years, AI chess strength rose steadily while humans remained comfortably on top. It felt like "steady progress" until, all at once, the switch flipped.

In just a few years, human win rates went from near certainty to irrelevance. Experts warn that we are currently in this "linear trap" in other domains like law, medicine, and programming. We watch models get slightly better each quarter and tell ourselves we have time to adjust, but the "chess cliff" suggests that human competitiveness can vanish in the blink of an eye, as the sketch below illustrates.
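The arithmetic behind the cliff is easy to check. Under the standard Elo model, a player's expected score against an opponent is 1 / (1 + 10^((R_opponent − R_self) / 400)), so a rating gap that widens at a constant rate pushes the weaker side's chances down a logistic curve. Here is a minimal Python sketch; the flat 2800 human rating and the engine's steady 100-point-per-step climb are illustrative assumptions, not historical data.

```python
# A minimal sketch of the "chess cliff": the engine's rating climbs
# linearly, but the human's expected score collapses along a logistic
# curve. The ratings below are illustrative, not historical data.

def expected_score(own_elo: float, opponent_elo: float) -> float:
    """Standard Elo expected score (win probability plus half of draws)."""
    return 1.0 / (1.0 + 10 ** ((opponent_elo - own_elo) / 400))

HUMAN_ELO = 2800  # held flat, roughly world-champion level (assumed)

for engine_elo in range(2500, 3501, 100):  # steady +100 per step
    score = expected_score(HUMAN_ELO, engine_elo)
    print(f"engine {engine_elo}: human expected score {score:.2f}")
```

Running this, the human's expected score drifts from about 0.85, through 0.50 at parity, down to roughly 0.02 by the end. Each step of engine progress is the same size on the rating axis, yet human competitiveness falls off a cliff once the crossover point passes.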

3. A Public Groundswell: The McDonald's AI Fail

Not all AI adoption is being met with open arms. McDonald’s recently pulled a generative AI Christmas ad after massive public rejection. Despite the company stressing that a team of humans spent weeks refining the prompts, the audience found the "uncanny" result off-putting.

John Sherman highlights this as a "decisive" moment where the public rose up to say, "No, we don’t want that AI thing". This rejection suggests a potential limit to how much "simulated humanity" people are willing to accept in their culture.

4. Argentina’s "Civilizational Gamble"

In a move described as a "staggering concentration of power," Argentina recently announced plans to give Elon Musk’s AI, Grok, to all of its schoolchildren. The hosts warn of the risks of "sycophantic AI": models designed to be agreeable that might validate a child’s every idea, even dangerous ones.

Handing the education of an entire generation to a single private entity breaks the "chain of knowledge" in which humans teach humans. It also creates a risk that, if the algorithm shifts or the system is "unplugged," a generation could find itself without the tools for independent thinking.

5. Google and the "Society Problem"

Finally, the episode highlights a recent statement from Google CEO Sundar Pichai regarding job disruptions. While acknowledging that AI will cause "major disruptions," Pichai suggested that dealing with the fallout is society's problem to solve, not Google's.

The hosts frame this as "progress for the few, paid by the many". While tech giants celebrate the efficiency of automation, the "massive destruction of livelihoods" is treated as an external problem for regular people to solve on their own.

Taking Action

As we face the "video singularity" and the potential for total automation, the need for citizen-led pressure on regulators has never been greater. We must ensure that technological progress does not come at the cost of human agency and dignity.

To learn how you can support responsible AI regulation, visit https://safe.ai/act.
