John Sherman and experts discuss Jensen Huang’s dismissive AI stance, the arrival of AGI-level coding tools, and how community action is blocking unsafe AI growth
According to host John Sherman, the outlook for 2026 is clear: laws and treaties alone will not be enough to mitigate the catastrophic risks of artificial intelligence. The primary "winning state" for humanity is one where unsafe AI becomes bad for business, driven by a public that refuses to accept unmanaged risk.
During the discussion, the guests analyzed recent comments from Jensen Huang, CEO of Nvidia. The panel noted that Huang appears dismissive of core AI risk ideas, framing AI merely as the "next step in computing" rather than a potential existential threat. Lon Shapiro argued that experts focused solely on hardware may lack a grasp of what he called "intellynamics": the dynamics of what happens once intelligence surpasses human levels.
Interestingly, the discussion highlighted a shift in tone from the Nvidia CEO. In earlier comments, Huang emphasized the necessity of keeping a "human in the loop" and cautioned against AI systems that can self-learn and change in the wild. Today, the guests warn, that caution has been sidelined by the $3 trillion market-cap race.
While industry leaders claim AGI is far away, some influential researchers argue that tools like Claude Code represent a near-human level of general intelligence. Michael of Lethal Intelligence noted that these tools feel like "magic" to those using them, creating a massive public awareness gap between AI insiders and the general population.
One of the most promising "warning shots" discussed was the rise of community opposition to data centers. In 2025, nearly $100 billion in data center projects were blocked by local communities. The guests suggested that this proves everyday people have more agency than they realize. By using local rules, petitions, and town hall meetings, citizens are successfully throwing "sand in the gears" of unsafe AI expansion.
The guests argue that if the public remains oblivious, the "game is over." If parents and citizens lead the charge, however, we can still impose sanity on corporate agendas.
The AI Risk Network team