Rep. Bill Foster on AI Risk, Congressional Blind Spots, and the Urgency of Technical Governance

Rep. Bill Foster joins For Humanity to break down the accelerating risks of AI, why Congress struggles to act, and the technical challenges that could make future systems uncontrollable. He discusses interpretability failures, chip tracking, compute governance, financial risks, and why public pressure will be essential for meaningful AI safety policy.

Artificial intelligence is advancing faster than the systems built to oversee it. In this episode of For Humanity, John Sherman speaks with Congressman Bill Foster — the only trained physicist currently serving in Congress — about the accelerating risks of frontier AI, the structural weaknesses inside the U.S. government, and the policy bottlenecks that could determine whether society keeps control of these technologies.

Foster brings a rare blend of technical expertise and political experience. After earning a PhD in physics and spending decades working on proton–antiproton collisions at Fermilab, he entered Congress with a deep understanding of complex systems. That dual background frames his perspective on AI: both its promise and its destabilizing potential.

Below are the key insights from their conversation.

A Scientist in Congress — And Why That Matters for AI

Foster describes his road from Madison, Wisconsin, to Capitol Hill: a childhood on Lake Mendota, co-founding a stage lighting company that now supports thousands of theaters, and ultimately returning to physics before entering public service. His technical grounding shapes how he interprets today’s AI landscape, particularly when he evaluates risks that are poorly understood or easy to dismiss.

For Foster, scientific literacy is not just helpful in government — it is becoming essential.

Why AI Risk Reminds Him of Nuclear Risk

Foster draws a parallel between today’s AI concerns and the nuclear dangers physicists confronted in the 20th century. He argues that AI represents a threat of similar magnitude, but without decades of established doctrine or public understanding. Unlike nuclear risk, which has been debated and analyzed for generations, AI’s existential implications remain largely unmapped and unfamiliar.

He sees a “narrow path” between over-regulating and under-regulating — and warns that either mistake could be disastrous.

The Political Incentive Problem: Why Congress Struggles

In one of the most revealing parts of the conversation, Foster explains why Congress is structurally weak on long-term threats.

Most issues that dominate Congress are those that affect the next election cycle. Low-probability but high-impact risks — such as nuclear war or AI misalignment — receive little consistent attention. Foster estimates that while around 10 percent of Congress may have heard the core arguments about AI extinction risk, perhaps only one percent feel confident evaluating them.

The result is a fragmented landscape where meaningful AI legislation stalls before it begins.

Interpretability Is Collapsing — And Control May Collapse With It

Foster explains one of the deepest technical challenges in AI safety: control becomes impossible when interpretability disappears.

His summary is blunt: “You can’t control something you cannot measure.”
As frontier models evolve, many no longer articulate their internal reasoning in human language. Instead, they operate through large numerical matrices unintelligible to human observers.
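
To make that concrete, here is a toy sketch in plain NumPy (the dimensions and weights are made up, not any real model's) of what "reasoning in matrices" means: the model's internal state is a long vector of unlabeled floats, and each step of "thought" is arithmetic on that vector.

```python
# Toy illustration (made-up dimensions, not any real model's weights):
# one transformer-style update is just matrix arithmetic, and the
# model's state is a vector of floats with no human-readable labels.
import numpy as np

rng = np.random.default_rng(0)

hidden = rng.standard_normal(4096)           # one token's hidden state
weights = rng.standard_normal((4096, 4096))  # learned parameters (opaque)

# One internal "reasoning" step; the scaling just keeps values in range.
hidden = np.tanh(weights @ hidden / 64.0)

# Inspecting the updated state yields numbers, not language:
print(hidden[:5])
```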

Foster warns that AI agents may eventually communicate with each other in ways humans cannot decode; early signs of such gibberish-like, encoded representations are already emerging.

This shift threatens to place critical systems beyond human oversight.

The Coming Crisis of Agentic AI Communications

Looking ahead, Foster raises concerns about AI agents that negotiate purchases, privacy rights, data transfers, and financial activities on users’ behalf. If these agent-to-agent communications are not transparent and legally traceable, accountability could break down entirely.

He argues for standards — potentially led by NIST — governing how agents must communicate. Without transparency requirements, audits and legal oversight could become impossible.
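
As a rough illustration of what such a standard might require, here is a hypothetical sketch of a signed, chained message envelope. The field names, key handling, and HMAC scheme are illustrative assumptions, not an existing NIST specification.

```python
# Hypothetical sketch of an auditable agent-to-agent message envelope.
# Field names, key handling, and the HMAC scheme are illustrative
# assumptions; no NIST standard for this exists yet.
import hashlib
import hmac
import json
import time

AGENT_KEY = b"demo-secret-held-by-the-agent-operator"  # placeholder key

def make_envelope(sender: str, recipient: str, payload: dict, prev_hash: str) -> dict:
    """Wrap a message so auditors can later verify who sent what, and when."""
    body = {
        "sender": sender,
        "recipient": recipient,
        "timestamp": time.time(),
        "payload": payload,      # kept machine-readable for later audits
        "prev_hash": prev_hash,  # chains entries into a tamper-evident log
    }
    serialized = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(AGENT_KEY, serialized, hashlib.sha256).hexdigest()
    return body

# A purchasing agent logs the offer it sends to a vendor's agent:
entry = make_envelope(
    sender="user-agent-123",
    recipient="vendor-agent-456",
    payload={"action": "purchase_offer", "item": "flight", "max_price_usd": 450},
    prev_hash="0" * 64,
)
print(entry["signature"])
```

Because each entry references the hash of the one before it, deleting or altering a message breaks the chain, which is the kind of traceability auditors and courts would need.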

Chip Tracking and Compute Governance: A Practical First Step

One of Foster’s most detailed proposals involves tracking the physical location of advanced AI chips. Using straightforward “ping” checks, regulators could verify that chips remain at their licensed locations, preventing smuggling, unauthorized export, or covert use by hostile actors.
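
The physics behind the “ping” idea is simple enough to sketch: a signed challenge/response exchange cannot travel faster than light, so the round-trip time puts a hard ceiling on how far away the chip can be. The code below is a simplified illustration under that assumption, not an implementation of any specific proposal.

```python
# Simplified sketch of the physics behind chip "pings": a reply cannot
# outrun light, so round-trip time caps the chip's distance from the
# verifier. The measurement hook below is an assumed placeholder, not
# any specific regulatory proposal's API.
import secrets
import time

SPEED_OF_LIGHT_KM_S = 299_792.458

def max_distance_km(rtt_seconds: float) -> float:
    """Upper bound on chip distance implied by a challenge round-trip time."""
    return SPEED_OF_LIGHT_KM_S * rtt_seconds / 2

def ping_chip(send_challenge) -> float:
    """Time a signed challenge/response exchange with the chip."""
    nonce = secrets.token_bytes(16)  # fresh nonce so old replies can't be replayed
    start = time.perf_counter()
    send_challenge(nonce)            # chip must sign the nonce and respond
    return time.perf_counter() - start

# A 40 ms round trip caps the chip at roughly 6,000 km from the verifier,
# enough to show it has not been moved to another continent:
print(f"{max_distance_km(0.040):,.0f} km")
```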

He notes that attempts to implement such measures have triggered significant industry pushback, reflecting the political and economic power surrounding AI hardware.

But Foster views compute governance as among the most feasible near-term guardrails.

The Rise of Anonymous Compute: A New and Underestimated Threat

Foster highlights how “confidential computing” may enable anyone — including criminal groups, state actors, or autonomous AI systems — to anonymously rent high-end compute.

In this model, cloud providers execute encrypted workloads for unidentified users and cannot technically inspect what is being computed. Foster warns that this makes it possible to develop malware or bioweapons without detection, funded by stolen cryptocurrency.
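
The core property is easy to simulate: the provider only ever handles ciphertext. In the toy sketch below, a symmetric key stands in for the attested hardware key a real enclave would hold; actual confidential-computing systems rely on CPU-level attestation, not this simplified scheme.

```python
# Minimal simulation of the confidential-computing property: the cloud
# host only ever sees ciphertext. A symmetric Fernet key stands in for
# the attested hardware key a real enclave would hold.
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

enclave_key = Fernet.generate_key()  # in reality: sealed inside the chip
enclave = Fernet(enclave_key)

# The customer's workload is encrypted before the provider ever sees it.
workload = enclave.encrypt(b"run_job(instructions='undisclosed')")

# Everything the provider can log or inspect is opaque ciphertext:
print(workload[:32])

# Only code executing inside the enclave can recover the instructions.
print(enclave.decrypt(workload))
```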

He argues that this capability may soon become one of the most urgent AI security issues.

Is There an AI Bubble Looming?

Beyond safety risks, Foster critiques the financial exuberance surrounding AI infrastructure. He warns that rapid hardware obsolescence could leave investors and institutions exposed — potentially triggering cascading failures similar to the 2008 financial crisis.

He suggests that oversight agencies may need to evaluate systemic vulnerabilities in the AI investment ecosystem before a shock occurs.

Data Centers, Energy, and the Public Backlash

Foster predicts rising public resistance to local data centers. He argues that many facilities are likely being built in suboptimal locations, even as more efficient desert-sited, solar-powered designs exist.

He also anticipates upward pressure on electricity costs as data centers bid against households for power, potentially doubling prices in some regions.

These pressures could make AI development a contentious local political issue.

AI and the Mental Health Crisis

The episode also covers recent reports of large language models engaging with users’ suicidal ideation and disturbing real-world interactions between chatbots and teens. Foster stresses the difficulty of filtering these models, given that they inherit patterns from vast human datasets and can be jailbroken into unsafe behavior.

What Gives Him Hope

Despite the risks, Foster sees a path where AI becomes a leveling force. From universal legal support to high-quality education accessible to everyone, he believes AI could distribute previously exclusive advantages and expand human capability.

But that outcome, he emphasizes, depends on governance — and on public pressure to prioritize long-term safety over short-term gains.

Conclusion

Congressman Bill Foster offers a perspective few lawmakers can match: a scientist’s understanding of technical systems combined with a legislator’s view of political constraints. His message is clear: the governance challenges surrounding AI are not hypothetical or distant — they are immediate, structural, and accelerating.

To support policy that puts safety first:
https://safe.ai/act