New York Assemblymember Alex Bores joins John Sherman to discuss the RAISE Act — one of America’s first state-level efforts to regulate frontier AI systems. From existential risk to political momentum, this episode explores how local action could spark national change.
AI development is accelerating at breakneck speed — but regulation isn’t keeping pace.
As frontier models grow more powerful, the gap between innovation and accountability widens. The question isn’t just what AI can do anymore, but what we should allow it to do.
In this episode of For Humanity, John Sherman speaks with New York Assemblymember Alex Bores, sponsor of the RAISE Act, one of the first U.S. state-level bills designed to bring oversight to advanced AI systems. Together, they explore how a single state can lead where Congress has stalled — and what it will take to align AI progress with public safety.
The Responsible AI Safety and Education (RAISE) Act is New York’s attempt to fill a dangerous policy vacuum.
It would require transparency, safety plans, and human oversight for companies deploying powerful AI systems — particularly those with catastrophic potential.
Bores describes it as a “seatbelt law for intelligence.” The goal isn’t to stop progress but to make sure the road ahead isn’t suicidal.
The Act establishes reporting standards, mandates risk assessments, and creates enforcement mechanisms through existing state agencies. It’s a pragmatic start — a way to move the ball forward even while Washington debates the basics.
Unlike most hot-button issues, AI safety doesn’t break neatly along party lines.
Both Democrats and Republicans sense the stakes — automation, job loss, disinformation, and the looming question of control.
Bores notes growing curiosity among state legislators: If AI poses even a 10% chance of extinction, why wait for federal permission to act?
That pragmatism could make states the laboratories of survival, not just democracy.
The conversation doesn’t stay local for long. John Sherman connects the dots between AI’s short-term harms — deepfakes, job displacement, election interference — and its long-term existential risks.
Even AI insiders like Sam Altman and Geoffrey Hinton have warned of possible extinction.
Bores acknowledges the tension: lawmakers are juggling potholes and zoning issues while trying to grasp superintelligence timelines. Yet, as Sherman points out, “we don’t need every detail to act responsibly — we just need urgency and common sense.”
If the RAISE Act passes in New York, it could set a national precedent.
Much as California’s auto emissions standards shaped federal policy, early state laws on AI safety could force industry-wide adaptation.
Bores argues that bottom-up governance is often how America gets big things done — from environmental laws to consumer protections. “If Washington’s frozen, states can still move,” he says.
The episode also tackles AI’s near-term disruptions to work and the economy.
Bores calls for updating the “social contract” — ensuring that as automation replaces labor, new forms of economic stability, education, and civic engagement take its place.
Sherman adds: “You can’t let machines rewrite the rules of civilization before humans agree on the terms.”
Host and guest agree: AI safety isn’t just for experts or engineers.
Citizens, parents, and voters must pressure lawmakers — because the window for meaningful action is closing fast.
From town halls to social media, the message is simple: talk about AI like it matters, because it does.
AI governance won’t emerge from corporate promises; it will come from public demand.
The RAISE Act may be just one bill in one state — but it signals a cultural shift: people refusing to be bystanders in an intelligence revolution.
This episode is a reminder that the fight for AI safety starts locally, and the future of humanity may hinge on whether leaders act in time.
📺 Watch the full episode
🔔 Subscribe to the YouTube channel
🤝 Share this blog
💡 Support our work at guardrailnow.org
The AI Risk Network team