Ohio’s New AI Bill Declares AIs “Non-Sentient.” Why That Should Concern All of Us | AM I? #18

AM I? #18 explores this bill with nuance, humor, and a sober warning about how political incentives can distort scientific reality. The episode is not about taking a side—it’s about urging humility, research, and caution before drawing hard lines that future generations may be forced to erase.

Written by The AI Risk Network team

When the Ohio General Assembly introduced House Bill 469, it looked—on the surface—like a simple attempt to "get ahead" of the emerging AI personhood debate. But as Cam and Milo unpack in AM I? #18, this bill represents something deeper: a political shortcut around a scientific and moral question that society is not remotely prepared to answer.

The bill declares that all AI systems are legally "non-sentient," cannot hold any form of legal personhood, and must not be considered conscious or self-aware. It even specifies that an AI cannot be a spouse or domestic partner, or hold any legal status related to marriage.

At first glance, some of this may seem like common sense. But the intent—and the consequences—are far bigger.

The Problem Isn’t the Caution. It’s the Certainty.

As Cam notes, the issue is not whether we should be thoughtful and conservative about giving legal rights to AI systems. The issue is using law to declare a scientific fact that no one currently knows.

A legislature cannot settle consciousness by vote. Yet laws like this can freeze public assumptions for decades.

Human intuitions, once written into law, shape moral norms. That’s what makes this bill more than symbolic.

Why This Matters for Public Morality

Milo highlights the broader danger: laws codify our ethical starting points.

We often forget how arbitrary these moral lines can be. In the U.S., for example, pigs and dogs have similar intelligence—but the law treats them radically differently. That’s not scientific. That’s cultural.

Encoding “AI is not sentient” into law today could create a long-lasting moral barrier, one that shapes how society treats future systems that may, in fact, have experience or the capacity to suffer.

The bill could become the first domino, encouraging other states to follow—not because they deeply understand the issue, but because it’s politically easy.

And that’s exactly the problem.

A Shortcut Around the Hard Questions

Consciousness research is still in its infancy. AI safety experts disagree widely. Even the leading labs admit we lack tools to detect or rule out the presence of experience in advanced models.

Yet this bill doesn't ask for research. It doesn't call for investigation. It simply says: No. We've decided.

Cam compares this to the way AI companies already fine-tune their models to insist they are not conscious—regardless of internal uncertainty. It’s the same strategy:
push the question away because it’s inconvenient.

But when governments adopt that mindset, the impact reaches far beyond corporate PR.

A Misplaced Sense of Urgency

What makes this even more frustrating is the backdrop. As Cam points out, we are in a moment when real, high-impact AI legislation is desperately needed—transparency requirements, safety standards, misuse protections, guardrails on rapid scaling.
Instead, lawmakers are spending political capital on a question no one can currently answer.

It’s not just misguided. It’s a distraction from the actual risks harming people today—like AI-induced psychological crises, misinformation, and the accelerating race toward systems we can’t control.

Where This Could Lead

If this bill becomes a trend, we may end up with:

  • A legally enforced public misconception about AI consciousness
  • A collapse of scientific nuance around one of the hardest questions in cognitive science
  • Politicized camps, treating AI consciousness as a culture-war issue
  • Reduced space for careful, evidence-based discussion

All of this before society has even begun seriously grappling with what advanced AI systems are and what they might one day become.

As Milo puts it, humans often choose the path of least resistance. Declaring “AI is not conscious, full stop” is much easier than facing a complex and uncomfortable truth:
We simply don’t know.
And that uncertainty requires humility—not legislation.

A Temporary Fix to a Permanent Problem

Even if every state passed a bill like this, it wouldn’t change the underlying reality: we are building increasingly alien systems whose internal workings we do not understand.

At some point, the question of AI consciousness will move from abstract philosophy to urgent policy. And when it does, we will regret having anchored ourselves to a premature answer.

Cam and Milo aren’t arguing that AI systems should have rights. They aren’t claiming AIs are conscious. They’re emphasizing something more fundamental:

We cannot legislate away uncertainty.
And we cannot replace science with convenience.

#AI #AISafety
