Drawing the Line: Why Ohio is Legislating Against AI Personhood | AM I #19

Ohio Representative Thaddius J. Claget discusses House Bill 469, which aims to ban AI marriages and deny AI legal personhood to ensure human liability.

Written by
The AI Risk Network team

In this episode of the AI Risk Network, host John Sherman is joined by Ohio Representative Thaddius J. Claget, Chair of the House Technology and Innovation Committee. The discussion focuses on a landmark piece of legislation, House Bill 469, which represents one of the first serious attempts in the United States to legally define the boundary between humans and artificial intelligence.

Defining the Human-AI Boundary: House Bill 469

Representative Claget, a civil engineer by trade with a background in business and law, argues that our current legal system is unprepared for the "incongruities" created by AI. According to the discussion, House Bill 469 seeks to establish a firm legal baseline before AI becomes inextricably embedded in our banking and judicial systems.

The bill includes several specific and controversial prohibitions:

  • Ban on AI Personhood: Prohibits AI from being granted the status of a person or any legal personhood.
  • Denial of Sentience: Formally declares that AI cannot be considered to possess consciousness or self-awareness.
  • Marriage Prohibitions: Explicitly bans marriages between humans and AI systems.
  • Corporate Restrictions: Prohibits AI from holding corporate roles or positions of authority.

The Problem of the "Liability Shield"

A primary concern raised by Representative Claget is the potential for AI to be used as a shield for criminal conduct. He suggests that if AI were granted even limited forms of personhood, it could allow developers or corporate actors to evade responsibility for the actions of their systems.

The representative draws a distinction between AI and corporate personhood:

  • The Corporate Precedent: Corporations are treated as "persons" so they can sue and be sued, but the law allows for "piercing the corporate veil" to hold human operators liable for wrongdoing.
  • The AI Risk: Without clear legislation, a "devious" programmer could use an AI to commit a crime, such as bank theft, and argue that the "autonomous" system is the one at fault.
  • Human Responsibility: Claget argues that liability must always rest with a human being, as only humans possess the "moral agency" recognized by Western law.

Legislating Consciousness: A Philosophical and Legal Battle

A significant portion of the interview explores the bill’s claim that AI systems do not—and cannot—possess consciousness or self-awareness. Host John Sherman and the co-hosts push back on this, questioning whether it is possible to legislate the "inner state" of a machine, especially as AI begins to fluently replicate human dialogue.

According to the representative, the legal declaration of non-sentience is a necessary "starting point." He argues that it does not matter if a machine appears to have human qualities or "animations"; the law must define it as a machine to ensure it is never treated as a moral agent capable of making choices between good and evil.

The guests discuss the "dog analogy," noting that while we recognize animals as having subjective experiences (moral patients), we do not grant them the same legal agency as humans. Representative Claget maintains that humans are unique because they are "created in the Imago Dei" (in the image of God) and possess a "breath" and moral agency that code can never replicate.

The "Y-Axis" of Computing Power

Representative Claget describes the current technological shift using an XY axis. While traditional computing moved linearly along the X-axis (zeros and ones), he argues that AI and quantum computing represent a vertical surge on the Y-axis of power.

The representative warns that as AI gains the ability to replicate human actions at scale, the law must act quickly. "Once they're embedded in our banking or legal system," he notes, "they're going to be extremely difficult to pull back out."
