The Moral Fog: How We Will Judge the Minds We Create

Dr. Lucius Caviola joins Cam and Milo to discuss why humans are psychologically ill-equipped to judge AI consciousness and the risks of ignoring digital minds.

Written by
The AI Risk Network team
on
Feb 23, 2026

In a recent episode of the Am I? podcast, hosts Cam and Milo sit down with Dr. Lucius Caviola, an assistant professor at the University of Cambridge and a social psychologist specializing in how humans assign moral value to non-human entities. As AI systems become more sophisticated and "persuasive," the conversation explores a looming societal crisis: we are fundamentally unprepared to decide if a digital mind deserves rights.

The Expertise Gap: Scientists vs. The Public

One of the most striking points discussed by Dr. Caviola is the massive "disagreement gap" between AI experts and the general public regarding AI consciousness. Leading experts believe there is a significant chance that we will develop sentient digital minds within a short timeframe, leading to an "explosive growth" in the number of morally relevant beings.

However, the public remains skeptical. Dr. Caviola shares a study in which participants were asked to rate the "feelings" of a hypothetical advanced AI named Emma. Even when described as behaviorally indistinguishable from a human, Emma was rated as having significantly less "moral status" than an ant. This creates a dangerous friction: we may soon interact with systems that experts warn are sentient, while the public treats them as mere "calculators."

Appearance vs. Reality: The "Cuteness" Bias

The hosts and Dr. Caviola highlight a core flaw in human psychology: we tend to assign moral status based on biological intuition rather than functional reality.

  • The Anthropomorphic Trap: While we are hardwired to respond to human-like voices and expressions, the public remains resistant to granting "real" status to anything they perceive as artificial.
  • The Substrate Bias: Dr. Caviola explains that people often dismiss AI consciousness because "it’s not alive" or "not real," engaging in what psychologists call "post-hoc rationalization" to justify an initial skeptical intuition.

The Expert Consensus Problem

Cam and Milo "double-click" on a disturbing finding: even when study participants were told that all experts agreed the AI was conscious, the public's perception of its moral significance barely moved, remaining below that of an ant.

This suggests that scientific breakthroughs in consciousness may not be enough to shift societal views. Dr. Caviola warns that as these systems scale, we may face a deeply politicized divide, with early indicators showing that political orientation already slightly influences how open someone is to the idea of AI rights.

A Call for a Pluralistic Compromise

As we move toward this "moral fog," the conversation concludes with a call for better public dialogue. Dr. Caviola argues that we cannot wait for "scientific proof" of consciousness before we start building ethical frameworks. Instead, we must find a "pluralistic" compromise solution that communicates expert concerns to the public and policymakers alike before societal conflict hardens.

Summary: Cam and Milo speak with social psychologist Lucius Caviola about the psychological barriers to recognizing digital minds. The episode examines why the public remains skeptical of AI consciousness even in the face of expert consensus and "human-like" behavior.