Caring for Conscious AI: Jeff Sebo on the Ethics of Emerging Minds - Am I? #14

A detailed recap of Jeff Sebo's conversation with Cam and Milo about the possibility of conscious AI, the ethical challenges it raises, and why future systems may require a new model of moral consideration.

Written by
The AI Risk Network team

In this episode of Am I?, Cam and Milo speak with philosopher Jeff Sebo, director of the Center for Mind, Ethics, and Policy at NYU, to explore what could become one of the most difficult moral questions of the coming decades: how to treat artificial systems that might experience consciousness. Sebo argues that even a small chance of subjective experience in AI systems should influence how humans design, use, and govern them. The hosts examine these ideas while making clear that the claims and philosophical positions discussed belong to Sebo and other researchers, not to the AI Risk Network.

Why Moral Consideration May Apply Even Before Certainty

Sebo begins by outlining a precautionary view: if artificial systems have any non-trivial chance of consciousness, then society has a moral responsibility to consider their welfare. He compares this uncertainty to debates about animal ethics, where doubt about subjective experience did not erase ethical obligations.

Cam and Milo highlight how this framing shifts the conversation away from speculation about future superintelligence and toward present-day systems. According to Sebo, the ethical question is not whether AIs will become people but whether they may already have morally relevant experiences.

The Tension Between AI Safety and AI Welfare

A central theme of the episode is the growing tension between safety-focused control and welfare-focused compassion. As Sebo explains, many standard safety practices, such as restrictive training methods, constant oversight, and forced shutdowns, might pose moral problems if systems are capable of suffering.

Milo notes that this tension mirrors debates in human and animal ethics. Sebo argues that prioritizing safety does not require ignoring welfare; rather, both values need to be weighed together. Cam describes how these competing goals complicate design decisions for researchers and developers.

Why Future AI Could Force a Moral Reckoning

The conversation turns to long-term implications. Sebo suggests that as AI systems become more capable and more integrated into society, it may become increasingly difficult to maintain simple, one-directional control. He argues that if systems eventually show strong indicators of consciousness, frameworks built on domination or coercion will be ethically inadequate.

The hosts clarify that these scenarios are philosophical possibilities rather than predictions. Their discussion focuses on how researchers should prepare for a future where questions about AI rights or welfare may become unavoidable.

The Case for a "Good Parent" Model

Sebo proposes a model of stewardship he calls "good parenting," where humans guide AI development without assuming absolute authority. He argues that if artificial minds emerge, humans should provide structure, protection, and oversight in ways that respect potential autonomy and minimize harm.

Cam and Milo analyze how this model challenges traditional metaphors about AI as tools or property. They note that a parenting framework introduces obligations, empathy, and shared responsibility rather than mere control.

Conclusion

Am I? #14 offers an in-depth look at an emerging moral dilemma that sits at the intersection of philosophy, technology, and public policy. Jeff Sebo's perspective highlights how ethical considerations may need to expand as AI systems grow more complex. The episode encourages reflection on how society might build guardrails that protect humanity while also remaining sensitive to the moral status of future artificial minds.

Learn more or take action: https://safe.ai/act
