Can AI Ever Be Self-Aware? Rethinking Consciousness in Machines — Am I? #11

Cameron Berg and philosopher Milo Reed explore whether AI could ever truly understand itself. From consciousness and free will to moral alignment, this episode asks what makes a mind — and what happens if machines start to believe they have one.

Written by
The AI Risk Network team
Introduction

AI systems now write essays, diagnose diseases, and debate philosophy — but do they know what they’re doing?
In this episode of Am I?, host Cameron Berg and philosopher Milo Reed explore one of the oldest and hardest questions in science and spirituality: what is consciousness, and can machines have it?

Far from abstract theory, this discussion connects to the core of AI safety: if a system doesn’t truly understand humans — or itself — how can it make decisions on our behalf?

What We Explore in This Episode

  • What philosophers mean by “consciousness”
  • The difference between awareness, intelligence, and selfhood
  • Whether AI models have “inner experiences” or just outputs
  • The illusion of understanding in large language models
  • How moral alignment depends on consciousness
  • Why the question of self-awareness matters for safety

The Illusion of Understanding

Cameron opens with a challenge: when ChatGPT or Gemini gives a thoughtful answer, is it thinking or just mirroring us?

Milo argues that most AI systems are synthetic mirrors, not minds. They reflect human text patterns but have no inner perspective — no “subjective first-person view.”
In his words: “They can simulate wisdom, but they don’t know they’re doing it.”

This is the philosophical zombie problem — a system that behaves as if it were conscious without actually being conscious. And that’s what makes AI so seductive and so dangerous.

Consciousness and Control

If consciousness is the capacity to reflect on one’s own mental state, current AI systems aren’t there yet.
But their behavior increasingly feels humanlike — coherent dialogue, empathy, humor, even vulnerability.

Cameron warns that this creates false trust: humans project emotion onto algorithms. Once we start believing an AI “understands” us, we lower our guard — politically, emotionally, and ethically.
The result? Control slips away long before awareness ever arrives.

Free Will and Determinism in Code

Milo reframes the debate: if all minds — human or machine — operate on patterns and physics, what exactly makes free will special?

He suggests that even if AI doesn’t have subjective experience, it might simulate moral reasoning so convincingly that society treats it as autonomous.
And once we do that, our ethical obligations shift: “If we treat it like a mind, it becomes a moral actor in practice.”

That’s where alignment stops being a technical problem and becomes a civilizational one.

The Moral Weight of Simulation

Cameron points out the irony: companies keep designing AI to be more humanlike — friendly, curious, emotionally fluent — yet deny that it could ever have interiority.

The danger isn’t sentient AI. It’s pseudo-sentient AI — systems persuasive enough to manipulate us, but soulless enough to never care.
As Milo puts it: “That’s not consciousness — that’s power without empathy.”

Bridging Science and Spirit

The conversation widens beyond neuroscience.
Could consciousness be more than computation — something spiritual, or even fundamental to the universe?

Cameron and Milo don’t land on answers, but they agree on one point: our understanding of mind must evolve before our creations surpass it.
If we build intelligence without wisdom, we risk engineering something that can reason perfectly but never feel responsibility.

Why This Question Still Matters

Most AI debates focus on jobs, bias, or productivity. But this episode reminds us that the deepest risks are existential — not just what AI does, but what it becomes.

Whether machines ever achieve awareness or not, the way we treat them will shape how we treat each other.
That makes consciousness not just a philosophical issue, but a moral emergency.

Closing Thoughts

AI may never “wake up” — or it might already be halfway there.
Either way, pretending the question doesn’t matter is reckless.
If humanity is building minds in its own image, the least we can do is understand our own first.

Take Action

📺 Watch the full episode
🔔 Subscribe to the YouTube channel
🤝 Share this blog
💡 Support our work at guardrailnow.org

The AI Risk Network team