Liv Boeree joins John to explore the current moment in AI safety, public misunderstanding of extinction risk, the importance of mothers in the movement, the economy as a misaligned superintelligence, and what effective leadership on AI could look like.
In episode 74 of For Humanity, John sits down with Liv Boeree, the former professional poker champion turned AI safety advocate, for one of the most wide-ranging and grounded conversations the series has had in months.
They dive deep into where AI safety stands right now, why the public remains skeptical of extinction risk, how mothers might become the most important force in the fight for guardrails, and why the economy itself may be our first example of a misaligned superintelligence.
Liv has been thinking about AI risk for nearly a decade, and her reflection is blunt: we are in a moment that feels simultaneously exhilarating, terrifying, frustrating, and sometimes boring. The public finally feels the impact of AI in their daily lives, and that awareness will only grow.
Some risks have not materialized as quickly as early advocates feared, but others have accelerated faster than anyone predicted.
Liv's reaction to this moment swings between optimism and alarm: some days she feels like AI will solve everything, other days like the most irresponsible people are using the technology for the most irresponsible things.
John asks why extinction risk is still so hard for everyday people to engage with. Liv explains that the barrier often comes from a particular type of educated hubris.
Many everyday people intuitively get it. If you tell them something much smarter than you will be hard to control, they nod immediately.
But among experts and tech optimists, the idea that humans may not remain at the top of the pyramid is almost taboo. Some insist superintelligent systems must be conscious to pose danger. Liv pushes back hard: consciousness is irrelevant. Social media algorithms already shape behavior without any sentience at all.
A recurring question in the safety community is whether to campaign on immediate harms like deepfakes, teen mental health, and job loss, or to focus exclusively on extinction risk. Liv argues clearly:
Ignoring real, near-term harms weakens the movement.
These harms give people a real-world entry point into the idea of misalignment. They show that technology can diverge from human flourishing long before superintelligence arrives. And they open the door to explaining why losing control at higher levels of capability could be catastrophic.
In one of the most striking parts of the conversation, Liv compares the modern global economy to a proto-superintelligence.
We have already crossed seven of nine planetary boundaries. The economy is powerful, decentralized, and not aligned with human wellbeing. Solving this deep misalignment, Liv argues, may put us halfway toward solving the alignment problem for AI itself.
For Humanity has released 74 episodes, and John notes that the thumbnails look overwhelmingly male. Liv agrees it is a problem, but not because the field needs a perfect 50/50 ratio. Instead, she highlights the unique strengths women bring:
Women see risks in human systems that men often miss. And on the public side, mothers in particular have unmatched instinct and urgency when something threatens their children.
Liv gives a simple strategy: meet them where they already feel pain. Ask about the technology already in their children's lives; most mothers are uneasy about it. From there, explain that some companies are intentionally designing more addictive and manipulative AI systems, including some aimed at children. The mama bear instinct does the rest.
John asks the spicy question: what if all the men leading AI labs were replaced with women?
Liv gives a nuanced response:
The core problem is not masculinity or femininity. It is a competitive structure that rewards power-seeking behavior over caution.
Liv spent years as an elite poker professional, and that background shapes her approach to AI in surprising ways. Poker players learn to read incentives, detect bluffs, and notice what opponents reveal unintentionally.
These skills map directly onto AI governance. When tech leaders speak publicly, Liv often uses her poker mindset to ask: Is this person bluffing? Are they rationalizing competitive incentives? Are they revealing something unintentionally?
Liv is honest about the emotional weight of working on AI safety. She becomes more depressed the more she doomscrolls, so she avoids living in "all AI, all the time" environments.
Keeping that distance is how she stays stable, and her advice mirrors the experience of many in the field: healthy people are effective people.
Liv does not believe in simple answers, but she names several high priorities:
- US-China coordination. The US and China are the only nation states capable of reaching superintelligence soon; if they share even a sliver of mutual concern about losing control, coordination is possible.
- Accountability for AI companies, especially those producing addictive or manipulative products targeting children.
- Better market incentives. Reward companies using AI to help people, not to cheapen attention or extract value.
- Taking surveillance seriously. Liv argues surveillance dystopias are dangerously under-discussed and may be far closer than people realize.
Liv closes with something unusual for an AI safety conversation: spirituality.
In the last few years, she has begun to sense that there may be wisdom beyond human understanding, something in the universe that wants humanity to survive if we listen closely. It does not remove responsibility, but it gives her a sense of purpose and calm in the storm.
John reflects on this with appreciation. The idea that something greater might want us to make it is both grounding and hopeful.
If you want leaders to take AI safety seriously, add your voice here: https://safe.ai/act. It is the fastest and most meaningful way to push for accountability.
The AI Risk Network team