"The Apocaloptimist" Director Daniel Roher on AI Risk, Sam Altman, and Why Public Action Matters | For Humanity #83

Documentary filmmaker Daniel Roher joins John Sherman to discuss making "The Apocaloptimist," his frustrating interview with Sam Altman, and why, when it comes to AI risk, disengaging is the only truly wrong response.

Written by
The AI Risk Network team
on
Apr 1, 2026

What happens when a filmmaker looks directly at AI risk - and decides to make a movie about it?

That is the question at the center of For Humanity #83, where host John Sherman sits down with Daniel Roher, Oscar-nominated documentary filmmaker and director of The Apocaloptimist - a new feature-length documentary designed as what Roher calls "a first date with AI" for the millions of people who have heard about artificial intelligence but haven't yet sat down to understand what it means for their future.

Roher didn't come into this project as a technologist. He came in as a storyteller. And what he found along the way - from sitting across from Sam Altman to reconciling the contradictory brilliance of Eliezer Yudkowsky and Peter Diamandis - left him questioning things he never expected to question.

This conversation is raw, funny, and deeply honest. Here are the key insights.

Making an AI risk film felt "like a suicide run"

According to Roher, making The Apocaloptimist was one of the most difficult creative experiences of his career. He describes it as a nearly impossible task: introduce what AI is, explain why it is exciting, explain why it is terrifying, suggest what might need to happen to mitigate the risks, and frame it all in a way that resonates with people who have never thought about this before.

Roher notes that every viewer seems to find something to praise and something to critique - but rarely the same things. He suggests this is the nature of the subject itself. There is no version of this film that makes everyone walk away satisfied, especially those who have been following AI closely. But for its intended audience - people encountering this issue for the first time - Roher says the film appears to be doing exactly what he designed it to do: making people care.

Interviewing Sam Altman produced nothing but polished talking points

One of the most compelling segments of the conversation centers on Roher's experience sitting across from OpenAI CEO Sam Altman. Roher describes what he calls an "energetic misalignment" between them - a sense that Altman was calculating every response in real time, offering nothing beyond carefully constructed lines.

Sherman adds his own perspective, suggesting that if he could ask Altman one question, it would be: what qualifies you to hold this responsibility? He predicts Altman would respond by saying no one is qualified - to which the natural follow-up becomes impossible to dodge: then why are you doing it?

Roher agrees. He describes conversations with AI leaders as fundamentally unsatisfying - not because they refuse to engage, but because there is "no interior life" to access. He suggests the real answers have to do with capitalization and control, and that the polished public-facing language is designed to obscure that.

Sherman frames it simply: the "fake earnestness" that these leaders project shields what he sees as deeper evasion.

The apocaloptimist worldview: rejecting cynicism without ignoring danger

Roher lays out his interpretation of the film's title not as a binary choice between doom and utopia, but as a call for nuance. He argues that the same technology promising to cure diseases and desalinate water also empowers catastrophic risks - and that both realities must be held at the same time.

For Roher, being an apocaloptimist means rejecting what he describes as "the very easy and convenient cynicism and nihilism of this moment." He believes public pressure and collective action matter, and that disengaging is the only truly wrong response.

He draws a parallel to the development of nuclear weapons, arguing that AI demands a similar international institutional response. Not a shutdown - but a framework that empowers the good while creating real accountability for the dangerous.

Intelligence, curiosity, and the control problem

The conversation takes a philosophical turn when Sherman presents his theory that curiosity sits at the core of all intelligence. He argues that a system far smarter than humans would never tolerate being controlled by slower, less capable beings - comparing the dynamic to humans building highways on top of ant colonies.

Roher pushes back thoughtfully. He suggests that if a system were truly that intelligent, it could just as easily become a benevolent guide - what some accelerationists call "the village elder." He acknowledges both scenarios are speculative and admits he prefers to focus on practical, near-term governance rather than long-range hypotheticals.

It is a tension that runs through the entire AI risk conversation: those who think about the future in terms of what could go wrong, and those who think about the present in terms of what can be done right now. Both perspectives show up in this episode, and neither is dismissed.

How do you live with an 80% P(doom)?

Perhaps the most human moment comes at the end. Sherman shares that his personal probability of catastrophic AI outcomes sits between 75% and 80%, on a timeline of two to five years. Roher is visibly surprised.

But when Roher asks how Sherman stays regulated - how he doesn't just collapse under the weight of it - Sherman's answer is unexpected. He says he is in the best mental state he has ever been in. By letting go of long-term expectations, he has found a kind of freedom in being fully present - appreciating the water on the Baltimore Harbor, the sun, the simple experience of being alive.

It is a moment that captures something essential about the people working in this space. They are not nihilists. They see a genuine risk. And they choose to act anyway - not because they are certain they will succeed, but because they believe the only wrong thing to do is nothing.

Where this leaves us

Roher's film is designed as a starting point - a way to bring new people into this conversation. Sherman's show, For Humanity, is designed to keep the conversation going. Together, they make the case that awareness without action is incomplete, and that the window for meaningful public engagement is still open.

The question is whether enough people will walk through it.

Watch the full episode:

Take action on AI safety: https://safe.ai/act
