How to talk about AI extinction risk when the public isn't listening - new research challenges assumptions about messaging strategy in the AI safety movement.
The AI safety community faces an uncomfortable truth: the most important message about AI extinction risk may be the least effective way to reach people.
This insight comes from Philip Trippenbach, strategy director at Seismic Foundation, an organization of veteran advertising, marketing, and communications professionals who have turned their expertise toward one of the most critical problems humanity faces: getting the public to actually care about AI risk.
In this episode of For Humanity, Trippenbach discusses research that directly challenges how the movement has been communicating about superintelligence threats. The findings suggest the AI safety community has been operating under assumptions that don't match what public polling reveals.
The Extinction Risk Paradox
Here's the uncomfortable fact: extinction risk comes in dead last when researchers ask people what they want to talk about regarding AI harms.
Seismic Foundation conducted a large public survey called "On the Razor's Edge" last summer, asking participants about 25 different AI-related issues. They measured not just concern levels, but salience: how much people actually prioritize these topics compared to everything else they care about.
According to Trippenbach, across all demographic groups (different ages, genders, and political affiliations), extinction risk ranks 24th out of the 25 issues measured; only one issue ranked lower.
This is the core tension: the AI safety movement has been leading with its most important message while research suggests that's not the message people are receptive to.
"The case that we have is not 'can we make people pay attention to AI,'" Trippenbach explains in the episode. "People are paying attention to AI. We don't need to make people worried about AI. People are worried about AI. But what we need is we need to bring people to a place where they consider it of sufficient importance to outrank other things that they care about."
Why the Advertising Industry Should Run Communications
One of Trippenbach's central arguments is that the AI safety movement has historically excluded exactly the people who should be running its communications strategy.
"It's been a point of frustration for me," host John Sherman agrees, "that the people running AI safety communications are not ad people or strategic communications people at all. They are physicists, cosmologists, and academic researchers generally."
Trippenbach compares this to asking ad people to do physics research. "Do you know how much physics or cosmology or research you'd get done if you had ad people doing that work?" he asks. "So why would you have public engagement and communication strategy done by researchers and academics?"
The irony is compounded by an ideological commitment within parts of the effective altruism and rationalism movements to view advertising itself as impure, coercive, and false. According to Trippenbach, "If you don't believe in something, you are rarely any good at it."
This philosophical aversion to advertising has created a handicap. The movements most concerned with communicating AI risk have actively rejected the tools and expertise that could make that communication effective.
The Real Problem: Salience, Not Awareness
The research reveals a gap between concern and priority. Majorities of people across countries express worry about AI. They want it regulated. They're concerned about jobs and children.
But there's a difference between being worried about something and actually caring enough to change your behavior or influence policy.
Trippenbach measured this through the lens of "salience": how high a topic ranks in people's personal priorities. In their survey, war and terrorism ranked at the top. AI? 24th out of 25.
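The distinction between concern and salience can be sketched with a toy ranking calculation. This is a hypothetical illustration, not Seismic's actual data or methodology: the issue names and ranks below are invented for the example.

```python
# Toy sketch: measuring salience as where an issue lands when respondents
# rank it against everything else they care about (1 = highest priority).
# An issue can score high on standalone concern yet still rank near the
# bottom when forced into competition with other priorities.
responses = [
    {"war": 1, "jobs": 3, "ai_extinction": 24},
    {"war": 2, "jobs": 1, "ai_extinction": 25},
    {"war": 1, "jobs": 4, "ai_extinction": 23},
]

def mean_rank(issue):
    """Average priority rank across respondents; lower = more salient."""
    ranks = [r[issue] for r in responses]
    return sum(ranks) / len(ranks)

# Sort issues by mean rank to get the aggregate priority ordering.
ordering = sorted(responses[0], key=mean_rank)
print(ordering)  # most salient issue first
```

Under this toy scheme, "war" tops the list while "ai_extinction" sits at the bottom, even if every respondent would separately say they are worried about AI.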
This matters practically. In a midterm election year, when voters are deciding which five issues matter most, AI doesn't make the cut for most people. Legislators listen to constituent pressure. If constituents aren't making AI risk a priority, policy reflects that.
According to Trippenbach, tech companies currently spend about $400,000 per day on lobbying efforts in Washington, much of it fighting regulation. Without visible public demand, this anti-regulatory pressure faces no counterweight.
Google Trends Tells a Different Story
Interestingly, the research reveals that public attention is shifting rapidly in ways that weren't captured by Seismic's survey.
Trippenbach mentions that Google Trends data for searches combining "AI" and "jobs" increased tenfold between 2024 and 2025, and is on track to go higher in 2026. This suggests the salience issue is changing-though not necessarily in the direction the safety community would prefer.
The public is waking up to AI, but the concern that's gaining attention is economic impact and employment, not extinction risk.
Audience Segmentation Over One-Size-Fits-All Messaging
One of advertising's core lessons is that you cannot send the same message to everyone at all times. The AI safety movement has largely ignored this principle.
According to Trippenbach, effective communication requires understanding your audience and reaching them where they are. "Not everybody sees things the way we do," he notes. "You and I and most of the people listening have come into this issue in a particular way. But we are in a very small minority."
This means sophisticated "marketing orchestration." Some audiences have five seconds of attention. Others might engage for an hour. Some audiences care about jobs. Others worry about healthcare. Some haven't thought about AI at all yet.
The metaphor of front doors, side doors, and back doors captures this: there are multiple legitimate ways into a conversation about a topic, and the path you choose depends on who you're trying to reach.
"You can't say the same message to all the people all the time," Trippenbach emphasizes. "If nothing else, because sometimes people have five seconds, or sometimes you've managed to interest them enough that now you can talk to them for an hour."
The Deeper Issue: Activation Over Awareness
The ultimate goal of AI safety communications isn't to make people aware of risk. It's to activate them: to move them from passive concern to active engagement.
According to Trippenbach, political science shows us that "unless there is sustained and visible public demand for a policy, policy tends to get made behind closed doors." In the current environment, those closed-door conversations favor deregulation.
Getting good policy enacted requires activating the public. But activation requires finding the right message for the right audience at the right time.
This is where the movement's philosophical aversion to advertising becomes genuinely costly. Professional communicators know how to segment audiences, test messages, and orchestrate campaigns. Researchers and academics, however brilliant, often do not.
What Gives Hope
Despite the research suggesting extinction risk messaging is ineffective, Trippenbach remains committed to building awareness.
The key insight is that awareness, regulation, and policy don't need to come from everyone. They need to come from enough people, organized effectively, creating enough sustained pressure to shift the political calculus.
The good news: more members of Congress are aware of AI extinction risk than most people realize. Public concern is growing. Data centers are becoming a local political issue as communities resist new construction.
The challenge: connecting that growing awareness to an effective public demand signal that reaches legislators and shapes policy.
The Path Forward
The research from Seismic Foundation suggests the AI safety movement needs to professionalize how it communicates: segment audiences, test messages against what actually moves people, and let experienced communicators lead strategy.
As Trippenbach puts it, effectiveness should be the measure. "The most important thing is that we are communicating to people to effect some sort of change, right? We want to change the way people think and ultimately we want to change the way people act."
If the AI safety movement wants policy change, it needs better communication. And that requires letting the experts in communication actually lead.
Learn More: Visit https://safe.ai/act to engage with AI safety efforts and stay informed about policy conversations happening right now.
The AI Risk Network team