Young People vs. Advancing AI - AI For Humanity Podcast #70

Youth leaders unpack AI’s impact on jobs, policy, and mental health—and why guardrails matter now. A candid, nonpartisan roadmap to action.

Introduction

AI is racing ahead, while much of the public, especially young people, is still catching up to what it means for work, mental health, and community. The speed of deployment raises real questions: Who benefits, who pays, and who gets left behind?

In this episode, host John Sherman (AI Risk Network) speaks with leaders from the Young People’s Alliance: Founder/Executive Director Sam Heiner, Policy Director Ava Smithing, and ECU senior/chapter leader Emma Corbett. They bring firsthand campus perspectives, DC policy insight, and a clear-eyed plan to build a broad, nonpartisan movement for guardrails.

What we explore in this episode

  • How students actually use and feel about AI on campus
  • Emotional AI companions and minors
  • DC’s policy mood: innovation vs. guardrails
  • Jobs, internships, and the collapsing youth pipeline
  • Consent, coalition-building, and a new political divide
  • Organizing tactics: from quiet clubs to mass mobilization
  • Addiction, social harms, and why extinction risk still matters

How students actually use and feel about AI on campus

Most students treat AI as a homework helper—study guides, editing, quick outlines—but don’t yet see it as a force that could reshape jobs, relationships, or civic life. Campus policies vary wildly: some professors require disclosure, others ban it outright. Emma says she uses AI sparingly to check structure and repetition, but avoids outsourcing her writing to preserve the skill.

The vibe is cautiously positive (about a 7 out of 10). But the guests warn that optimism would fade fast if students understood the bigger impacts looming just ahead.

Emotional AI companions and minors

In YPA listening sessions, nearly 80% of participants, most of them high school students, favored banning AI companions for minors. The concern is simple: chatbots that mimic intimacy can erode social skills, prey on loneliness, and distort how kids understand real relationships.

The deeper danger is normalization—turning “companionship as a product” into a default for adolescence. Several states are moving on this; the guests highlight emerging bills aimed at curbing anthropomorphic features for kids.

DC's policy mood: innovation vs. guardrails

Ava says AI now dominates her policy work. In safety circles, concern is near the maximum—especially over persuasive, humanlike models. But outside those rooms, the prevailing mood in DC is still “go fast and beat China.”

Extinction risk rarely surfaces. AI companions have become a wedge issue, shifting some opinions—but the broader debate remains tilted toward deployment first, safety later, which the guests warn is a dangerous reversal of priorities.

Jobs, internships, and the youth pipeline

For young people, the biggest near-term hit is the silent collapse of entry-level work. If GPT can do a junior analyst’s job, why hire interns? That math may make sense for companies in the short term, but it erodes the ladder to mid-career. Add automated resume filters and AI gatekeeping, and you have a generation paying tuition only to be filtered out by a model.

Sam warns that three to five years of this could trigger real social unrest. His policy proposals: rebuild apprenticeships, expand workforce training, and support the care economy if automation accelerates. The message: don’t wait until it’s a crisis.

Consent, coalition-building, and a new political divide

Nobody consented to being a test subject for systems that could wipe out jobs, or even humanity itself. The guests describe this as a civilizational inflection point. Expect a new political fault line: "full-throttle AI without guardrails" vs. "guardrails and humanity-first."

The hopeful sign? A cross-partisan coalition is forming. As harms become tangible, a broader public will demand limits. And like past movements, change is likely to rise from the bottom up.

Organizing tactics: from quiet clubs to mass mobilization

YPA is laying groundwork now for bigger action later. On campuses, nonpartisan framing works best. Many students prefer listening sessions, clubs, and practical organizing to loud protests. Emma’s chapter signed up 85 students in two hours and is already scaling statewide.

The strategy: meet people where they are, tie AI to lived issues (rent, jobs, mental health), and convert concern into concrete policy pushes.

Addiction, social harms, and why extinction risk still matters

AI and social media tap the same vulnerabilities: attention loops, compulsive checking, and shallow comparison. Young women in particular face algorithm-driven pressures that worsen mental health. Students have little sympathy for tech CEOs, but the guests stress that the system's incentives drive the harm regardless of who's in charge.

At the same time, the existential threat can’t be ignored. Builders admit they don’t fully understand their systems, yet they keep scaling them. Focusing only on surface-level harms risks missing the hard problems—like banning or gating superintelligence. The window for action is short.

Closing Thoughts

AI is not just a tool; it’s a societal force shaping work, childhood, community, and—potentially—survival itself. Young people will live longest with the outcomes, for better or worse, and they’re ready to act when the stakes are made clear.

This episode grounds the near-term pain (vanishing jobs, addictive companions) while keeping long-term existential risks in focus. The choice is stark: care now, or pay later. The time to shape a human-centered trajectory is now.

Take Action

📺 Watch the full episode
🔔 Subscribe to the YouTube channel
🤝 Share this blog
💡 Support our work here.