Living with Near-AGI: Incentives, Agents & Healthy Use - AM I? #8

Near-AGI is creeping into daily life. This episode explores incentives, agents, risks, and healthy use — with blunt guidance and a timeline for what’s next.


AI is becoming weirdly capable — fast. The incentives driving it aren’t always aligned with us, and that combination has real consequences for your attention, your work, and society itself.

In this episode, our host offers a grounded take: how they personally use AI day-to-day, what already feels like “near-AGI,” and how to stay sane while the landscape shifts. Expect clear talk, not doom-bait.

What we explore in this episode

  • Rethinking your information diet with AI
  • LLMs as search — and the “grandma interface”
  • The weird corners of AI culture vs. scientific rigor
  • Social media as the first misaligned AI system
  • Near-term AGI and the next paradigm shift
  • Intelligence explosion & acceleration pressures
  • Healthy AI use, incentives, and the open questions

Rethinking the information diet with AI

Twitter/X warps attention through engagement incentives. Replacing it with AI as a lens over the web and research papers feels cleaner, though not frictionless. Boundaries still matter: the host eventually stopped using AI for email, and once dreamt of chatting with models. That says something about the pull.

AI can be a sharper filter than infinite feeds if you use it with discipline. Treat it like a workbench, not a slot machine.

LLMs as search and the “grandma interface”

Today’s models often outperform Google for everyday tasks. The real magic? Natural language. It lets anyone — even your grandma — issue complex queries without knowing Boolean logic.

But power has edges. Stress-testing models is healthy, though not every psyche handles heavy exposure well. Know your limits; push them carefully.

The weird corners of AI culture vs. rigor

Communities are already cataloging model "aliveness" and recursive prompting oddities. It's fascinating, and sometimes unhinged. Without data to back it up, the vibe can drift into the delusional.

The host urges a balanced approach: stay curious, keep receipts, and always separate vibes from evidence.

Social media as misaligned AI (lessons for alignment)

Engagement algorithms radicalize because outrage is profitable. The Social Dilemma showed this clearly. Incentives shape outputs — and can corrode society.

For AI alignment, the lesson is blunt: if incentives aren't tied to human flourishing, the system will optimize against us.

Near-term AGI and the next paradigm shift

Language models already feel “general.” One paradigm shift could tip us into human-level-plus capabilities — possibly within 2–4 years. The unlock may not be just scale but reasoning and agency.

Imagine agents that run your inbox, calendar, or workflows without hand-holding. That changes productivity — and identity — fast.

Intelligence explosion and acceleration pressures

I.J. Good’s feedback loop — AI designing better AI — could compress timelines drastically. Labs may see breakthroughs months before the public even notices.

Don’t count on a slowdown. Market forces are a tailwind, and LLMs already help researchers make novel discoveries. Progress wants to happen. The challenge is shaping it.

Healthy AI use, incentives, and the open questions

AI is helpful by default, but not designed for your flourishing. Watch for sycophancy, rabbit holes, and subtle rewiring. Push past default personas. Notice your prompting biases. Use AI to expand your boundaries — but dose it wisely.

For companies: maximize benefit, minimize harm. Hiding isn’t a strategy. Build awareness (shout-out to efforts like the AI Risk Network).

Open questions remain:

  • What’s the next paradigm — reasoning, agency, or something else?
  • Who controls agents, and under what constraints?
  • How do we align incentives with human thriving?
  • Will meaningful human–AI integration happen (via neurotech), and how?
  • What’s the “user manual” for healthy AI use?

Closing Thoughts

This moment is accelerating. Today’s tools are already reshaping work, attention, and identity. The leap to autonomous agents could land sooner than anyone expects. Incentives will decide whether that leap empowers us — or undermines us.

You’re in the blast radius: your time, your focus, your choices. Use AI intentionally. Demand better incentives. And help write the user manual for healthy human–AI coexistence.

Take Action

📺 Watch the full episode
🔔 Subscribe to the YouTube channel
🤝 Share this blog
💡 Support our work