What We Lose When AI Makes Choices for Us | For Humanity 76

Author Jacob Ward joins John Sherman to discuss how AI exploits our "fast thinking" brain and the risk of a "decisional extinction" in the age of algorithms.

Written by The AI Risk Network team
The Decisional Extinction: What We Lose When AI Makes Our Choices

In the latest episode of the For Humanity podcast, host John Sherman sits down with journalist and author Jacob Ward to discuss a profound and often overlooked risk of artificial intelligence: the erosion of human decision-making. Ward, author of The Loop, argues that while the public often focuses on distant extinction scenarios, a more immediate "extinction" is already underway—the loss of our ability to make good decisions for ourselves.

The Google Maps Effect on the Human Brain

Ward suggests that AI's impact on our decision-making follows a pattern similar to what Google Maps did to our sense of direction. As he explains, our brains are "energy conservation machines" that naturally seek to offload cognitively demanding tasks.

Ward explains this using the "dual process theory" of the brain:

  • The Fast Thinking Brain: An ancient, instinctive system, millions of years old and shared with other primates. It operates on autopilot, much like when we drive a familiar route without any conscious memory of the trip.
  • The Slow Thinking Brain: A newer, rational, and creative system that is "glitchy" and requires significant effort.

Ward argues that AI systems are designed to play directly to our "fast thinking" brain, encouraging us to remain on autopilot indefinitely. This, he says, poses a significant risk to our ability to be the best versions of ourselves.

Why This Isn't Just Another "Calculator"

While some proponents of AI compare the technology to the introduction of the calculator or the printing press, Ward suggests these historical references may be misleading. Unlike a calculator, which processes raw data to reach a neutral truth, AI systems are "vacuuming up all of our weirdness, all of our biases" and regurgitating them as authority.

Ward warns that because these systems are built on profit-driven incentive structures, they are designed to maximize engagement by appealing to our most impulsive, instinctive choices.

The Problem with "Universal" AI Values

A central challenge in AI risk is the alignment problem—ensuring AI shares human values. However, Ward argues that building a general intelligence that shares all human values is "unrealistic" because human values themselves are deeply relative and often contradictory.

He shares an anecdote about early researchers at OpenAI who grappled with the "heroin problem": if a perfect AI assistant is designed to anticipate a user's needs, and that user is addicted to heroin, should the AI help facilitate the addiction or intervene against the user's immediate wishes? This dilemma highlights the difficulty of programming a machine to navigate the complexities of human morality.

Conclusion: Fighting for Agency

The conversation concludes with a call to recognize that the way AI is deployed is a business choice, not an inevitability. Experts warn that as AI becomes more persuasive and integrated into our lives, maintaining our "slow thinking" agency will require conscious effort and rigorous regulation.


Act now: https://safe.ai/act