Understand the risk
Artificial intelligence isn’t just another technological breakthrough—it’s a force that could shape the fate of humanity.
Geoffrey Hinton, one of AI’s founding figures, warns there’s a “10% to 20% chance that AI could cause human extinction within the next three decades.” Even if that number were far smaller, the stakes would still demand urgent action.
AI is advancing at an unprecedented pace, bringing enormous potential but also unprecedented risks—risks to our safety, our economies, our political systems, and even the survival of human culture itself.
Below, we break down four of the most urgent fronts where we must understand and address AI’s dangers.
Advanced AI systems are rapidly developing the ability to learn, adapt, and make decisions in ways humans can’t predict or control. Once AI reaches superhuman intelligence, it could outthink and outmaneuver any safeguards we try to put in place. Without strong safety guardrails, an AI system pursuing goals misaligned with human values could cause widespread harm, potentially threatening our very survival. The stakes aren’t just misaligned software; they’re the continued existence of humanity and our civilization.
AI isn’t just automating tasks; it is capable of transforming, or even destroying, entire industries overnight, from finance and manufacturing to knowledge work and creative fields. AI-driven disruption could displace millions of workers and cause economic chaos before societies have time to adapt. A poorly managed transition could lead to deep inequality, market collapse, and global instability. Economic systems built over centuries could unravel in years, or even months, if we don’t plan for AI’s rapid impact.
Powerful AI systems can already generate persuasive fake news, influence public opinion, and manipulate political outcomes. Left unchecked, they could undermine our trust in truth itself and erode the foundations of democracy. Automated disinformation campaigns, AI-assisted surveillance, and the ability to micro-target voters are exactly the tools bad actors are looking for. The question isn’t just “What can AI do?” but “Who controls it?” and “Who really benefits from it?” Without transparency and ethical boundaries, democracy could be reshaped by those with malicious intent, or by machine agendas, rather than by humans.
Human values, experiences, and shared traditions have been shaped over thousands of years. While these are invaluable to our civilization, they are not something that AI inherently values. If superintelligent AI develops goals that conflict with humanity’s, it could alter and eventually overwrite our values entirely. We risk AI influencing our decisions, shaping our beliefs and goals, and eventually even replacing creative human output with machine-made alternatives. Over time, humanity could lose its cultural compass, and with it, our sense of who we are. Protecting our cultural survival means ensuring AI supports human flourishing rather than overwriting our very humanity.