A detailed breakdown of Esben Kran’s insights on AI-driven warfare in Ukraine, the risks of autonomous weapon escalation, and why international safety standards are falling behind military innovation.
In this episode of For Humanity, John Sherman speaks with Esben Kran, founder of Merit AI and an analyst focused on AI safety and military technology. Kran offers a detailed look into how autonomous drones, algorithmic targeting, and rapid battlefield automation are transforming modern conflict — particularly in Ukraine. The discussion explores why he believes current AI weapons trends pose serious long-term risks, how escalation dynamics can outpace human control, and why global safety standards remain far behind technological development.
Esben Kran begins by examining how Ukraine has become a real-time testbed for AI-enabled warfare. According to Kran, low-cost autonomous drones are now deployed at unprecedented scale: thousands of units coordinating strikes, identifying positions, and adapting faster than human-directed command systems can respond.
While these tools can offer tactical advantages, Kran notes that the speed and autonomy they enable also reduce opportunities for human intervention. He emphasizes that real-world conflict demonstrates how quickly automated systems can amplify destruction once deployed at scale.
Kran highlights a central concern shared by many AI safety researchers: automated systems can escalate conflict unintentionally. He explains that machine-driven targeting models may misclassify threats or behave unpredictably under battlefield pressure. In environments where seconds matter, even small errors can trigger significant retaliation.
Kran warns that adversaries may also feel pressure to match or exceed autonomy levels used by opposing forces — creating a technological “race dynamic” where safety testing becomes secondary to military necessity.
Throughout the conversation, Kran points out that AI is shifting decision-making power away from humans and toward increasingly complex models. He argues that as these systems gain more operational authority, militaries may rely on tools they do not fully understand.
Kran stresses that this is not only a Ukrainian issue. Nations around the world are exploring automated battlefield systems, and the lessons from Ukraine may accelerate international adoption without equivalent investment in oversight.
Kran concludes by outlining why stronger international norms and verification mechanisms are needed. He believes that without enforceable standards, states will continue experimenting with increasingly autonomous weapons, each iteration more capable and less interpretable than the last.
His message echoes a broader theme in the AI safety community: once deployed widely, automated weapons may shift global security risks in ways that are extremely difficult to reverse.
Esben Kran’s perspective offers a clear and grounded look at how AI-enabled warfare is evolving in real time. While the technology brings tactical benefits, Kran argues that its rapid development — paired with minimal oversight — raises long-term global security concerns. His analysis underscores a growing need for transparency, international coordination, and public awareness as AI becomes more deeply integrated into military planning.
Learn more or take action: https://safe.ai/act