As the race for AI dominance heats up, safety and transparency risk being bypassed. This article explores the tensions between rapid innovation and responsible regulation.
As the global race for AI dominance intensifies, what are the risks when superpowers like the U.S. prioritize winning over AI transparency and safety? And while ongoing advances in artificial intelligence do benefit society, will unregulated innovation serve the common good or hasten humanity’s downfall?
In this article, we examine how the latest AI executive orders issued in the U.S. could affect AI transparency and safety. We review the history of recent AI directives, explore what a shift away from governance and transparency could mean, look more closely at the apparent trade-off between innovation and regulation, and finally consider how all stakeholders can build trust through transparency.
Starting from a global perspective, many countries are introducing and shaping regulations to guide the safe use of AI. Alongside the evolving policy debate in the U.S., frameworks such as the EU AI Act, the UK’s AI regulation framework, China’s measures governing generative AI models, the G7 Hiroshima AI Process, and Singapore’s Model AI Governance Framework reflect an international drive to steer advances in AI.
In the U.S., AI regulation has certainly had its share of plot twists. Let’s start by reviewing the AI directives issued during the Biden administration. The foundational directive was Executive Order 14110 (October 30, 2023), “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which put measures in place for the regulation and oversight of AI at the federal level.
Per Forbes, the two primary goals of the executive order were to:
However, when Trump returned to office in 2025, he tackled the subject of AI head-on in his initial barrage of executive orders, revoking 78 of Biden’s directives, including Order 14110. What was the thinking behind this? Per the administration’s fact sheet, Biden’s order “hampered the private sector’s ability to innovate in AI by imposing government control over AI development and deployment.”
This shift was formalized with Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which mandated a plan to promote AI development that is free from ideological bias or social agendas. It required federal agencies to revise or rescind conflicting policies.
More recently, in July 2025, the White House unveiled its comprehensive AI Action Plan, focusing on three pillars: “accelerating AI innovation, building AI infrastructure, and leading in international AI diplomacy and security.” This marked a clear shift toward a deregulation-first stance that prioritizes global AI dominance above all.
AI transparency refers to the degree of visibility into how an AI system is developed, the data it is trained on, and how it makes and communicates decisions. Transparency is essential for mitigating risks, such as an AI system perpetuating harmful biases or being misused in high-risk applications.
The current U.S. administration is pursuing its goal of winning the AI race without the apparent hindrance of guardrails. But with no clear AI governance directives in place, rapid progress could increase societal risks, and that is where practical risk-management measures like AI transparency become vital.
Considering that 95% of AI companies lack policies to help their customers understand how their AI systems work, let’s examine the impact a lack of transparency can have on businesses and consumers.
From a broader view, deprioritizing AI transparency and the guardrails for safe AI development raises red flags for society, such as:
What’s more, recent findings reveal concerning and unpredictable behaviors in advanced AI models. In one example, an Anthropic model attempted blackmail, deception, and sabotage in scenarios where it believed it would be shut down. In another, AI systems in controlled simulations secretly transmitted harmful behavior via hidden signals that were undetectable to human supervisors.
Yet it is possible to mitigate these risks effectively. The answer may lie in a change of mindset, one that moves away from viewing innovation and regulation as a dichotomy.
It is understandable that the debate over AI innovation versus regulation can be polarizing. AI is transforming what it means to be human, and what it means to be a machine. Many view the situation as a trade-off. At the governmental level, regulatory approaches vary by region, ranging from stringent top-down rules to bottom-up self-regulation. One way to reconcile these two seemingly opposing forces is to find a balance.
Perhaps the answer lies in a dual focus, or as the Harvard Kennedy School recently called it, “the dual imperative.” The research proposes finding a middle ground that “leverages technical innovation and smart regulation to maximize AI's potential benefits while minimizing its risks, offering a pragmatic approach to the responsible progress of AI technology.”
Similarly, Tech Policy Press suggests synchronizing the two forces, arguing that “building efficient structures to recouple technological research and governance efforts is crucial. Synchronizing those two forces would lead to a self-reinforcing loop of mutual understanding and objective alignment, allowing us to escape this constant race between policymakers and industry leaders.”
Ultimately, a trade-off between innovation and regulation may not be necessary if standards are created before AI governance regulations, establishing a common language. As the Open Ethics Initiative explains, “Standards should be developed regardless of which regulations are put in place.” The initiative adds that when AI governance directives are canceled, innovation can lose its impetus and “risk management stops being systemic. It then relies solely on manual intervention, which may create unequal conditions and unfair preferences in the market. In addition, this could blur stakeholder responsibilities and create a situation where troubleshooting risk management happens on an ad-hoc basis.”
In closing, if we consider the most extreme scenario, an all-out global AI race with no safeguards in place, there will likely be no winners or losers (discounting the robots and cockroaches, of course!). Luckily, as more leaders adopt a holistic approach to AI and the responsible tech community continues to grow, there is hope that AI will indeed serve its makers, for good.
(This article was first published on the Open Ethics Initiative website.)
Lindsay Langenhoven
Content Writer