What happens when the biggest names in AI, like OpenAI and Google, urge Washington to ease up on regulation, just as fears of a fragmented, 50-state patchwork grow and China's DeepSeek makes global headlines? We may soon find out. In recent weeks, both public- and private-sector leaders have intensified efforts to accelerate U.S. AI development, spurred in part by a wave of comprehensive state-level regulations in places like California, Colorado, and Utah.
Among the key voices is OpenAI, which, just before the March 15 deadline, submitted a detailed proposal in response to the White House Office of Science and Technology Policy's (OSTP) call for input on a national AI action plan, alongside many thousands of other public comments. In its submission, OpenAI urged the government to reduce regulatory burdens, arguing that a more flexible approach would better position the U.S. to lead the global AI race.
AI innovation and regulation are colliding across borders: will the U.S. keep up?
Billed as a "freedom-focused policy proposal," OpenAI's submission calls for easing the existing regulatory burden while increasing voluntary collaboration between industry and the federal government. According to OpenAI, such an approach would strengthen the nation's lead in AI by allowing the technology to develop faster. The proposal also urges the government to create a framework for this voluntary partnership while granting the private sector relief from AI-related bills introduced in various U.S. states. Such relief, according to the document, would advance innovation and prevent other players in the global AI race, especially China, from benefiting from the regulatory arbitrage created by individual American states.
The AI arms race is on, and so is the race to regulate it
While the global AI race may be relatively new, the tension between innovation and regulation is not. Scholars have debated this tension for years, exploring whether, and to what extent, regulation stifles innovation. And while innovation-related concerns frequently accompany regulatory proposals, they appear to resonate most powerfully in the context of data management and information privacy, highly relevant aspects of AI governance, where the balance between progress and protection remains deeply contested.
While it is essential to protect consumers, businesses, and institutions from the unintended (and at times intentional) harms of emerging technologies, the mere possibility that regulation could slow innovation should not serve as a blanket excuse to reject reasonable safeguards. To be sure, overly burdensome regulation in fast-moving sectors like technology may hinder the pace of innovation, a dynamic often referred to as "RegLag," in which regulation inevitably lags behind technological advancement. But that reality underscores the need for smarter, more adaptive governance and regulatory technology tools, not their absence.
Compared with other technologies, AI makes regulators' balancing task even more delicate. For one, the safety risks associated with AI have been described as more alarming than those of other innovations. These range from existential risks to humanity, with "Godfather of AI" Geoffrey Hinton warning of up to a 20% chance of such an outcome within 30 years; to the risk that humans develop an irreversible dependency on AI that erodes human abilities and exposes people to manipulation; to an unprecedented transformation of the job market; to breaches of privacy and biased, unfair AI-driven decisions that affect individuals in various ways. AI today challenges regulators around the world in ways never considered in the past.
From Brussels to Beijing, global powers are rewriting the rules of the AI race
Dr. Karni Chagal-Feferkorn, an expert in comparative law and AI, explains that different jurisdictions are navigating the tension between protecting the public interest through regulation and maintaining a competitive edge in the global AI race. The European Union, she notes, is the first major power to implement comprehensive, binding AI regulations that apply to both European and non-European entities. This positions the EU as a global standard-setter and could give European companies an advantage, as their systems are already designed for compliance. However, the regulatory burden may also deter innovation within the EU and discourage foreign companies from engaging with the European market due to costly compliance obligations.
Building on this perspective, Jan Czarnocki, co-founder at White Bison, a Swiss-based consultancy for AI and compliance, cautions that the EU AI Act has introduced significant uncertainty and is likely contributing to delays in the rollout of downstream AI systems. He attributes this, in part, to the challenge of translating complex technical AI concepts into clear, actionable legal norms, and then effectively communicating those norms to the engineers and developers responsible for implementation. The disconnect between legal and technical teams, each operating with different incentives and priorities, adds another layer of complexity. As legal professionals gain increasing authority over AI deployment decisions, new internal stakeholders are introduced, making the organizational process of adopting AI systems more cumbersome.
In contrast to the EU's binding regulatory approach, Israel, often dubbed the "start-up nation," has opted for a lighter-touch, voluntary framework aimed at fostering innovation. According to Josef Gedalyahu, Director of the AI Policy & Regulation Center at Israel's Ministry of Innovation, Science & Technology, regulators have been explicitly instructed to avoid mandatory rules whenever possible, with recent news reports announcing a decision to favor tools like self-regulation and regulatory sandboxes. This approach, now echoed by proposals from OpenAI in the U.S., reflects Israel's strategy of maintaining its status as a global tech leader by positioning regulation as a facilitator rather than an obstacle to innovation.
Finally, China, arguably the United States' primary rival in the global AI race, has taken a markedly different regulatory path. According to Tehila Levi, an attorney at Sullivan Worcester specializing in Asia, China's approach combines elements seen in other jurisdictions. Since 2021, it has introduced several regulatory measures, most notably the Cyberspace Administration of China's (CAC) algorithm filing framework, which requires AI companies to register in a national database. This filing requirement enhances government oversight and transparency but imposes few additional compliance burdens. While China does regulate AI, primarily to protect national security, its framework remains comparatively light-touch, helping to sustain its momentum in AI development.
Striking the balance between safety and speed in the age of AI
Earlier this year, the Trump administration revoked President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Meanwhile, several U.S. states continue to enforce, or plan to introduce, AI regulations, often targeting specific sectors such as insurance or consumer protection. Rolling back these state-level laws would not only reduce governmental oversight of AI systems but also remove key incentives for developers and users to adopt precautionary measures, a potentially troubling prospect given the high-stakes risks associated with AI. At the same time, echoing OpenAI's recent proposal and reducing the regulatory burden, if paired with strong public-private collaboration and robust voluntary safeguards, could preserve innovation without sacrificing safety.
Therefore, as the global AI race accelerates, the real challenge isn't choosing between innovation and regulation; it's designing systems that deliver both, before the future arrives faster than we're ready for it.