A leaked draft executive order reveals the Trump administration is preparing to challenge state AI laws as unconstitutional, condition federal funding on state compliance and deploy the DOJ to litigate against non-conforming states.
The industry broadly agrees on the need for federal policy to preempt a patchwork of state laws. But not all companies speak with the same voice.
The draft is a bold move towards halting state action without a national framework in its place. Supporters argue that a unified federal approach is necessary to protect U.S. competitiveness, prevent compliance costs from ballooning across fifty jurisdictions and keep pace with global rivals accelerating their own AI strategies.
Earlier this summer, a proposed moratorium on state AI legislation was dropped from the One Big Beautiful Bill Act, the Republican budget reconciliation package. The Senate was unable to unify support, facing opposition from prominent MAGA Republicans like Senator Josh Hawley, who recently stated that “as a republican who believes in federalism I think it’s a strange argument for some republicans to make in this building that all of a sudden we should say to the states ‘no actually you shouldn’t do anything.’”
In July, the AI Action Plan set the direction for an accelerationist vision—the belief that AI development should proceed as fast as possible with minimal regulatory friction. It framed AI progress as a geopolitical race for economic and national security dominance, echoing preferences in the Silicon Valley venture community while diverging from civil society voices calling for a more balanced approach.
What Is In The Draft
The draft order seeks to block state-level AI laws and create a minimally intrusive national standard. The administration argues that state laws, such as California’s Transparency in Frontier Artificial Intelligence Act, impose harmful compliance burdens and undermine U.S. competitiveness. The order resurrects several elements of the failed moratorium proposal.
The order directs the creation of a DOJ AI litigation task force to challenge state AI laws as unconstitutional if they regulate interstate commerce. Its reach would extend well beyond California. Colorado, Illinois and other states have already enacted AI laws covering employment decisions, consumer disclosures and algorithmic accountability. If the DOJ task force proceeds, these laws could face federal legal challenges.
It commissions a Commerce Department review to identify state AI laws that conflict with the administration’s AI policy and refer them to the DOJ task force. The goal is to flag laws that require models to modify outputs in ways the administration terms “woke,” or that compel disclosures violating First Amendment protections.
The order directs Commerce to outline funding restrictions for non-compliant states. Federal agencies must review discretionary grants and consider conditioning awards on states refraining from enacting or enforcing conflicting laws during any year in which they receive funding.
The FCC would create a national disclosure standard to preempt state laws. The FTC would apply its authority over unfair and deceptive practices to specify when state laws requiring output alterations are prohibited.
Lastly, the order directs David Sacks, the special advisor for AI and crypto, and James Braid, director of legislative affairs, to draft legislation establishing a federal regulatory framework. Critics see the move as the administration caving to the Silicon Valley lobby.
What The Industry Wants
Not all of the industry thinks like the accelerationists in the Valley.
Traditional enterprise technology companies strike a more careful balance. Firms like Microsoft and IBM support innovation but stress that trust and accountability are essential for widespread adoption. In its AI Action Plan input, Microsoft emphasized support for strong standards. IBM highlighted clear roles for developers and deployers and called for controls on high-risk uses. Together, these companies argue that federal standards can reduce fragmentation while building the trust vital to long-term credibility.
Anthropic is one of the most vocal proponents of safety and national security regulation. The company supports unified federal AI regulation, opposes blanket moratoriums on state laws and has backed state safety measures while federal action lags. Sacks, speaking on X earlier this year, accused Anthropic of carrying out a “sophisticated regulatory capture strategy based on fear-mongering.”
Where AI Policy Goes From Here
The administration and its allies are moving quickly into the policy vacuum Congress left open. Their approach offers real benefits: reduced fragmentation, clearer industry incentives and reinforcement of the innovation culture that has fueled American entrepreneurship.
Yet sustained growth will require something more. Markets need trust to avoid the boom-and-bust cycle that defined past technology waves. Alongside federal action, modern governance alternatives such as dynamic oversight models and independent verification organizations can help build that trust, allowing rules to adapt quickly while checks come from trusted third parties.
The administration’s aggressive push to nullify state AI laws should remind Congress that only legislation grounded in democratic debate can provide a stable framework that lasts. If signed, the order would mark the most aggressive federal intervention into state tech regulation in decades and set up an inevitable court battle over the limits of executive power. Congress, meanwhile, has shown little appetite to act. The question is whether that changes before the states and the White House collide.
