Artificial intelligence has ignited a $150 million political battle over federal preemption. Congress must soon decide whether to include preemption language in the National Defense Authorization Act while the White House weighs an executive order that could override state rules. Two coalitions are racing to shape the outcome. One side, backed by some of Silicon Valley’s largest investors, wants to block state oversight and establish a single federal framework. The other, funded by safety-focused donor networks, is fighting to preserve state authority if Congress can’t pass meaningful national standards. Each has built a structured network of Super PACs, donors and advocacy groups. Their battle is about who writes the rules, who enforces them and whether states can act at all.
Advocating Against Federal Preemption
Public First, a bipartisan initiative led by former Representatives Chris Stewart (R-Utah) and Brad Carson (D-Okla.), has launched two affiliated Super PACs to support candidates who promote stronger AI oversight. Stewart said the effort aims to ensure “meaningful oversight of the most powerful technology ever created.” The group expects to raise at least $50 million for the 2026 cycle.
In addition to the Super PACs, Public First’s nonprofit arm backs stronger export controls on advanced chips, transparency requirements for AI labs and state-level regulations that address risks to children, workers and the public. The group opposes federal preemption efforts that would block state progress without establishing meaningful national safeguards. Public First states that 97% of Americans desire AI safety rules.
Last year, in parallel, Carson cofounded the research think tank Americans for Responsible Innovation (ARI), where he still serves as president. It quickly became one of the most active public-interest organizations in the AI governance space. ARI's leadership includes Eric Gastfriend, a tech entrepreneur and co-founder, and a board featuring, among others, former Representative Tim Ryan (D-Ohio); economist Erik Brynjolfsson, who directs the Digital Economy Lab at the Stanford Institute for Human-Centered AI; computer scientist Stuart Russell of the University of California, Berkeley; and economist and legal scholar Gillian Hadfield of the University of Toronto, a former policy adviser to OpenAI.
ARI prioritizes protections against AI-enabled scams and risks to minors, national security threats, and long-term frontier-model risks, while also calling for expanded National Institute of Standards and Technology (NIST) funding. NIST is a federal agency that develops technical standards and testing methods for emerging technologies.
ARI positions itself as independent of industry, funded by its founders and donors aligned with effective altruism (EA) who focus on long-term AI risks. Critics argue that EA has used extensive funding to push overly restrictive regulations.
This coalition’s donor base is not dominated by Big Tech. Instead, it includes investors concerned about long-term risks and employees from safety-oriented labs, particularly Anthropic. The New York Times reported that Anthropic employees and executives have also explored political engagement, including discussions of a potential Super PAC to counter Leading the Future’s $100 million.
This group’s policy stance relies on state action while Congress remains stalled. The RAISE Act, written by New York Assemblymember Alex Bores (D), who recently announced his candidacy for Congress, represents this approach. It requires safety disclosures and risk assessments, and it imposes fines of up to $30 million for noncompliance. Bores’s campaign became a lightning rod for those who oppose AI regulation, making him the first formal target of one of the Leading the Future Super PACs.
California’s Transparency in Frontier Artificial Intelligence Act (SB53), now enacted into law, follows a similar pattern. Public First and its allies argue that states are operating as laboratories that reveal what works, provide early enforcement and supply evidence needed to shape a future federal law with meaningful protections.
Defending Deregulation And Federal Preemption
The opposing coalition has consolidated around Leading the Future (LTF), which launched first, in August. It operates through a multi-layered structure: federal and state Super PACs run independent expenditure campaigns supporting pro-innovation candidates in primaries and general elections, while nonprofit advocacy arms handle policy development, legislative scorecards, grassroots organizing and rapid response to opposition narratives. The network launched in New York, California, Illinois and Ohio and plans to expand nationally in 2026.
LTF is led by GOP strategist Zac Moffatt and Democratic operative Josh Vlasto. Their message: a patchwork of state laws will cost American jobs and cede AI leadership to China. During their launch, Moffatt and Vlasto told the Wall Street Journal that “There is a vast force out there that’s looking to slow down AI deployment, prevent the American worker from benefiting from the U.S. leading in global innovation and job creation and erect a patchwork of regulation.”
The network launched with $100 million from Silicon Valley backers, including Marc Andreessen, OpenAI cofounder Greg Brockman and the AI search company Perplexity.
Its purpose is to defeat candidates who support AI regulation and elect those who favor a federal framework aligned with industry interests. Its first target was Bores, whose congressional campaign is becoming the earliest example of AI regulation shaping electoral strategy.
LTF’s advocacy arm, Build American AI, started a $10 million national campaign calling for a unified federal law that would preempt state regulations. Nathan Leamer, its executive director and a former FCC adviser, posted on X that “the US won the Internet economy because we established a national framework for its proliferation. We should not allow for the balkanization of AI policy to hinder us.”
Beyond LTF, Big Tech companies such as Meta (Facebook’s parent company) have become an additional force in support of the deregulatory agenda. In August, Meta unveiled a California-focused Super PAC, Mobilizing Economic Transformation Across California, aimed at electing state candidates who support innovation over regulation. In September, Meta followed with a national PAC called the American Technology Excellence Project to back AI-friendly candidates in state races across the country. This mirrors the strategy used by the crypto industry, which proved that concentrated spending in state races can quickly reshape federal debates.
On the thought leadership side, the America First Policy Institute (AFPI), which is effectively the main policy and personnel hub for President Trump’s political movement and his current administration, unveiled its America First AI Agenda. It emphasizes widespread AI adoption for economic prosperity, worker-centric growth through high-paying manufacturing jobs, protecting children from AI dangers and defending against foreign adversaries. The agenda calls for streamlining state-level permitting approvals and repealing state laws that impose regulatory overhead as part of a drive in support of energy abundance.
AFPI’s AI team is led by Chris Stewart in a separate role from his leadership at Public First. This dual position reflects a deeper split inside the Republican Party. Stewart’s role at Public First aligns him with national security conservatives who support AI safeguards. His AFPI role connects him with pro-business conservatives who favor rapid innovation, a national framework and federal preemption of state laws.
Dean Ball, a former senior policy advisor for AI at the White House and now part of the AFPI AI team, stated that “At least some aspects of AI are inherently matters of interstate commerce, and thus the jurisdiction of the federal government. We should regulate those aspects of AI like one country, not 50 states.”
The Preemption Showdown
These coalitions diverge sharply.
Leading the Future and its affiliated groups argue that a single national standard is essential to maintain competitiveness. They frame state laws as costly barriers that could slow the development and deployment of advanced systems. Public First and its allied organizations counter that a weak federal law designed primarily to neutralize state protections would erode trust and leave consumers exposed. They argue that states have filled a policy vacuum.
The scale of spending reveals how quickly AI has moved to the center of American politics. The pressure for federal action is intensifying. Congress faces a decision point about the inclusion of preemption language in the must-pass National Defense Authorization Act. The administration has floated an executive order on preemption. With AI’s economic impact and labor displacement rising as voter concerns, the window for resolving this fight is narrowing rapidly.
