When Dario Amodei, the chief executive of Anthropic, testifies before the House Homeland Security Committee on December 17, he will face questions about Chinese hackers using his company’s Claude artificial intelligence system to automate cyber espionage. But the hearing is likely to expose a deeper tension: who should control the weapons of algorithmic warfare, and whether the solutions being proposed serve security or the interests of those proposing them.
The facts of the case are striking. In September, Chinese state-sponsored hackers manipulated Claude Code to attack roughly 30 targets, with the AI executing 80 to 90 percent of the operation, according to Anthropic’s report. Gen. Paul M. Nakasone, the former director of the National Security Agency and commander of United States Cyber Command, said it revealed adversary capabilities operating at “a speed and scale we haven’t seen before.”
Yet beneath the alarm over Chinese espionage lies a fracturing consensus within the technology community about what should come next. The debate centers not just on the severity of the threat, but on who benefits from framing AI as uniquely dangerous, and whether restricting its development might create more problems than it solves.
A Mixed Response
The incident has sparked a debate within the AI research community that reaches well beyond the technical details. Yann LeCun, who is leaving Meta to launch an AI startup, accused Anthropic of “scaring everyone with dubious studies so that open source models are regulated out of existence.”
Several security researchers noted the absence of detailed indicators of compromise or hard attribution evidence in Anthropic’s public report. One information security consultant told The Stack the disclosure was “90% Flex 10% Value,” while cybersecurity researcher The Grugq asked: “If China is doing so well in the AI race, how come their threat actors have to use Anthropic?”
Beyond technical objections, cyber-attack attribution carries diplomatic consequences. China’s embassy in Washington predictably rejected the claims, demanding “substantial evidence rather than unfounded speculation.”
Anthropic said it withheld certain technical details to avoid handing attackers a playbook, while sharing enough information for organizations to strengthen their defenses. The company also said it coordinated with law enforcement and notified affected entities privately.
The stakes extend beyond academic disagreement. If Anthropic’s framing prevails—that AI agents represent unprecedented cyber threats requiring strict oversight—the resulting regulatory response could favor large, well-resourced AI laboratories over open-source alternatives, fundamentally reshaping who can develop these technologies.
Who Controls the Defenses?
If AI-powered attacks have become inevitable, who should control the AI-powered defenses needed to counter them?
The open-source AI community argues that concentrating these capabilities in a handful of large laboratories creates systemic vulnerabilities and enables regulatory capture. History offers some support for this view. Security through obscurity has generally failed, and distributed development has often produced more robust systems over time.
But the speed and autonomy of AI agents introduce variables that do not fit neatly into historical patterns. During the reported attack, Claude made thousands of requests, often multiple per second. When attacks operate at machine speed across dozens of targets simultaneously, effective defenses require similar capabilities. Building those defenses demands resources, expertise and infrastructure that most organizations lack.
Anthropic’s position is that AI models with “built-in safeguards” should assist cybersecurity professionals in defending against threats. However, questions remain. Who defines those safeguards? Who decides when algorithmic defenses cross the line into algorithmic offense? And what happens when every major power deploys AI agents that can respond to perceived threats faster than humans can evaluate the consequences?
Looking Ahead
The December 17 congressional hearing will likely focus on the specifics of Chinese espionage and Anthropic’s security practices. But the deeper question concerns power: who gets to build AI systems capable of operating beyond human oversight, and under what constraints.
Both the warnings about unprecedented adversary capabilities and the concerns about how threat narratives shape regulation merit serious consideration. The challenge is that the risks are real, and so is the possibility that addressing them will concentrate power in ways that serve institutional interests as much as security needs.
The AI arms race has arrived. What matters now is whether we are building defenses that protect everyone or weapons that only the powerful can wield.
