AI now runs in courtrooms, hospitals, airports, banks and a host of other settings, becoming the crown jewel of many modern enterprises. Protecting these AI systems in a quantum future, however, is becoming increasingly difficult.
Somewhere between the optimism of generative AI and the acceleration of quantum computing is a growing risk that few organizations are addressing today. While many worry about adversarial prompts and model hallucinations, experts say those are the least of our problems.
David Harding, CEO of Entrokey Labs — a cybersecurity firm building quantum-resistant key infrastructure — warned that the real risk lies in how AI systems handle sensitive data. He argued that AI systems, and the massive volumes of sensitive data they ingest, may soon be the first victims of quantum-enabled cyberattacks. And most companies are walking into that future blind.
The Quantum Threat Isn’t Theory Anymore
Earlier this year, Nvidia CEO Jensen Huang described quantum computing as reaching “an inflection point.” While that statement sparked interest among investors, its implications for cybersecurity — particularly for AI-driven systems — haven’t fully sunk in. As researchers push closer to building scalable quantum machines, long-standing encryption protocols such as RSA and ECC could be broken, making previously secure data fair game.
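To make the exposure concrete: the algorithms in question are the defaults most software reaches for today. Here is a minimal sketch, assuming Python's widely used cryptography package, of the kind of key generation that Shor's algorithm would render breakable on a sufficiently large quantum computer:

```python
from cryptography.hazmat.primitives.asymmetric import ec, rsa

# RSA security rests on the hardness of factoring large integers.
# Shor's algorithm factors them efficiently on a large, fault-tolerant
# quantum computer, so a key like this would no longer protect anything.
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# ECC rests on the elliptic-curve discrete-logarithm problem, which a
# variant of Shor's algorithm also solves, so curve keys fall too.
ec_key = ec.generate_private_key(ec.SECP256R1())
```

Nothing about that code is wrong today; the point is that its security assumptions expire the moment a capable quantum machine exists.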
In other words, the data feeding your AI today may be tomorrow’s biggest liability. This isn’t some distant sci-fi scenario. The groundwork has already begun. Nation-state actors are believed to be stockpiling encrypted data using what’s known as a “harvest now, decrypt later” strategy. Think of it like thieves stealing locked safes today knowing they’ll get the keys tomorrow.
Once quantum machines become powerful enough, they could retroactively decrypt troves of corporate secrets, defense communications and medical data, including everything passed through AI models today.
“Any electronic data is at risk from harvest now, decrypt later if it is not using digital keys resistant to today’s AI attacks and near-term quantum attacks,” said Harding. “Several countries including Russia, China, Iran and North Korea have well over 100,000 individuals solely focused on hacking our systems. Add automation into the mix, and the scale becomes nearly unmanageable.”
Quantum computing threatens all digital systems, but AI amplifies the risk. These models don't just generate content; they ingest patient records, financial models, intellectual property and legal data. In autonomous systems, they make decisions. In others, they write code and trigger workflows. That puts entire AI pipelines, from training data to deployed agents, directly in the crosshairs.
“Quantum and AI-safe encryption has the same level of importance as the foundation of a building,” explained Scott Streit, Entrokey Labs’ chief scientist. “Without it, the structure collapses. There’d be no protection for customer data, IP or communications. In national security, satellites or precision weapons could be taken over.”
Falling Behind The Curve
Despite these risks, many enterprises still treat quantum computing as a future problem — something to solve by 2030. The U.S. National Institute of Standards and Technology (NIST) has laid out a path for adopting quantum-safe cryptography by 2035. But according to Harding, that timeline no longer reflects how fast both AI and quantum capabilities are evolving.
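The replacement algorithms already exist. NIST finalized ML-KEM (FIPS 203, derived from CRYSTALS-Kyber) in 2024, and open-source implementations are available now. Here is a minimal sketch of a post-quantum key exchange, assuming the liboqs-python bindings from the Open Quantum Safe project (the algorithm identifier varies by library version):

```python
import oqs  # https://github.com/open-quantum-safe/liboqs-python

KEM_ALG = "ML-KEM-768"  # older liboqs releases expose this as "Kyber768"

# The "client" holds the keypair; the "server" encapsulates a secret to it.
with oqs.KeyEncapsulation(KEM_ALG) as client, oqs.KeyEncapsulation(KEM_ALG) as server:
    public_key = client.generate_keypair()

    # The server derives a shared secret plus a ciphertext to send back.
    ciphertext, server_secret = server.encap_secret(public_key)

    # The client recovers the same secret from the ciphertext; Shor's
    # factoring and discrete-log shortcuts do not apply to this scheme.
    client_secret = client.decap_secret(ciphertext)
    assert client_secret == server_secret
```

In practice, most early deployments run schemes like this in hybrid mode alongside classical ECC, so that a flaw in either scheme alone does not expose the session.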
“The timeline is increasingly out of step with the pace of AI and quantum advancements,” said Harding. “Some believe AI is already breaking into encryption systems.”
And yet, most organizations continue to treat quantum-readiness as a long-haul IT project, involving years of consultations, infrastructure upgrades and vendor reviews. Harding refers to this pattern as “cyber inertia” — an outdated playbook for a much faster threat.
“We’re trying to solve a smarter threat with outdated answers,” Harding said. Streit added that “AI can already create math that top mathematicians can’t explain,” arguing that “the only way to win is by using AI to secure AI.”
To make matters worse, regulatory frameworks haven't caught up. Neither the EU AI Act nor NIST's AI Risk Management Framework says much about defending AI systems against quantum cryptographic threats, leaving a critical vulnerability unaddressed at the policy level.
What’s At Stake
The financial fallout from a breach caused by quantum decryption is hard to estimate. But the principle is simple: What’s considered secure today may not be tomorrow. That includes confidential model outputs, internal prompts, logged agentic decisions and sensitive metadata. Any of it could be exposed or tampered with.
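Tamper protection for that material is a solved problem at the primitive level. Here is a minimal sketch, again assuming Python's cryptography package, of authenticated encryption for a hypothetical agent log record, where the key would come from a quantum-safe exchange like the one above:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Symmetric AEAD such as AES-256-GCM is believed to survive quantum attack
# at reduced margin (Grover's algorithm), unlike RSA/ECC key exchange.
key = AESGCM.generate_key(bit_length=256)  # in practice: derived from a PQ KEM secret
aead = AESGCM(key)

record = b'{"agent": "underwriting", "decision": "approve", "score": 0.91}'
nonce = os.urandom(12)  # never reuse a nonce under the same key

# The ciphertext embeds an authentication tag: decryption fails loudly
# if even one bit of the stored record has been altered.
sealed = aead.encrypt(nonce, record, associated_data=b"audit-log-v1")
assert aead.decrypt(nonce, sealed, b"audit-log-v1") == record
```

The weak link is rarely the cipher itself but how its key was agreed and stored, which is exactly where the quantum threat bites.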
“Think about how we respond to weather warnings,” Harding said. “If there’s even a 10% chance of a tornado, you don’t wait. You get to shelter.”
He added that this level of risk isn’t something CISOs can handle alone. “Quantum is a boardroom issue now — not just an engineering one. The scale of impact makes Y2K look like a warm-up act.”
If Trust Fails, AI Fails
While companies double down on AI performance, many remain dangerously naive about the risks embedded at its roots. As Harding put it, “The question is no longer whether quantum will impact AI systems, but how quickly organizations can adapt before it does.”
AI security depends not just on encryption, but on anticipating how fragile the entire ecosystem becomes when that encryption fails. If attackers can retroactively decrypt the data these systems handle, or reroute and manipulate the systems themselves, the blow to public confidence could rival or exceed any previous cyber event.
Trust is what gives AI its power. Lose that, and even the smartest models would collapse.
“We’ve built an entire era of decision-making on architectures that might be more fragile than we thought,” Harding said. “While companies chase optimization, adversaries are chasing the keys.”