Security operations centers today are under siege—not just from external adversaries but from within their own ecosystems. Security teams are overburdened, flooded with alerts, juggling a patchwork of tools, and struggling to keep up with the pace of modern threats. The traditional tiered model of detection and response—where human analysts triage alerts across levels—has hit its limit.
The result? Burnout, inefficiency, and a widening gap between detection and action. None of this should come as a shock to anyone who works in cybersecurity.
The promise of agentic AI has emerged as a potential turning point, though. This new class of artificial intelligence goes beyond traditional automation and machine learning. It’s designed to operate autonomously, learn from historical context, and make decisions that reduce the burden on human analysts—without turning the SOC into a black box.
The question is: Is agentic AI just the latest buzzword? Or could it truly reshape how security operations are run?
From Noise to Clarity
Today’s SOCs face a volume problem. A 2024 MSSP Market News study found that SOC teams receive an average of nearly 4,000 alerts per day, and almost two-thirds of them are ignored. Many alerts are false positives or duplicates, but the triage still eats up valuable analyst time. Traditional SOAR tools promised relief through automation, but most have failed to deliver beyond workflow ticketing and basic orchestration.
I recently spoke with Brian Murphy, CEO of ReliaQuest, about these challenges. He explained, “All SOAR really is is a ticket workflow distributor. It is a thing that moves a process from one team to another. That’s what the majority of customers have used it for, because it was too hard to use it to actually automate.” He cited a recent session with hundreds of customers: almost all had SOAR solutions, but only three had more than two true automations running.
That disconnect is what agentic AI aims to resolve.
What Makes Agentic AI Different?
A post from Deloitte Center for Technology, Media & Telecommunications explains, “As its name suggests, agentic AI has ‘agency’: the ability to act, and to choose which actions to take. Agency implies autonomy, which is the power to act and make decisions independently. When we extend these concepts to agentic AI, we can say it can act on its own to plan, execute, and achieve a goal—it becomes ‘agentic.’ The goals are set by humans, but the agents determine how to fulfill those goals.”
Unlike static playbooks, agentic AI systems are dynamic. They not only ingest data but actively learn from historical incidents, analyst feedback, and environmental context. They can retrieve telemetry from disparate systems—endpoint, network, identity, threat intel—and synthesize it to make real-time, transparent decisions.
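To make that concrete, here is a minimal sketch of an agentic triage loop in Python. Everything here is hypothetical and illustrative, not ReliaQuest's implementation: the telemetry lookups, the policy, and the audit trail are stand-ins for what a production agent would do against real endpoint, network, and identity systems. The key ideas it demonstrates are the ones from the paragraph above: pull context from disparate sources, decide, and record the reasoning transparently.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    id: str
    severity: str        # "low" | "high"
    context: dict = field(default_factory=dict)

# Hypothetical telemetry sources; a real agent would query EDR,
# network, and identity platforms instead of in-memory lambdas.
TELEMETRY = {
    "endpoint": lambda a: {"process_tree": "benign_updater"},
    "identity": lambda a: {"recent_travel": True},
}

def triage(alert: Alert) -> dict:
    """Gather context from each telemetry source, decide, and keep
    an audit trail so an analyst can review every step."""
    trace = []
    for source, lookup in TELEMETRY.items():
        facts = lookup(alert)
        alert.context.update(facts)
        trace.append(f"queried {source}: {facts}")
    # Toy policy: auto-resolve low-severity alerts explained by
    # known-benign context (e.g. travel); escalate everything else.
    if alert.severity == "low" and alert.context.get("recent_travel"):
        decision = "auto-resolve"
    else:
        decision = "escalate-to-analyst"
    trace.append(f"decision: {decision}")
    return {"alert": alert.id, "decision": decision, "trace": trace}

result = triage(Alert(id="A-1042", severity="low"))
print(result["decision"])  # auto-resolve
```

Note that the returned `trace` is the point: every query and the final decision are logged, which is what keeps an autonomous loop from becoming the "black box" the article warns about.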
Murphy stressed that in ReliaQuest’s GreyMatter platform, for example, agentic AI doesn’t operate behind closed doors. It makes decisions that analysts can review, audit, and adjust. “The other thing about our AI is it’s transparent to the customer,” said Murphy. “They see every decision that that agentic model made along the way and why. Each customer is essentially training their own model in a protected way.”
This is a critical distinction in an industry rightfully wary of handing over too much control. While agentic AI can act independently on routine actions—such as resolving alerts based on travel patterns or resetting accounts for terminated employees—it should still defer to human oversight for higher-stakes decisions.
Burnout, Tiered Models, and the End of “Tier One”
Perhaps the most compelling argument for agentic AI isn’t the tech—it’s the human impact. Cybersecurity burnout is real and escalating. A core contributor is the rote, disconnected work of triaging Tier One alerts, many of which are repetitive and low-value.
Murphy doesn’t mince words here: “We should stop using human beings to do Tier One alerts. We should stop using human beings to do Tier Two alerts.” He believes AI should handle the grunt work—pulling logs, cross-referencing IPs, contextualizing user behavior—so that humans can focus on meaningful decisions.
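The "grunt work" Murphy describes is exactly the kind of task that is easy to automate. Below is a hedged sketch, assuming entirely made-up data stores: a real pipeline would call SIEM and threat-intel APIs rather than in-memory tables. It shows the shape of a Tier One enrichment step: validate an IP, cross-reference it against intel, and summarize the user's recent activity so a human only sees the synthesized result.

```python
import ipaddress

# Hypothetical threat intel and log data for illustration only.
KNOWN_BAD_IPS = {"203.0.113.50"}
LOGIN_LOG = [
    {"user": "mallory", "ip": "203.0.113.50", "hour": 3},
    {"user": "alice", "ip": "198.51.100.7", "hour": 14},
]

def enrich(user: str, ip: str) -> dict:
    """Automate the Tier One grunt work: validate the IP, check it
    against threat intel, and contextualize the user's behavior."""
    ipaddress.ip_address(ip)  # raises ValueError on malformed input
    logins = [e for e in LOGIN_LOG if e["user"] == user]
    off_hours = any(e["hour"] < 6 for e in logins)
    return {
        "user": user,
        "ip_on_blocklist": ip in KNOWN_BAD_IPS,
        "off_hours_activity": off_hours,
        "login_count": len(logins),
    }

summary = enrich("mallory", "203.0.113.50")
print(summary["ip_on_blocklist"], summary["off_hours_activity"])
```

A function like this replaces the repetitive lookups an analyst would otherwise do by hand, leaving the judgment call about what the enriched picture means to the human.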
The implications are profound. Retiring the tiered model wouldn’t just reduce fatigue; it could free analysts to grow into more strategic, business-aware security professionals. That, in turn, strengthens the security program holistically, giving teams the breathing room to hunt for threats, analyze risk trends, and build cross-functional leadership capabilities.
Not a Replacement—A Reboot
Despite the acceleration of autonomous capabilities, Murphy is clear that agentic AI isn’t about cutting jobs. The demand for cybersecurity work still far outstrips the capacity of most security teams. Rather, the vision is to up-level skill sets, fill operational gaps, and create capacity where none exists today.
“We’re a long, long way before we see this as like a reduction in the amount of jobs,” he said. “It’s actually going to give us time to build leaders in security and give our cybersecurity teams time to learn the business and develop themselves.”
In other words, the goal isn’t fewer people—it’s smarter work.
The Bigger Picture
ReliaQuest’s recent $500 million funding round, valuing the company at $3.4 billion, shows that investors are betting big on this new model. The company now serves over 1,200 enterprise customers, with annual recurring revenue exceeding $300 million and a growth trajectory aimed at going public. Unlike many peers, it’s doing so profitably, reinvesting in product innovation and global expansion—not just sales.
But while ReliaQuest may be leading the charge, the trend is industry-wide. CISOs are increasingly prioritizing AI-powered platforms that reduce dwell time and boost analyst effectiveness without further fragmenting the toolset.
The risk isn’t that agentic AI will take over; it’s that organizations that ignore it may fall behind.
Bottom Line
Agentic AI may not be the silver bullet for every SOC, but it’s a step toward something security professionals have been demanding for years: visibility, speed, and sanity. If it delivers even a fraction of its promise—fewer false positives, faster containment, and analyst relief—it could very well represent the beginning of a smarter, more sustainable era in cybersecurity operations.
Because in the end, it’s not about replacing people. It’s about empowering them—with time, tools, and clarity.