The hype term “agentic AI” is the latest buzzword to repackage pie-in-the-sky AI ambitions, yet it does not allude to any particular advancement that might achieve them. It amplifies the overpromising narrative that we’re rapidly headed toward a great leap in autonomy – most extraordinarily, toward the most audacious goal of all, artificial general intelligence, the speculative idea of machines that could automate virtually all human work.
Setting unrealistic expectations compromises real value. Generative AI and predictive AI deliver concrete opportunities that will continue to grow, but the claim that technology will soon hold “agency” is the epitome of vaporware. It only misleads, setting up the industry for costly, avoidable disillusionment.
Most high-tech terms – such as machine learning, predictive modeling or autonomous driving – are legit. They represent one of two things: a specific technical approach or a novel goal for technology. But the terms “agent” and “agentic” fail in both respects: 1) Most uses of “agentic” do not refer to any novel technical methodology and 2) the ambition of increasing autonomy is not new – even as the word falsely implies otherwise on both counts. Here’s a breakdown of those two failings and their ramifications.
1) “Agentic” Does Not Refer To Any Particular Technology Or Advancement
“Nothing draws a crowd quite like a crowd.” —P.T. Barnum, 19th century circus showman famed for hoaxes
“Agentic AI” poses as a credible near-term capability, but it represents only the most self-evident goal there could be for technology – increased automation – not a means to get there. Sure, we’d like a large language model to complete monumental tasks on its own – including gathering and assimilating information and carrying out online tasks and transactions – but labeling such ambitions as “agentic” does not make them more feasible.
The term “agentic AI” intrinsically misleads. Its sheer popularity spreads the belief that technology will soon become capable of running much more autonomously, but the buzzword does not refer to any particular technical approach that may get us there. Its trendiness serves to institutionalize the notion that we’re nearing great new levels of automation – “agentic AI” is so ubiquitous that it may sound “established” and “real” – implying the existence of a groundbreaking advancement where in fact there is none.
Although the vast majority of press about “agentic AI” only promotes this hype narrative with no substance to support it, autonomy itself is often a worthy goal, and researchers are conducting valuable work in pursuit of increasing it. For example, a recent collaboration between Carnegie Mellon University and Amazon curates a large testbed of modest tasks in order to assess how well LLMs can manage them autonomously. The study focuses on information retrieval tasks, such as “Retrieve an article discussing recent trends in renewable energy from The Guardian” and “Retrieve a publicly available research paper on quantum computing from MIT’s website.” It evaluates clever approaches for using LLMs to navigate websites and automatically perform such tasks, but I would not say that these approaches constitute groundbreaking technology. Rather, they are ways to leverage what is already groundbreaking: LLMs. As the study reveals, the state of the art currently fails at these modest tasks 43% of the time.
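To make concrete what such approaches amount to, here is a minimal sketch of an evaluation harness of roughly this shape. The function names (call_llm, execute_action) and the task strings are hypothetical placeholders, not the study’s actual code – the point is that the “agent” is simply an LLM called in a loop, wrapped in ordinary scaffolding.

```python
# Minimal sketch of an autonomy testbed loop (hypothetical; not the
# CMU/Amazon study's actual implementation). An LLM is asked, step by
# step, to choose the next browsing action for a retrieval task, and
# the harness tallies how often the task fails.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (an assumption here).
    This canned stub always gives up, so every task counts as a failure."""
    return "GIVE_UP"

def execute_action(action: str, page: str) -> str:
    """Placeholder: apply a browsing action (search, click, open URL)
    and return the resulting page. A real harness would drive a browser."""
    return page

def run_task(task: str, max_steps: int = 10) -> bool:
    """Let the model attempt one retrieval task; True on success."""
    page = "about:blank"
    for _ in range(max_steps):
        action = call_llm(
            f"Task: {task}\nCurrent page: {page}\n"
            "Reply with the next browsing action, DONE, or GIVE_UP."
        )
        if action == "DONE":
            return True   # a real harness would verify the claim
        if action == "GIVE_UP":
            return False
        page = execute_action(action, page)
    return False          # ran out of steps without finishing

def failure_rate(tasks: list[str]) -> float:
    """Fraction of tasks the agent fails; the study reports ~43%."""
    return sum(not run_task(t) for t in tasks) / len(tasks)

tasks = [
    "Retrieve an article on renewable energy trends from The Guardian",
    "Retrieve a public quantum computing paper from MIT's website",
]
print(f"Failure rate: {failure_rate(tasks):.0%}")  # 100% with this stub
```

Note how little here is new: the loop, the stopping conditions and the scoring are conventional software. Whatever capability exists lives entirely in the LLM call – which is exactly why such approaches leverage, rather than extend, the underlying breakthrough.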
2) “Agentic” Presents No New Goal Or Purpose
“Agentic AI” spotlights machine autonomy as if it were a new ambition, but it’s an old, self-evident goal. There’s no new, revolutionary thrust at play. While the buzzword is somewhat malleable and fuzzy, it generally refers to the desire for increased autonomy – “agentic AI” means hypothetical machines that could perform substantial tasks on their own. This has always been a core, fundamental objective. The very purpose of any machine is to automate some or all of what would otherwise be carried out by a person or animal. Put another way, we build machines to do stuff.
By reiterating our innate desire to automate, “agentic” only states the obvious. Sure, the more machines can safely do for us, the better. But there’s a fairly stubborn limit to the scope of tasks that can be fully automated with no human in the loop. For example, predictive AI instantly decides whether to allow each credit card charge, whereas the wholesale replacement of physicians with machines is a very long way off at best. “Agentic AI” is as redundant as “evil Sith Lord,” “book library” or “data science.”
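To put that contrast in concrete terms – with made-up scores and thresholds, not any real system’s logic – here is a schematic sketch:

```python
# Schematic contrast between full automation and human-in-the-loop
# (illustrative only: the scores, thresholds and labels are invented).

def decide_card_charge(fraud_score: float) -> str:
    """Predictive AI can fully automate this decision: no human
    reviews each charge; the score alone triggers the action."""
    return "decline" if fraud_score > 0.9 else "approve"

def triage_patient_case(model_flags_risk: bool) -> str:
    """Medicine stays human-in-the-loop: the model may flag a case,
    but a physician makes the actual call."""
    return "escalate to physician" if model_flags_risk else "routine physician review"

print(decide_card_charge(0.95))   # decline -- decided instantly, no human
print(triage_patient_case(True))  # escalate to physician -- human decides
```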
To be clear, autonomy is often a worthy goal and there is potential for LLMs to excel, at least where the scope of automation is somewhat modest. Economic interests exert pressure to increase autonomy – and various societal concerns exert pressure in both directions. But the scope of unleashed machine autonomy increases only slowly. One reason is that technology doesn’t improve as quickly as advertised. Another is that cultural and societal inertia tends to spell slow adoption.
The Far-Fetched Notion Of Machine “Agency”
There’s another problem with using the words “agent” and “agentic” to evoke the goal of autonomous machines: Crediting machines with “agency” is fantastical. This doubles down on AI’s core mythology and original sin, the anthropomorphization of machines. The machine is no longer a tool at the disposal of humans – rather, it’s elevated to have its own human-level understanding, goal-setting and volition. It’s our peer. Essentially, it’s alive.
The spontaneous goal-setting that comes with agency – and its resulting unbottleability – have been seeping into the AI narrative for years. “AI that works doesn’t stay in a lab,” writes Kevin Roose in The New York Times. “It makes its way into weapons used by the military and software used by children in their classrooms.” In another article, he wrote, “I worry that the technology will… eventually grow capable of carrying out its own dangerous acts.” Likewise, Elon Musk, one of the world’s most effective transmitters of AGI hype, announced safety assurances that cleverly imply a willful or dangerous AI. He says that his company’s forthcoming humanoid robot will be hardwired to obey whenever anyone says, “Stop, stop, stop.”
The story of technology taking on a life of its own is an age-old drama. We need to see this high-tech mythology for what it is: a more convincingly rationalized ghost story. It’s the novel Mary Shelley would have written had she been familiar with algorithms. The implausible, unsupported notion that we’re actively progressing toward AGI – aka artificial humans – underlies much of the hype (and often overlays it explicitly as well). “Agentic” invokes this narrative.
Despite the unprecedented capabilities – and uncanny, seemingly humanlike qualities – of generative AI, the limit on how much human work can be fully automated will continue to budge only very slowly. I believe that we will generally need to settle for partial autonomy.
Don’t buy “agentic AI” and don’t sell it either. It’s an empty buzzword that, in most uses, overpromises. The AI industry runs largely – although certainly not entirely – on hype. To the degree that it continues to overinflate expectations, the industry will ultimately face a commensurate burst bubble: the dire disillusionment and unpaid debt that result from unmet promises.