The Internet used to be a place where humans were the dominant force. That’s no longer true. As artificial intelligence systems evolve from passive tools to active decision-makers, a new class of threat is emerging—one that traditional security models weren’t built to address.
The conversation has shifted quickly from generative AI to near-ubiquitous discussion of agentic AI. These aren’t just systems that analyze or generate content. They are autonomous actors capable of setting goals, making decisions and executing tasks without human intervention. And while they promise new levels of efficiency and automation, they also introduce new risks—ones that challenge the very foundation of trust online.
The Shift to Agentic AI
I connected with Stu Solomon, CEO of HUMAN Security, to talk about the challenges posed by agentic AI. He put it bluntly: “The Internet is no longer dominated by humans.” Bots, scrapers and AI agents now outpace human activity online—and the trend is accelerating. According to Solomon, that shift changes everything about how we define trust and protect digital ecosystems.
Agentic AI systems are already interacting with websites, mobile apps and APIs. They’re making purchases, scraping data and even attempting to mimic legitimate user behavior. The problem is that most defenses today are built to detect bots at login or checkout. They weren’t designed to handle intelligent agents that can evolve, adapt and act independently across the full customer journey.
The Problem with Static Trust
Traditional fraud prevention and bot mitigation tools are reactive by nature. They focus on specific threat vectors—usually bots—and make decisions at isolated points like login or transaction submission. But as Solomon points out, “Security teams need to understand traffic behavior, intent and context, regardless of whether the actor is human, bot, or AI agent.”
That’s why HUMAN Security is pushing for a new model: adaptive trust. Instead of relying on static checks, adaptive trust continuously evaluates context and behavior to determine whether traffic should be allowed, blocked, or governed more precisely.
This approach is core to HUMAN Sightline, now enhanced by a new technology layer the company calls AgenticTrust. It’s designed to provide actor-level visibility across humans, bots and AI agents—and make real-time decisions based on observed intent.
Understanding Intent in Real Time
AgenticTrust operates differently than legacy systems. It doesn’t just flag anomalies. It assesses click cadence, navigation patterns and session consistency across billions of interactions to evaluate what an actor is trying to do, not just who or what they claim to be.
For instance, if an AI agent is scraping a website or making a purchase, the system determines whether that action aligns with approved behavior. Rather than penalize all AI traffic or ban entire user-agent categories, AgenticTrust provides a way to distinguish the trustworthy from the suspect. It’s a “trust but verify” model—built for the complexity of AI-driven interaction.
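To make the “trust but verify” idea concrete, here is a minimal, purely illustrative sketch of how a decision layer might combine behavioral signals with declared identity. The signal names, thresholds and three-way outcome (allow, govern, block) are my own assumptions for illustration, not HUMAN’s actual logic.

```go
package main

import "fmt"

// Action is the decision an adaptive-trust layer might return for a session.
type Action string

const (
	Allow  Action = "allow"
	Govern Action = "govern" // permit, but rate-limit or require attestation
	Block  Action = "block"
)

// SessionSignals is a toy bundle of the kinds of features the article
// mentions: click cadence, navigation consistency and declared identity.
// All fields and thresholds here are hypothetical.
type SessionSignals struct {
	MeanClickIntervalMs float64 // implausibly low values suggest automation
	NavigationEntropy   float64 // 0..1; erratic, non-human paths score high
	DeclaredAgent       bool    // actor self-identifies as an AI agent
	VerifiedIdentity    bool    // identity cryptographically verified
}

// Evaluate sketches a context-based decision: verified agents are allowed,
// unverified self-declared agents are governed, and undeclared automation
// that mimics a human is blocked.
func Evaluate(s SessionSignals) Action {
	looksAutomated := s.MeanClickIntervalMs < 50 || s.NavigationEntropy > 0.9
	switch {
	case s.DeclaredAgent && s.VerifiedIdentity:
		return Allow // trusted, attested agent traffic
	case s.DeclaredAgent && !s.VerifiedIdentity:
		return Govern // claims to be an agent but cannot prove it
	case looksAutomated:
		return Block // undeclared automation mimicking a human
	default:
		return Allow
	}
}

func main() {
	fmt.Println(Evaluate(SessionSignals{MeanClickIntervalMs: 30, NavigationEntropy: 0.95})) // block
	fmt.Println(Evaluate(SessionSignals{DeclaredAgent: true, VerifiedIdentity: true}))      // allow
}
```

The point of the three-way outcome is that the decision isn’t binary: a self-declared but unverified agent can be governed (rate-limited, challenged) rather than simply banned.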
Open Standards and Cryptographic Identity
One of the more notable elements of HUMAN’s strategy is its commitment to open standards. The company recently open-sourced its HUMAN Verified AI Agent protocol, a method for AI agents to identify and authenticate themselves using public-key cryptography and HTTP Message Signatures.
It’s a step toward a more accountable Internet. Instead of spoofable headers and easily faked identifiers, AI agents can prove who they are cryptographically—an important capability as agent impersonation and scraping become more common.
“This project is more than a technical showcase,” says Solomon. “It’s a contribution to the trust layer for the agentic Internet: a future where AI agents must identify, authenticate and authorize themselves in order to operate freely and safely.”
Trust Becomes Infrastructure
The big picture here is that trust itself must become dynamic infrastructure—something that evolves with the behavior of digital actors, rather than something that’s granted once and assumed forever.
Solomon summed it up: “This moment is about more than protection. It is about unlocking new value. Businesses that can distinguish between trusted and deceptive actors in real time will be best positioned to scale, innovate and lead in the AI era.”
The Internet isn’t human-only, but it can still be human-first—if we build the right trust architecture to support it. Agentic AI might change how the Internet works. Adaptive trust could determine whether it still works for people.