Getting in on the ground floor of a big tech transformation means covering innovations that aren't yet household names, but will be.
Case in point – we’re now in the age of agentic AI. People are hearing more about AI agents. But not many of them know about something called NANDA.
In fact, it's hard to Google this and get accurate results; you mostly get a bunch of links to a nursing organization. The NANDA that's going to power the future of the global Internet actually came out of MIT, and for now it's mostly a known quantity (if you'll pardon the nod to quantum) among data scientists and people in similar roles.
But it’s probably going to be a major influence on our technology in just a few years.
What is NANDA?
NANDA is essentially a full platform for agent interactions – a protocol for a new AI Internet, evolved to handle the capabilities of LLMs.
One of the most prominent writers on NANDA, Rahul Shah, describes it as a “full stack protocol” where agents have cryptographic identities – we’ll get back to that in a minute.
“NANDA does not replace A2A or MCP,” Shah writes, citing Agent to Agent protocols and the Model Context Protocol that has arisen to handle what you might call the ‘AI API race.’ “Instead, it provides the naming, verification, and economic backbone that allows agents to function in real-world, distributed environments — securely, scalably, and autonomously. The goal is to enable a self-sustaining ecosystem, where useful agents are rewarded and trusted — while spammy or malicious agents can be excluded based on cryptographic audit trails.”
In terms of platform features, there's an agent registry, and the system uses dynamic resolution logic to route agent transactions. There's also auditing built on distributed ledger technology, where NANDA uses zero-knowledge proofs to verify what agents do.
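To make those features concrete, here is a minimal sketch of what a registry with cryptographic identities, name resolution, and a verified audit trail could look like. Everything in it is hypothetical – the class, names, and endpoint are invented for illustration, and the real NANDA protocol uses public-key cryptography and zero-knowledge proofs, whereas this toy version stands in an HMAC so it stays dependency-free.

```python
import hashlib
import hmac
import secrets

class AgentRegistry:
    """Toy registry: agents get a key and fingerprint at registration,
    names resolve to endpoints, and only correctly signed actions are
    admitted to an append-only audit log."""

    def __init__(self):
        self._agents = {}   # name -> record
        self._audit = []    # append-only log of verified actions

    def register(self, name, endpoint):
        key = secrets.token_bytes(32)  # the agent's secret key
        fingerprint = hashlib.sha256(key).hexdigest()[:16]
        self._agents[name] = {"endpoint": endpoint, "key": key,
                              "fingerprint": fingerprint}
        return key

    def resolve(self, name):
        # Dynamic resolution: map an agent name to where it can be reached.
        record = self._agents.get(name)
        return record["endpoint"] if record else None

    def record_action(self, name, action, signature):
        # Verify the action was signed by the named agent before logging it.
        record = self._agents[name]
        expected = hmac.new(record["key"], action.encode(), "sha256").hexdigest()
        if not hmac.compare_digest(expected, signature):
            return False
        self._audit.append((name, record["fingerprint"], action))
        return True

registry = AgentRegistry()
key = registry.register("travel-agent", "https://agents.example/travel")
sig = hmac.new(key, b"booked-flight", "sha256").hexdigest()

print(registry.resolve("travel-agent"))                                # the registered endpoint
print(registry.record_action("travel-agent", "booked-flight", sig))    # True: signed action is logged
print(registry.record_action("travel-agent", "booked-flight", "bad"))  # False: forged action is rejected
```

The point of the sketch is the division of labor Shah describes: naming (the registry), verification (the signature check), and an audit trail that lets spammy or malicious agents be excluded later.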
But all of this is a rather high-flown way to describe what NANDA is.
Here's a more intuitive way to think about it, one that has to do with how AI agents resemble people.
AI Agents Line Up to be Counted
In so many ways, the idea of the AI agent is like a digital twin of a person – in other words, we view these agents as having those cognitive abilities that individual people have. We can even give them names and avatars, and make them seem very human indeed. They can pass all kinds of Turing tests. They are discrete entities. They’re like people.
If you take that metaphor further, NANDA is a protocol that’s sort of like an organizational system for people. At a company, you have an org chart. If you’re choosing teams for softball, you have a roster or a list of names. A teacher in a classroom has some kind of document to identify each student.
This is the kind of thing that NANDA develops and orchestrates. It's a system for these AI agents to be known and understood – in effect, it answers: "Who are they? And what do they do?"
All of this takes place in the context of multi-agent systems where AI agents are working together to create solutions.
More on NANDA
I sat through a panel on AI at IIA, where some of the foremost people in this field talked about NANDA and everything around it.
My colleague Ramesh Raskar characterized this as using the “building blocks” for new agentic systems.
Investor Dave Blundin mentioned a “litany of useful functions” and a need for a system of micropayments for services.
“When this happened on the internet, nobody could figure out the revenue model, and then it all moved to ad revenue, because it’s just: ‘throw some banner ads on it, and throw it out there,’” he said. “That’s not going to work with AI agents. You don’t want these things marketing (to people).”
Aditya Challapally mentioned three big risks inherent in building these systems: trust, culture and orchestration.
“When we say culture, we mean things like: ‘what are the societal standards for how an agent can interact with you?’ (for example) can an agent DM you on LinkedIn, on behalf of another person, or do they have to say they’re an agent, or something like this, … establishing that sense of culture. And then the third piece of this is orchestration, which is … how do agents talk to each other from a more (organized) protocol setup?”
Panelist Anil Sharma spoke to a kind of wish list for the new protocol.
“I would like to see application sustainability,” he said. “I would like to see this in social impact, in areas such as agriculture and other places … because this is where the data and value is locked across ecosystems, beyond enterprises into non-profit and government (systems).”
And panelist Anna Kazlauskas talked about the necessity of data ownership.
“You can imagine, a couple of years out, you’ve got an AI agent, I picture 10 AI agents, that can go and autonomously do work, and maybe even earn on (a user’s) behalf and collaborate with others,” she said. “And I think one of the risks is that there’s a single platform (for) all of those agents, right? And so I think especially as your AI agents start to produce real economic value, it’s really important that you actually have kind of sovereignty and true ownership over that.”
Blundin, in talking about the "unbundling" of services, mentioned a related concern: that AI, enabled by a protocol like NANDA, could build services more efficiently than companies can, keeping those companies on their toes.
That’s a bit more of a window into how NANDA will work, and what it is supposed to do.
Coming Soon
So, although you haven't heard much about NANDA yet, you will. I thought it was helpful to provide that metaphor to show the various ways in which new protocols will treat AI agents like people – giving them names, identities, jobs, and roles as they collaborate, hopefully on our behalf, and to our benefit.