AI is shaping decisions everywhere, from customer service scripts and hiring workflows to drug discovery and national defense. Yet while companies like OpenAI, Anthropic, xAI and Google keep building ever more powerful AI models, no one fully understands how or why those models behave the way they do. Even the expert engineers who build them often can’t explain why they act unpredictably or sometimes go off-script. That’s what makes AI feel so magical, and so dangerous.
“We know that AI can spew out interesting answers when you type something in,” said Kazu Gomi, CEO of NTT Research, in an interview with me. “But we don’t yet have the physics of AI. We can’t fully explain why those answers appear — or how to change them without unexpected consequences.”
Now, NTT Research, the R&D arm of IT giant NTT, wants to change that. Earlier this month, at the third edition of its annual Upgrade Summit in San Francisco, the company unveiled its new Physics of Artificial Intelligence Group, or PAI Group for short, an interdisciplinary research team spun out of its Physics & Informatics (PHI) Lab in Silicon Valley. The group aims to open up the black box of AI and build the foundational understanding needed for trustworthy, transparent and human-aligned systems.
Led by physicist and machine learning researcher Dr. Hidenori Tanaka, the group is partnering with academic collaborators at Harvard, Stanford and Princeton to explore what happens inside large-scale AI systems. Unlike most AI research teams, though, it isn’t focused solely on performance or deployment. It is asking deeper, almost philosophical questions, hoping not only to understand how AI works but also to contribute meaningfully to the industry-wide mission of building more responsible and ethical AI.
What does it mean for an AI to be biased? How do hallucinations arise, and how do we fix them? Can we design AI personalities responsibly? And what happens when machines begin to detect, mimic or even manipulate human emotions? These are some of the key questions the PAI Group wants to answer. And as Tanaka noted, “that’s what makes this moment so important.”
We Don’t Fully Understand AI
Gomi, who oversees NTT’s global research programs, likens today’s AI moment to physics before Newton. “For thousands of years, people knew that an apple falls to the ground,” he said. “But they didn’t know why. They couldn’t calculate the speed or the force. Then came Newton, and suddenly we had equations, theory, structure. That’s what we want for AI.”
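The analogy is worth making concrete. Newton compressed millennia of falling-apple observations into one compact, predictive law (a standard statement of universal gravitation, included here for illustration):

```latex
% Newton's law of universal gravitation: the kind of compact,
% predictive equation a "physics of AI" would aspire to.
% F: force, G: gravitational constant, m1/m2: masses, r: distance.
\[
  F = G \,\frac{m_1 m_2}{r^2}
\]
```

Nothing comparably compact yet explains why a neural network produces the answer it does. That is the gap Gomi wants the new group to close.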
In this framing, AI is no longer just code. It’s an emergent system, like weather, evolution or the brain. “We’ve entered the era of complex systems,” Tanaka explained. “We can open up every neuron inside a neural network. But understanding behavior? That’s still elusive.”
That gap between performance and comprehension has real consequences. Fine-tuning, a common technique for reducing bias or improving safety, may not be enough. In fact, Tanaka’s earlier research was cited by U.S. policymakers for showing that fine-tuned models can still revert to harmful behaviors, especially when prompted creatively.
“We can’t just patch over bias,” Gomi told me emphatically. “We need to understand how bias gets built. Which neurons encode it. Which data introduces it. That’s the kind of foundation this group is working to create.”
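One standard tool for exactly this kind of question is the linear probe: a small classifier trained on a model’s internal activations to test whether, and where, a given signal is encoded. The sketch below is illustrative only, not the PAI Group’s method; it uses synthetic activations and a hypothetical sensitive attribute in place of a real network’s hidden states.

```python
# Illustrative linear "bias probe" on synthetic data (not NTT's method).
# Idea: if a simple classifier can predict a sensitive attribute from a
# model's hidden activations, those activations encode that attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_neurons = 2000, 512

# Stand-in for hidden activations from a real model.
activations = rng.normal(size=(n_samples, n_neurons))

# Pretend a small, known set of neurons weakly encodes the attribute.
attribute = rng.integers(0, 2, size=n_samples)  # hypothetical binary label
planted = [17, 42, 301]                         # hypothetical "bias" neurons
activations[:, planted] += 1.5 * attribute[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(
    activations, attribute, test_size=0.25, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")  # high on toy data

# The largest-magnitude probe weights point at the neurons carrying the
# signal, answering the "which neurons encode it" question on this toy.
top = np.argsort(np.abs(probe.coef_[0]))[-3:]
print("most predictive neurons:", sorted(top.tolist()))
```

On a real system the activations would come from a network’s hidden layers rather than random noise, but the logic, locating where a signal lives before trying to remove it, is the same.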
Hallucinations, too, are under the group’s microscope. The team says it has found evidence that AI hallucinations may resemble human creativity, describing them as the model’s attempt to fill in missing information in novel ways. “But if we want to control that behavior,” Gomi added, “we need a framework that explains it first.”
New Citizens Of The AI Republic
Perhaps the most striking part of the group’s work is that it doesn’t stop at model architecture but extends to AI personality itself. Tanaka said the team wants to understand how AI systems influence the people who interact with them, and he warned that today’s AI personalities are optimized for comfort, coherence and user satisfaction. But what happens when a system reinforces our beliefs too well, or tunes itself for addictiveness?
“Do we want one AI personality for everyone?” Tanaka asked. “Or different ones for different people? These are societal design questions, not just technical ones.”
Tanaka wasn’t just theorizing. The group is already working with psychologists, neuroscientists and philosophers to help translate human concepts such as empathy, disagreement and kindness into mathematical language that can guide AI behavior.
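What might that translation look like? One hedged, textbook-style possibility (an illustration, not the group’s published formalism) is to fold a learned score for a concept like kindness into a model’s training objective:

```latex
% Illustrative only: one way a humane concept could enter a training
% objective. k(x, y) is a hypothetical learned scorer of how "kind"
% a response y to prompt x is; lambda trades capability against kindness.
\[
  \mathcal{L}(\theta) =
  \underbrace{\mathcal{L}_{\mathrm{task}}(\theta)}_{\text{be capable}}
  + \lambda\,
  \underbrace{\mathbb{E}_{(x,\,y)\sim\pi_\theta}\bigl[\,1 - k(x, y)\,\bigr]}_{\text{be kind}}
\]
```

The hard part, and a reason to consult psychologists and philosophers at all, is defining and validating the scorer k itself; a mis-specified proxy for kindness can be gamed.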
“We’re not just training systems,” he said. “We’re raising new citizens.” And with AI increasingly embedded in decision-making, how we shape these “citizens” could eventually shape us in return.
Growing AI The Right Way
While the group is still in its early days, its ambition is urgent. Gomi predicts this AI cycle won’t crash the way the metaverse and blockchain bubbles did. “This touches too many sectors, too many decisions,” he said. “I don’t think it will tank anytime soon.”
That makes foundational work even more critical. “Our hope,” Tanaka said, “is that we can grow AI the way we grow society — by building shared understanding, defining values and learning how to intervene before it’s too late.”