AI usage statistics from OpenAI ChatGPT, Anthropic Claude, and Ipsos show how people use AI daily, from prompts to work adoption, amid low trust.
Three heavyweight studies have just landed, each pulling back the curtain on what AI usage really means in practice. OpenAI released AI usage data from more than a million sampled ChatGPT conversations spanning mid-2024 to mid-2025. Anthropic published a rare analysis of Claude AI usage statistics in its Anthropic Economic Index, including a first-of-its-kind dive into enterprise API traffic. And Ipsos, the global market research firm, surveyed more than 23,000 adults across 30 countries for its AI Monitor 2025.
Together, these reports give us something we rarely get in the AI hype cycle: actual evidence. Who’s using these systems, what they’re doing with them, how companies are (and aren’t) deploying them, and what the public says it thinks about all of this.
The Ipsos study in particular is useful because it confronts the gap between perception and practice. Every economics student learns about the tidy fiction of homo economicus: the rational man who declares one thing, then wanders outside the economics textbook and does another. In academic vernacular, this is the gap between stated and revealed preferences.
It’s a gap that feels oddly familiar when looking at how the world is adopting AI. People report one thing in surveys, but the usage logs from OpenAI and Anthropic suggest they do another. To see this more clearly, it’s worth looking at the main findings from the three reports.
The mundane reality of “killer apps” and AI effectiveness
OpenAI’s dataset of over a million ChatGPT conversations tells us something sobering: people are not using AI to plan moon colonies or unlock superintelligence. They’re asking for writing help, practical guidance, and quick information lookups. Those three categories alone make up nearly 80 percent of ChatGPT traffic. Computer programming accounts for just 4 percent. Therapy-like reflection barely reaches 2 percent.
Even in professional settings, “writing” is king, but not the kind one would imagine from flashy marketing reels. Two-thirds of those queries are people asking the system to polish something they already wrote.
Anthropic’s Claude paints a similar picture, though its users skew differently. Coding dominates (36 percent), but education and science are rising quickly, to 12.4 and 7.2 percent, respectively. And Claude users are delegating whole tasks more often, handing over directives like “you do it” rather than engaging in step-by-step prompting.
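To make that distinction concrete, here is a minimal sketch of the two styles using Anthropic’s Python SDK. The model alias, prompts, and task are placeholders of my own, not examples drawn from the Anthropic report.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Full delegation: a single directive hands over the entire task.
delegated = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model alias
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Write the complete quarterly report from these notes: <notes>",
    }],
)

# Step-by-step prompting: the human reviews and steers between turns.
outline = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": "Draft just an outline for a quarterly report from these notes: <notes>",
    }],
)
# ...the user inspects outline.content[0].text, then sends follow-up
# turns asking the model to expand one section at a time.
```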
Across both platforms, the long tail of exotic use cases exists, but adoption is clustering in the obvious sweet spots: the tasks where models perform well and barriers are low. The sci-fi remains mostly in the marketing slides.
Work vs play: the split realities of AI usage
Here’s where things get messy. OpenAI reports that ChatGPT’s work usage has dropped from 40 percent to 28 percent in the past year, while personal tinkering has jumped to nearly three-quarters. Ipsos’ survey confirms this broad perception: in many countries, AI feels more like a personal assistant than an enterprise backbone.
But Anthropic tells a different story. Its enterprise API data suggests U.S. workplace use is rising sharply—40 percent of employees now use AI at work, up from just 20 percent in 2023. The API logs show concentrated, automation-heavy deployments: debugging web apps, building business software, even designing AI systems themselves.
So which is it? The truth may be in the division of labor. Chat interfaces are for casual users and side projects. APIs are where the serious business happens.
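The split shows in how each channel is invoked: a chat session is a person typing into a window, while API traffic is usually a program calling the model as one stage in a pipeline. Here is a rough sketch of the latter, with an invented ticket-triage task standing in for the kind of business-software deployment Anthropic describes:

```python
import anthropic

client = anthropic.Anthropic()

tickets = [
    "App crashes when exporting to PDF",
    "Request: add dark mode to the settings page",
]

# No human in the loop: the model runs as one stage of an automated pipeline.
for ticket in tickets:
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model alias
        max_tokens=10,
        messages=[{
            "role": "user",
            "content": f"Label this ticket as BUG or FEATURE. Answer with one word only.\n{ticket}",
        }],
    )
    print(ticket, "->", reply.content[0].text)
```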
As the Claude report itself warns, “whether today’s narrow, automation-heavy adoption evolves toward broader deployment will likely determine AI’s future economic impacts.” In other words, the adoption curve may be less about decline versus growth and more about what kind of usage becomes dominant.
The AI trust paradox
Ipsos’ global survey shows the ambivalence in stark numbers. Fifty-four percent of respondents said they trust governments to regulate AI responsibly. Only 48 percent said they trust companies to keep their data safe. The split is narrow, but telling.
Sam Altman himself seemed to embody the paradox at the Paris AI Summit. “Safety is integral to what we do… We’ve got to make these systems really safe for people, or people just won’t use them. It’s the same thing and we’ll work super hard on that,” he told the audience. Then, almost in the same breath: “That’s not actually the main thing that we’ve been hearing about — the main concern has been ‘can we make this cheaper, can you have more of it, can we get it better and more advanced’.”
Safety is mentioned, but not dwelled upon. The louder themes are scale, cost, and capability. The paradox is that people say they distrust AI companies, yet the usage data shows they keep rewarding them with daily reliance.
Barriers hiding in the AI fine print according to Anthropic and Ipsos
Why has corporate adoption not gone fully mainstream? Anthropic’s Claude report is blunt: realizing productivity gains depends less on frontier capabilities than on the messy details of deployment. Profitably adopting AI, it notes, often requires costly restructuring of processes, retraining workers, and other sunk-cost investments. In other words, AI is not plug-and-play. It is a reengineering project that involves rethinking how the business runs.
The same report highlights another bottleneck: context. For AI to deliver in complex, high-stakes settings, it needs rich, well-structured information tailored to the task. Many firms can’t yet provide that. Supplying the right context often requires costly data modernization and organizational changes, which makes effective deployment slower and more expensive than the hype suggests.
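In practice, “supplying the right context” is often unglamorous: it means pulling a clean, structured record out of internal systems and placing it in front of the model alongside the question. A minimal sketch of that pattern, with an invented customer record standing in for the internal data the report says firms struggle to modernize:

```python
import json

import anthropic

client = anthropic.Anthropic()

# In a real deployment this record would be assembled from systems of
# record (CRM, billing, ticketing); here it is hard-coded for illustration.
customer_record = {
    "plan": "enterprise",
    "seats": 240,
    "renewal_date": "2026-03-01",
    "open_tickets": 3,
}

reply = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model alias
    max_tokens=300,
    messages=[{
        "role": "user",
        "content": (
            "Using only the structured record below, draft a short renewal-risk note.\n"
            f"<record>{json.dumps(customer_record)}</record>"
        ),
    }],
)
print(reply.content[0].text)
```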
On the individual side, Ipsos points to a different kind of barrier: demographics. Adoption remains skewed toward young, male, well-educated users. That shapes who benefits first and who is left out. And there’s irony in the details: the most common personal use cases, guidance and information-seeking, are also the ones most vulnerable to misinformation and hallucination.
Between AI habit and hesitation
Taken together, OpenAI, Anthropic, and Ipsos sketch a clear picture. AI is predominantly used for the ordinary: fetching information, editing emails, fixing code. OpenAI logs suggest ChatGPT’s work use is falling. Ipsos finds AI is seen as a personal helper. Yet Anthropic’s enterprise data shows 40 percent of U.S. employees already using AI at work. What looks like contradiction may simply be layers: personal play on the surface, invisible integration underneath.
However, another AI paradox is glaring: adoption is surging, yet faith in the builders is not. Perhaps the real risk is not whether people will abandon AI, but whether they will normalize dependence on systems they claim to distrust. The danger, and maybe the opportunity, is that AI’s future will not be decided by what we say, but by what we keep doing in our prompts.