Ed Zitron is an AI skeptic.
He’s the CEO of EZPR, a media and public relations company, and he hosts the Better Offline podcast with iHeartRadio and Cool Zone Media. His book Why Everything Stopped Working is scheduled to be published by Penguin Random House in 2026. He writes the newsletter Where’s Your Ed At.
He agreed to answer a few questions about his take on artificial intelligence, and here’s how it went:
John Navin: You’ve been described as an AI skeptic. What is your central objection to artificial intelligence?
Ed Zitron: So, “artificial intelligence” is this vast, ungainly term that can refer to everything from the algorithms that choose what ads to run to Large Language Models like GPT or Claude.
The AI bubble has been inflated by conflating all kinds of AI with Large Language Models, as if the hundreds of billions of dollars of capex are going into things other than building out capacity for ChatGPT, or capacity for AI compute demand that I don’t believe exists.
AI issues
My objections are fairly straightforward:
- Large Language Models are inherently incapable of the kind of automation that boosters, investors and analysts have claimed. Their outputs are probabilistic – they are always guessing what the next thing to do should be rather than exercising any actual “intelligence” or “skill” – because LLMs do not have consciousness or “thoughts.”
- Generative AI companies are horribly unsustainable, burning massive amounts of money to make meager revenues. By my estimates, across all generative AI companies, hyperscalers and neoclouds, there’s only around $61 billion in total revenue in AI, and that’s off of hundreds of billions of dollars of capital expenditures and venture capital.
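(To put those two numbers side by side, here is a quick back-of-the-envelope sketch. The $61 billion revenue figure is Zitron’s estimate; the capex figure is a purely hypothetical stand-in for “hundreds of billions.”)

```python
# Zitron's estimated industry-wide AI revenue versus the capital
# flowing in. The capex figure is a hypothetical placeholder for
# "hundreds of billions of dollars," not a number from the interview.
total_ai_revenue_b = 61   # Zitron's estimate, in $ billions
assumed_capex_b = 400     # hypothetical, in $ billions

revenue_per_capex_dollar = total_ai_revenue_b / assumed_capex_b
print(f"Revenue per dollar of capex: ${revenue_per_capex_dollar:.2f}")
# -> roughly $0.15 of revenue (not profit) for every dollar invested
```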
Navin: How would you describe the recent announcement of an Nvidia/OpenAI partnership?
Zitron: It’s a farce.
Nobody really knows exactly how this deal will work, but based on reports, OpenAI will receive $10 billion in a month once the deal is finalized – and it isn’t finalized, by the way! – and then the other $90 billion will be contingent on them building 9 or 10 gigawatts of data centers, which by my calculations will cost at least $32.5 billion per gigawatt and take about 2.5 years per gigawatt to build.
While many have misreported that this money would be “used to build data centers,” the problem is far more obvious: OpenAI cannot afford to build a single data center, nor can they afford the $300 billion in compute they’ve promised to pay Oracle starting, I believe, next year or 2027 (again, all of this is very vague!).
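(For context, here is the arithmetic behind that claim as a short sketch, using only the figures Zitron cites above; the per-gigawatt cost and build time are his estimates.)

```python
# Nvidia/OpenAI deal arithmetic, using the figures cited above.
upfront_b = 10        # $10B reportedly paid once the deal closes
contingent_b = 90     # remaining $90B tied to the data-center buildout
gigawatts = 10        # "9 or 10 gigawatts"; take the high end
cost_per_gw_b = 32.5  # Zitron's estimate, $ billions per gigawatt

total_build_cost_b = gigawatts * cost_per_gw_b
total_deal_b = upfront_b + contingent_b
print(f"Estimated buildout cost: ${total_build_cost_b:.0f}B")  # $325B
print(f"Total Nvidia money:      ${total_deal_b}B")            # $100B
print(f"Shortfall:               ${total_build_cost_b - total_deal_b:.0f}B")
# -> the full buildout would cost over three times what the deal pays
```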
Navin: What is your definition of “generative AI” and what are the issues it creates or may create?
Zitron: Generative AI is shorthand for Large Language Models, which are in and of themselves the entire focal point of this bubble.
I believe the term “generative AI” is problematic because it suggests LLMs have intelligence, which they do not.
Navin: OpenAI admits that AI hallucinations are “mathematically inevitable.” What are your thoughts on this problem?
Zitron: My thoughts are that this problem has been discussed for years, has always been an issue, will continue to be an issue, and ultimately makes generative AI unfit for the kind of mass-scale automation its boosters say is possible.
Navin: Microsoft and Nvidia “rent out” GPUs. What does that mean and what is its relationship to AI?
Zitron: NVIDIA doesn’t rent out GPUs; they sell them to companies like Microsoft, who put thousands of them inside massive, expensive servers, “clustering” them using high-speed networking. These GPUs are then rented to customers who want to run their own AI models (or instances of AI models) on Microsoft’s hardware, and Microsoft also uses their GPUs to power services like Microsoft 365 Copilot and GitHub Copilot.
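(As an illustration of the rental model Zitron describes – buy GPUs, cluster them, rent them by the hour – here is a minimal payback sketch. Every number in it is a hypothetical placeholder, not a figure from the interview.)

```python
# Illustrative GPU-rental economics for a hyperscaler. All numbers
# below are hypothetical placeholders for the sake of the sketch.
gpu_purchase_price = 30_000   # hypothetical cost per GPU, dollars
rental_rate_per_hour = 2.50   # hypothetical rental price, $/GPU-hour
utilization = 0.60            # hypothetical share of hours actually rented

hours_per_year = 24 * 365
annual_rental_revenue = rental_rate_per_hour * hours_per_year * utilization
payback_years = gpu_purchase_price / annual_rental_revenue
print(f"Annual revenue per GPU: ${annual_rental_revenue:,.0f}")
print(f"Years to recoup the GPU alone: {payback_years:.1f}")
# Servers, networking, power and cooling would stretch this further.
```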
Navin: The Wall Street Journal this month reported that tech leaders say it’s “nearly impossible to measure the impact of AI on business productivity.” Can you explain how much of a problem this might be?
Zitron: It’s a pretty big one considering that this is apparently some sort of world-changing technology. If you can’t measure the impact of something you are paying for, how do you know if it’s worth paying for?
Navin: What’s the connection between “AI compute,” “hyperscalers,” and the potential revenue for businesses involved?
Zitron: AI compute is the term for hyperscalers renting out those GPUs, and the revenue they’re making is, for the most part, quite small. Outside of OpenAI’s compute – $10 billion of Azure cloud revenue paid at-cost, meaning that Microsoft almost certainly takes a loss – Microsoft is barely making $3 billion in total revenue from AI. That figure includes AI compute, but compute isn’t all of it, because the total also covers Microsoft 365 Copilot and GitHub Copilot.
Navin: What’s the problem with the AI “co-workers” that Anthropic and OpenAI are developing?
Zitron: LLMs are not good at taking distinct actions, and AI “agents” don’t exist because of the lack of reliability that’s present in any probabilistic model. These “co-workers” they’re claiming to make are non-existent.
Navin: One reader describes AI as “a form of capitalism dependent on data brokering” and talks about teaching students about responsible use. What do you make of that?
Zitron: I think this is patently false.
Navin: Why do you say that?
Zitron: AI is a marketing term, and in the case of generative AI, it’s an attempt to create the new growth vehicle for a tech industry that’s running out of hypergrowth ideas (see: https://www.wheresyoured.at/rotcombubble/). It is not “a form of capitalism dependent on data brokering” unless you consider them training on the internet, and even that is hardly what we’d call data brokering – it’s just plain old theft.
Because this person is referring to “teaching students about responsible use,” it’s clear they mean generative AI, so I’ll be clear that they’re categorically wrong here, because generative AI is what happens when financialization captures innovation.
LLMs – because they can either be run independently or plugged into theoretically anything – are a magical-sounding tool, a super-powered “thing” that can suddenly create entire new consumer and SaaS business lines, at least in theory.
Does it work in practice? No.
Sources tell me that Microsoft only has 8 million active paid licenses of Microsoft 365 Copilot. Nobody wants these features! They’re not popular! And this is in part because LLMs’ outcomes and abilities are all mostly the same – summarizing, searching, generating, and so on – meaning that it’s hard to create a truly differentiated product.
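(A quick sanity check on that figure: Microsoft’s published list price for Microsoft 365 Copilot is $30 per user per month – a number from Microsoft’s pricing, not from the interview – which puts a rough ceiling on the revenue those licenses can generate.)

```python
# Rough ceiling on Microsoft 365 Copilot revenue from the figure above.
# The $30/user/month list price is Microsoft's published price; actual
# enterprise pricing may be discounted, so real revenue is likely lower.
paid_licenses = 8_000_000  # Zitron's sourced figure
list_price_per_month = 30  # Microsoft 365 Copilot list price, dollars

annual_revenue_b = paid_licenses * list_price_per_month * 12 / 1e9
print(f"Implied annual revenue at list price: ${annual_revenue_b:.2f}B")
# -> about $2.9B/year at full list price, before any discounting
```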
And the reason I say it’s financialization capturing innovation is that generative AI is hellishly expensive.
Perplexity spent 164% of their revenues on AWS, Anthropic and OpenAI last year – and despite what some say, the cost of inference (creating outputs) is going up (https://blog.kilocode.ai/p/future-ai-spend-100k-per-dev), all while the actual abilities of these models (and thus the products connected to them) mostly stay the same.
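(To spell out what spending 164% of revenue on compute means, here is a normalized sketch; the percentage is Zitron’s figure, and the $1.00 of revenue is just a unit, not Perplexity’s actual number.)

```python
# What "spent 164% of revenues on compute" means in dollar terms.
# Revenue is normalized to $1.00; only the 164% ratio comes from above.
revenue = 1.00                  # normalized: every dollar of revenue
compute_spend = 1.64 * revenue  # AWS, Anthropic and OpenAI bills

gross_margin = (revenue - compute_spend) / revenue
print(f"Compute cost per $1 of revenue: ${compute_spend:.2f}")
print(f"Gross margin before any other costs: {gross_margin:.0%}")
# -> -64%: 64 cents lost on every dollar earned, before payroll,
#    marketing, or anything else
```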
No artificial intelligence was used in the writing of this post.
More analysis and commentary at johnnavin.substack.com.