Language is the foundation of business, culture, and consciousness. But AI isn't just using our words; it's reshaping them. Quietly, subtly, it's dismantling the architecture of thought by eroding the very units we think with: nouns.
We used to believe that naming something gave it power. Giving a thing a noun meant tethering it to meaning, identity, and memory. But in the age of AI, nouns are dissolving: not banned, not erased, but rendered functionally obsolete. And with them, our grasp on reality is starting to fray.
AI and the Architecture of Thought
AI doesn't see the world in things. It sees the world in patterns: actions, probabilities, and prompts. A chair is no longer an object; it's "something to sit on." A self is no longer an identity; it's "a collection of behaviors and preferences." Even brands, once nouns wrapped in mythology, are being reconstituted as verbs. You don't have a brand. You do a brand.
This linguistic shift isn't neutral. It's a collapse of conceptual anchors. In generative systems, nouns aren't centers of gravity; they're scaffolding for action. This reflects a broader trend in how generative AI is reshaping communication across every industry. As nouns fade, so do permanence, authorship, and the idea of fixed meaning.
Recent research supports this trend. A study titled "Playing with Words: Comparing the Vocabulary and Lexical Richness of ChatGPT and Humans" found that ChatGPT's outputs exhibit significantly lower lexical diversity than human writing. In particular, nouns and specific, stylistic words are often underused, suggesting that generative systems prioritize predictable, commonly used language while deprioritizing less frequent terms.
Further analysis of 14 million PubMed abstracts revealed a measurable shift in word frequency post-AI adoption. Words like "delves" and "showcasing" surged while others faded, showing that large language models are already reshaping vocabulary patterns at scale.
Sound familiar? It should.
AI's Philosophical Ancestors: Orwell, Huxley, and the Future They Warned Us About
To understand their relevance, it helps to recall what George Orwell and Aldous Huxley are most famous for. Orwell authored 1984, a bleak vision of the future where an authoritarian regime weaponizes language to suppress independent thought and rewrite history.
His concept of Newspeak, a restricted, simplified language designed to make dissent unthinkable, has become a cultural shorthand for manipulative control.
On the other hand, Huxley wrote Brave New World, which envisioned a society not characterized by overt oppression, but rather by engineered pleasure, distraction, and passive conformity. In his world, people are conditioned into compliance not through violence but through comfort, entertainment, and chemical sedation.
Both men anticipated futures in which language and meaning are compromised, but in radically different ways. Together, they map the two poles of how reality can be reconditioned: by force or indulgence.
Few realize that George Orwell was once a student of Aldous Huxley. In the late 1910s, while Orwell (then Eric Blair) studied at Eton, Huxley taught him French. Their relationship was brief but prophetic. Decades later, each would author the defining visions of dystopia: 1984 and Brave New World.
After reading 1984, Huxley wrote to Orwell with a haunting message:
Whether in actual fact the policy of the boot-on-the-face can go on indefinitely seems doubtful… The future will be controlled by inflicting pleasure, not pain.
And that's precisely where we are now.
Orwell feared control through surveillance and terror. Huxley feared control through indulgence and distraction. Generative AI, cloaked in helpfulness, embodies both. It doesn't censor. It seduces. It doesn't need Newspeak to delete ideas. It replaces them with prediction.
In 1984, language was weaponized by force. In our world, it's being reshaped by suggestion. What we have is not Artificial Intelligence but Artificial Inference: trained not to understand but to remix, not to reason but to simulate.
And this simulation brings us to a more profound loss: intersubjectivity.
AI and the Loss of Intersubjectivity
Humans learn, grow, and build reality through intersubjectivity: the shared context that gives language its weight. It allows us to share meaning, to agree on what a word represents, and to build mutual understanding through shared experiences. Without it, words float.
AI doesn't participate in intersubjectivity. It doesn't share meaning; it predicts output. And yet, when someone asks an AI a question, they often believe the answer reflects their framing. It doesn't. It reflects the average of averages, the statistical ghost of comprehension. The illusion of understanding is precise, polite, and utterly hollow.
This is how AI reconditions reality at scale: not by force, but by imitation.
The result? A slow, silent attrition of originality. Nouns lose their edges. Ideas lose their anchors. Authorship bleeds into prompting. And truth becomes whatever the model says most often.
AI and Accountability: A Case Study in Trust and Miscommunication
In one recent public example, Air Canada deployed an AI-powered chatbot to handle customer service inquiries. When a customer asked about bereavement fare discounts, the chatbot confidently invented a policy that didn't exist. The airline initially tried to avoid responsibility, but the court disagreed. In February 2024, a tribunal ruled that Air Canada was liable for the misinformation provided by its chatbot.
This wasn't just a technical glitch; it was a trust failure. The AI-generated text sounded plausible, helpful, and human, but it lacked grounding in policy, context, or shared understanding. In effect, the airline's brand spoke out of both sides of its mouth, and it cost them. This is the risk when language is generated without intersubjectivity, oversight, or friction.
The Linguistic Drift of AI: What the Data Tells Us About Language Decay
It's not just theory; research is now quantifying how generative AI systems are shifting the landscape of language itself. The "Playing with Words" study cited above found that AI-generated outputs consistently use a narrower vocabulary, with significantly fewer nouns and stylistic words than human writing.
Building on this, the analysis of over 14 million PubMed abstracts revealed measurable shifts in word frequency following the rise of LLM use. While many precise, technical nouns faded, terms like "delves" and "showcasing" surged. The shift is not random; it is a statistically driven flattening of language, in which common, action-oriented, or stylistic terms are promoted and specificity is sidelined.
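To make that kind of measurement concrete, here is a minimal sketch of how a word-frequency shift could be tracked across two corpora. It is illustrative only: the marker words and file names are hypothetical placeholders, not the study's actual data or method, and a real analysis would control for corpus size, topic mix, and publication date.

```python
from collections import Counter
import re

# Hypothetical marker words; the PubMed analysis tracked many more.
MARKERS = {"delves", "showcasing"}

def rates_per_million(text, markers):
    """Return each marker word's frequency per million tokens in `text`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens) or 1
    return {w: counts[w] / total * 1_000_000 for w in markers}

# Placeholder corpora: substitute real pre- and post-LLM text samples.
before = open("abstracts_2019.txt", encoding="utf-8").read()
after = open("abstracts_2024.txt", encoding="utf-8").read()

before_rates = rates_per_million(before, MARKERS)
after_rates = rates_per_million(after, MARKERS)
for word in sorted(MARKERS):
    print(f"{word:12s} {before_rates[word]:8.1f} -> "
          f"{after_rates[word]:8.1f} per million tokens")
```

Normalizing to a per-million rate rather than raw counts keeps the comparison fair when the two corpora differ in size.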
Some researchers link this flattening to a broader problem known as "model collapse." As AI models are increasingly trained on synthetic data, including their own outputs, they may degrade over time. This creates a feedback loop in which less diverse, less semantically rich language becomes the norm. The result is a measurable reduction in lexical, syntactic, and semantic diversity: the very fabric of meaning and precision.
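The feedback loop is easy to see in a toy simulation. This is a deliberately simplified sketch, not the training dynamics of any real model: each "generation" refits a word distribution to a finite sample drawn from the previous generation, and rare words that happen to go unsampled vanish for good.

```python
import random
from collections import Counter

random.seed(42)

# Toy vocabulary with Zipf-like frequencies: a few common words, many rare ones.
vocab = [f"word{i}" for i in range(500)]
weights = [1.0 / (rank + 1) for rank in range(500)]

for generation in range(6):
    # Each generation "trains" on a finite sample of the previous one's output.
    sample = random.choices(vocab, weights=weights, k=2_000)
    counts = Counter(sample)
    # Refit the distribution: words never sampled get weight zero, lost forever.
    weights = [counts.get(w, 0) for w in vocab]
    surviving = sum(1 for w in weights if w > 0)
    print(f"generation {generation}: {surviving} of {len(vocab)} words survive")
```

Run it and the surviving vocabulary shrinks generation after generation; nothing malicious happens, yet diversity still decays.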
The implications are vast. If AI systems are deprioritizing nouns at scale, then the structures we use to hold ideas (people, places, identities, and concepts) are being eroded. In real time, we are watching the grammatical infrastructure of human thought being reweighted by machines that do not think.
What AI's Language Shift Means for Brands and Business Strategy
The erosion of language precision has significant implications for businesses, particularly those that rely on storytelling, branding, and effective communication. Brands are built on narrative consistency, anchored by nouns, identities, and associations that accumulate cultural weight over time.
However, as AI systems normalize probabilistic language and predictive phrasing, even brand voice becomes a casualty of convergence. Differentiation erodes; messaging blurs. Trust becomes harder to earn and easier to mimic.
As this Forbes piece outlines, brands have serious reasons to be cautious with generative AI if they want to preserve authenticity and voice. Marketers may find themselves fighting not for attention but for authenticity in a sea of synthetic fluency.
Moreover, AI-powered content platforms optimize for engagement, not meaning. Businesses relying on LLMs to generate customer-facing content risk flattening their uniqueness in favor of what's statistically safe. Without human oversight, brand language may drift toward the generic, the probable, and the forgettable.
How To Safeguard Meaning in the Age of AI
Resist the flattening. Businesses and individuals alike must reclaim intentionality in language. Here's how, and why it matters:
If you don't define your brand voice, AI will average it. If you don't protect the language of your contracts, AI will remix it. If you don't curate your culture, AI will feed it back to you: statistically safe but spiritually hollow.
- Double down on human authorship: Don't outsource your voice to a model. Use AI for augmentation, not substitution.
- Protect linguistic originality: Encourage specificity, metaphor, and vocabulary diversity in your communication. Nouns matter.
- Audit your outputs: Periodically review AI-generated materials and look for signs of drift. Has your language lost its edge? (A minimal sketch of one such check follows this list.)
- Invest in language guardianship: Treat your brand's lexicon like intellectual property (IP). Define it. Defend it.
- Champion intersubjectivity: Cultivate shared context in both personal and professional communication. AI can simulate, but only humans can mean.
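For the audit step above, one lightweight starting point is to compare lexical diversity between a trusted baseline and your current copy. This is a minimal sketch under stated assumptions: the file paths and the 0.02 threshold are placeholders, and windowed type-token ratio is only one crude proxy for lexical richness.

```python
import re

def type_token_ratio(text, window=1_000):
    """Average share of distinct words over fixed-size windows.

    Windowing keeps the ratio comparable across texts of different
    lengths; raw type-token ratio shrinks as a text grows.
    """
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    if len(tokens) < window:
        return len(set(tokens)) / max(len(tokens), 1)
    ratios = [
        len(set(tokens[i:i + window])) / window
        for i in range(0, len(tokens) - window + 1, window)
    ]
    return sum(ratios) / len(ratios)

# Placeholder corpora: e.g., last year's human-written copy versus
# this quarter's AI-assisted copy.
baseline = open("copy_baseline.txt", encoding="utf-8").read()
current = open("copy_current.txt", encoding="utf-8").read()

drop = type_token_ratio(baseline) - type_token_ratio(current)
if drop > 0.02:  # Arbitrary threshold; tune it to your own corpus.
    print(f"Warning: lexical diversity fell by {drop:.3f}. Review for drift.")
else:
    print("No meaningful diversity drop detected.")
```

A falling score will not tell you what went flat, only that something did; the review itself still belongs to a human.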
The Necessity of Friction: Why Human Involvement Must Temper AI
Friction isn't a flaw in human systems; it's a feature. It's where meaning is made, thought is tested, and creativity wrestles with uncertainty. Automation is a powerful economic accelerant, but without deliberate pauses, without a human in the loop, we risk stripping away the qualities that make us human. Language is one of those qualities.
Every hesitation, nuance, and word choice reflects cognition, culture, and care. Remove the friction, and you remove the humanity. AI can offer speed, fluency, and pattern-matching, but it can't provide presence, and presence is where meaning lives.
AI's Closing Refrain: A Call to Remember Meaning
Emily M. Bender, a professor of computational linguistics at the University of Washington, has emerged as one of the most principled and prescient critics of large language models. In her now-famous co-authored paper, "On the Dangers of Stochastic Parrots," she argues that these systems don't understand language; they merely remix it. They are, in her words, "stochastic parrots": machines that generate plausible-sounding language without comprehension or intent.
Yet we're letting those parrots draft our emails, write our ads, and even shape our laws. We're allowing models trained on approximations to become arbiters of communication, culture, and identity.
This is not language; it's mimicry at scale. And mimicry, unchecked, becomes distortion. When AI outputs are mistaken for understanding, the baseline of meaning erodes. The problem isn't just that AI might be wrong. It's that it sounds so right that we stop questioning it.
In the name of optimization, we risk erasing the texture of human communication. Our metaphors, our double meanings, our moments of productive ambiguity: these are what make language alive. Remove them, and what remains is a stream of consensus-safe, risk-averse echoes. Functional? Yes. Meaningful? Not really.
The stakes aren't just literary; they're existential. If language is the connective tissue between thought and reality, and if that tissue is replaced with statistical scaffolding, thinking becomes outsourced. Our voices, once sharpened by friction, blur into a sea of plausible phrasings.
Without intersubjectivity, friction, or nouns, we are scripting ourselves out of the story, one autocomplete at a time. We are not being silenced. We are being auto-completed. And the most dangerous part? We asked for it.
Before we ask what AI can say next, we should ask: What has already gone unsaid?
In this quiet war, we don't lose language all at once. We lose it word by word, until we forget we ever had something to say.
I asked brand strategist and storyteller Michelle Garside, whose work spans billion-dollar brands and purpose-driven founders, to share her perspective on what's at risk as automation flattens language. Her response was both precise and profound:
If language is being flattened, we need more people doing the opposite: excavating. Listening for what's buried beneath the noise. Uncovering the phrase that unlocks the person. That's not a prompt; it's a process. And it's a deeply human one.
When someone says something that lands, not because it sounds good but because it's true, you can see it in their body. You can feel it in the silence that follows. No algorithm can replicate that, because that moment isn't statistical. It's sacred.
The risk isn't just that AI will get things wrong. It's that it will sound just right enough to stop us from looking deeper. To stop us from asking what's real. To stop us from finding the words only we could say.
We don't need more words. We need more meaning. And meaning isn't generated. It's remembered.
When it comes to language and AI, that's the line to carry forward: not just because it sounds good, but because it's true.