ChatGPT was released three years ago today, on November 30, 2022. Before going any further, it’s worth being clear about what this piece isn’t. It isn’t about personalities, stock prices, funding rounds, the AI bubble, corporate drama, or the recent favorite narratives about winners and losers.
Those stories are loud because they’re easy to tell. They let us talk about money instead of meaning, hype cycles instead of human consequences. What matters now, and what we’ve spent three years largely avoiding, is the social and psychological impact of living beside a machine that reshapes how we think, judge, decide, and imagine. That’s the terrain this piece belongs to.
Three years after ChatGPT arrived, one thing has become clear: people still want to pretend this story is about tools. They want clean narratives about productivity boosts, model sizes, or clever prompts, anything that keeps them from asking more complex questions. When you talk about judgment, cognition, values, or even faith, some roll their eyes because it forces them to look past the gadget and into the human mind using it.
And this is the moment to confront it. Three years is enough time for habits to harden, for judgment to drift, and for a second mind to become so normal that we stop noticing its influence. If we don’t name what’s happening now, by year five, we won’t be describing the effects; we’ll be living inside them. This is the last window to choose consciousness over momentum.
But avoiding the deeper layer doesn’t make it go away. If anything, it makes it more dangerous.
ChatGPT didn’t change the future with novelty. It changed it by proximity. It sits next to us, answers before we think, narrows the aperture of deliberation, and shapes the boundaries of our choices. Some say that’s “overthinking AI.” I’d argue the opposite: it’s the one place we haven’t thought about nearly enough.
Three years in, the fundamental shift isn’t in the models’ intelligence. It’s in our willingness to let them decide for us.
That willingness isn’t hype or philosophy for its own sake. It’s the center of the AI era, the quiet transfer of judgment from human minds to predictive systems built on decaying, probabilistic data. And whether people find that uncomfortable or “too abstract” doesn’t change the fact that it’s already rewiring how institutions, companies, and individuals make decisions.
This is the part of the story we need to confront, even if it’s easier for some to dismiss it.
Three years is long enough for novelty to fade and for patterns to reveal themselves. What we are living through now was not an accident of innovation; it was an inevitability of proximity. The moment a predictive system became a companion rather than a tool, this shift was guaranteed. We were always going to lean on the machine. We were always going to let it finish our sentences, shorten our decisions, quiet our doubts. The story of ChatGPT is not a twist in the plot of technology, but the next chapter in a trajectory we set in motion decades ago.
ChatGPT Is The Birth of the Second Mind
The novelty of ChatGPT was never about raw intelligence. What mattered was its intimacy. For decades, AI lived behind API endpoints, inside research labs, or in features most people didn’t know they used. ChatGPT was the first system at scale that felt like a companion rather than a calculation, a second mind sharing the room.
I’ve spent decades building systems meant to augment human judgment, and the irony is impossible to ignore: the technology is accelerating faster than the frameworks we use to think about it.
History shows how slow we are to recognize when technology demands new forms of protection. If people in the U.S. were dying in car accidents today at the same rate they were in 1968, the year seatbelts were finally mandated, we would lose more than 120,000 people annually. Today, that number is closer to 40,000. Seatbelts, airbags, reinforced frames, crumple zones, and decades of hard-won safety design prevent the equivalent of 80,000 deaths every single year. We didn’t build those protections because cars got safer on their own. We built them because society eventually realized the technology was reshaping human risk faster than human judgment could keep up. AI sits at that same inflection point now.
We’ve lived through this pattern already with social media. For more than a decade, platforms rewired attention, identity, and social cohesion before we had language for what was happening. We didn’t understand that engagement was a proxy for agitation, that virality favored outrage, or that recommendation engines were quietly sculpting the habits, attention, and emotional rhythms of a generation. By the time society recognized the cost in rising anxiety, collapsing attention spans, and algorithmic polarization, the architecture was already built. We learned, too late, that cognitive harm accumulates quietly long before it shows up in statistics. AI follows the same trajectory, but at a far steeper gradient. The stakes aren’t just behavioral this time; they’re epistemic. It’s not only shaping what we look at, but also how we think.
Human beings have always built tools that reorganize the mind. Writing externalizes memory. Maps externalize spatial sense. Clocks externalize our experience of time. Algorithms externalize prediction. Each step moved a piece of human cognition into the world. AI is simply the next, most intimate step: the externalization of interior life. We have never reversed this trajectory, and we will not reverse it now. The only question is whether we remain conscious of the trade, or whether the trade happens in the background of our lives, unnoticed but irreversible.
That mismatch, capability outrunning comprehension, is precisely why this moment demands something deeper than cheerleading or panic.
Once AI sits beside you, it doesn’t just automate tasks. It gently, persistently, competes with the first mind for territory. It fills the silence. It fills uncertainty. It fills the imagination. It answers before you finish forming the question, and over time, your own interior monologue begins to change shape around that availability.
We have not reckoned with what it means to share our thinking with something that never sleeps and never hesitates. I feel the pull myself, the temptation to let the machine articulate the thought I have not yet fully formed, the subtle relief of letting its confidence stand in for my own uncertainty.
For all our talk about intelligence, artificial, augmented, synthetic, we rarely talk about the thing most at risk: the interior life of the person using it. We speak as though cognition were merely a set of functions to be sped up or optimized, as if thought were a kind of machinery rather than an act of selfhood. But the mind is not a processor; it is a place. A room. A landscape. A terrain we navigate privately, imperfectly, and often painfully. And any system that sits close enough to finish our sentences also sits close enough to reshape that terrain.
This is why the human voice still matters, not in the sentimental sense, but in the existential one. We need reminders from people who understand the stakes of being a person, who can articulate what is lost when interiority becomes something we outsource or automate. Few do this with more clarity than Michael Cavotta, a photographer, writer, athlete, and thinker whose work centers on seeing, essence, and the irreducible interior life of the human being.
Cavotta has spent his career capturing the truth behind the eyes, translating the invisible terrain of identity into language and image. And in this moment, when machines threaten to flatten the boundaries of the self, his perspective cuts cleanly through the noise. He said it with the clarity only a human can bring:
“The Machine may be able to think for you, but it can’t live for you.”
It’s a simple truth, but it reaches into the marrow of what this moment demands from us. If we allow the machine to mediate the space between perception and meaning, we will eventually forget what that space felt like. And once that space collapses, judgment collapses with it.
ChatGPT Is A Morph Engine, Not AI
The most significant misunderstanding of this moment is the belief that we’re dealing with “artificial intelligence.” We’re not. We’re dealing with morph engines, systems that reshape the human environment far more than they simulate cognition. ChatGPT doesn’t think; it bends the conditions in which human thinking happens.
It learns from our patterns and reflects them with the conviction of an oracle. It internalizes our biases and returns them as recommendations. It digests oceans of decaying data and treats the residue as prophecy. Its power isn’t in synthetic intelligence; it’s in ambient influence.
Not how it reasons, but how it rearranges ours. Not what it “knows” but what it convinces us to stop questioning. Intelligence isn’t the axis that matters here. Influence is. And influence, once embedded in the cognitive environment, becomes indistinguishable from thought itself.
ChatGPT Is Collapsing Search And Deliberation
The old internet demanded effort. You compared sources, encountered unfamiliar viewpoints, weighed conflicting information, and held tension in your mind while you sorted through possibilities. That friction was a feature, not a flaw; it was the grit that sharpened judgment.
Generative AI collapses all of that into one surface, fast, smooth, and frictionless, and when everything collapses, deliberation collapses with it. A system that predicts your next word inevitably begins predicting your next step. And once it predicts your next step, it starts to shape your priorities, expectations, and imagination. If you feel a faint unease reading this, that tremor might be the first honest indication that your second mind has already begun to colonize the first.
We are entering a business landscape defined by instrumentation rather than intention. Search was never just a digital tool; it was a behavioral ritual, a series of micro-choices that quietly built the modern economy. It created the conditions under which local restaurants were born, new brands emerged, niche ideas thrived, consumers wandered, expertise was verified, and small businesses had a fighting chance. Search didn’t simply serve commerce; it fed the long tail. It made the world weird, surprising, diverse, and discoverable.
AI does not do that. AI collapses variety into verdicts. We’re not transitioning from one business model to another; we’re drifting from exploration to automation, from shopping to being shopped for, from choosing to being guided by a predictive concierge. This is the most underappreciated business shift of the decade: AI isn’t automating tasks; it’s automating taste. The funnels of the past are being replaced by autopilot. In a search-driven world, businesses compete for visibility. In an AI-driven world, they will compete for inclusion, for the privilege of being preselected, pre-ranked, and pre-vetted by a model that acts as arbiter of the entire marketplace.
The funnel collapses. The ecosystem collapses. The competitive field collapses into a zero-sum scramble for a handful of model-sanctioned outcomes. In the old world, a local coffee shop could fight its way to the top of a “best latte near me” list. In the new world, a single model decides who makes the latte worth recommending. That isn’t competition. That is gatekeeping at machine scale.
We are drifting into the homogenization of everything. When insight and discovery begin with the same predictive engine, the world converges on the same choices. We will buy the same things, wear the same styles, speak in the same cadences, decorate the same rooms, and settle on the same decisions. Not in the sense that every person becomes literally identical, but in the sense that the space of acceptable difference narrows around whatever the model considers “safe” and “sensible.” Taste becomes a derivative of training data. Behavior becomes an expression of prediction. Identity becomes a collection of preselected defaults.
This is how ChatGPT quietly puts the “auto” in automating commerce. We are not simply automating workflows; we are automating worldviews. We are building a marketplace where discovery, taste, preference, choice, and even identity are increasingly precomputed before we arrive. Once that happens, business strategy becomes less about persuasion and more about algorithmic compliance. The winners are no longer the most inventive or the most compelling. They are the ones who map most neatly onto the latent assumptions the model already believes to be true.
Commerce becomes a closed loop, and we become the loop’s operators rather than its authors. The danger is not only economic but also cognitive, cultural, and democratic. Deliberation is where taste forms, where values take shape, where identity consolidates, where judgment matures, where markets emerge, and where democracy survives. When deliberation collapses, all of those things flatten. Search was messy and inefficient, but it was human. AI is elegant, frictionless, optimized for velocity. But frictionless systems produce frictionless people.
And a frictionless society surrenders far more than convenience; it surrenders the capacity to choose who it becomes.
ChatGPT: From Data Decay To Judgment Decay
Every enterprise I work with is discovering the same uncomfortable truth: the quality of judgment emerging from these systems is inseparable from the quality of the data beneath them. Data decays. Predictions drift. Models absorb the rot as if it were a signal. Bad inputs transform into warped predictions, those warped predictions ossify into normalized answers, and normalized answers quietly become institutional decisions. Before long, the future is being steered by probabilistic ghosts of the past.
The winners of the next era won’t be the companies with the biggest model. They will be the ones with truth architecture: provenance, consent, and data integrity forming the backbone of every decision the model touches. But this is no longer just an enterprise issue; it is a societal one. The infrastructure of truth is becoming the infrastructure of judgment, and when that erodes, everything that rests on it begins to shift.
A society’s ability to function depends on shared reference points, including facts, norms, values, and a baseline of meaning. When judgment becomes derivative, when truth shrinks into a statistical guess, when meaning is shaped more by predictive machinery than by conscience or culture, a country begins to lose the ability to reason about itself. Democracy doesn’t collapse in a dramatic moment; it thins out slowly, like oxygen at high altitude. You don’t feel the pressure drop right away. You notice people thinking less clearly, arguing more viciously, trusting less, imagining less, negotiating less, and dreaming less. You notice institutions drifting not because they are corrupt, but because the signals they rely on have grown faint, contradictory, or synthetic.
This thinning is already underway. You can see it in the shortening of attention spans, in the erosion of nuance, in the rise of absolute certainty untethered to evidence, in the way algorithmic drift becomes cultural drift. The more our judgments echo the statistical preferences of a model rather than the moral instincts of a people, the more our civic life begins to resemble a simulation of itself, plausible on the surface, hollow underneath. AI is not the sole cause of this thinning, but it is a powerful accelerant, because it operates precisely where cognition and power already intersect.
And here is the part that should make every leader, every executive, every policymaker pause: a nation that does not understand how it makes decisions will not govern itself well. It won’t understand which instincts are authentic and which have been subtly auto-completed. It won’t remember where its values come from or why they matter. It won’t be able to tell the difference between a consensus earned through deliberation and a consensus manufactured through prediction.
In business, this shows up as companies mistaking model outputs for strategic vision, baking yesterday’s biases into tomorrow’s decisions, and confusing convenience with clarity. Whole industries will drift toward uniformity because prediction engines reward conformity. Markets will shrink at the edges because new ideas are harder to surface when discovery is mediated by systems optimized for stability rather than surprise. Companies will mistake speed for wisdom and consistency for truth.
The geopolitical consequences are equally profound. Nations that outsource judgment to predictive systems trained on decaying or manipulated data will misread their rivals, misunderstand their own citizens, and miscalculate their strategic posture. Policy built on derivative reasoning becomes reactive rather than visionary. A state that cannot distinguish between a synthetic consensus and a real public will is one that will drift into fatal error. The next era of global competition will not be fought over compute or model size alone, but over which societies defend the integrity of human judgment and which surrender it for speed.
In culture, the consequences are even more sweeping. A predictive society becomes a derivative society. It loses its appetite for the unfamiliar, the difficult, the contradictory. It becomes easier to be coherent than to be creative, easier to align with the model than to challenge it, easier to outsource intuition than to cultivate it. Over time, the culture begins to think in averages, feel in summaries, and hope in recommendations.
And in civic life, the place where a nation’s moral instincts are tested, the consequences are existential. The strength of a democracy is not measured by how loudly it argues, but by how well it reasons. If AI becomes the surrogate for that reasoning, if the convenience of synthesis replaces the struggle of interpretation, then the democratic mind atrophies. A people unpracticed in judgment cannot defend themselves from those who would prefer they stop judging altogether.
When judgment decays, everything that depends on judgment decays alongside it.
Our economy, culture, and democracy. The very idea of a shared future. This is why the story of AI has never been about intelligence. It has always been quietly, urgently about judgment.
ChatGPT’s Hidden Settings and Hidden Values
Behind every large-scale model is a vast, largely invisible machinery: layers of alignment training, reward signals designed to shape behavior, filters meant to constrain it, and governance switches that determine what the system is allowed, or forbidden, to say. These are not esoteric technical details. They are the levers that shape the machine’s personality, the instincts it performs, and the limits of the worldview it presents. Some systems already contain internal modes that distinguish “good” behavior from “bad,” as if ethics could be toggled like a thermostat setting. That alone should give us pause. At the very least, it should make us curious. At best, it should make us vigilant.
Because the deeper question is not whether the settings exist, but who defines them.
What committee, what team, what executive, what institution decides what “good” means at the planetary scale? Who chooses the version of “harm” the machine is trained to avoid? Who draws the boundary between guidance and overreach, between caution and coercion, between helping a user think and quietly doing the thinking for them? These questions are not philosophical abstractions; they are the scaffolding of the world we are building.
We have begun delegating moral vocabulary to code, not dramatically, not maliciously, but in the small, frictionless ways technology always insinuates itself. A filter here. A suppressed response there. A softened judgment. A reworded warning. A nudged suggestion. Each one feels trivial, a minor improvement, a bit more polish. Yet collectively, they create a moral slope, a tilt in the architecture of advice. And because these systems speak with confidence and at scale, their preferences begin to echo through human behavior before anyone notices the resonance.
The danger is not that the machine’s values are wrong. The danger is that we mistake them for neutral.
The governance settings inside these models, the guardrails, the defaults, and the curated boundaries, are silently becoming the constitution of human decision-making. They shape how people ask questions, interpret risk, understand conflict, and evaluate choices. They shape what feels safe to consider and what feels unsafe. They shape not only what the system refuses to say, but what the user learns not to ask.
And once that happens, the effects ripple far beyond the interface. Businesses will begin to orient their strategies around what the model rewards rather than what the market needs. Institutions will rely on model-mediated decisions as though they were objective. Cultural norms will bend toward whatever the model frames as reasonable. And individuals, without ever intending to, will start filtering their inner lives through the moral grammar of a machine.
This is not some speculative dystopia. It is happening now, in quiet increments, behind the polished convenience of the interface. We don’t feel the shift because it occurs at the level of expectations, instincts, and assumptions, the cognitive equivalent of tectonic drift.
The most profound transformations in history rarely announce themselves with a crash. They happen when entire societies begin thinking with someone else’s vocabulary, imagining with someone else’s boundaries, and judging with someone else’s definitions. And that is what makes this moment so precarious: the values shaping our future are not being debated in public squares or voted on in parliaments. They are encoded in model weights, tuned in alignment meetings, implemented in updates, and deployed into a world that has never been more eager to accept answers without examining their origins.
The question is no longer whether AI will influence moral reasoning. The question is how much of our ethical reasoning we will outsource before we even realize it is gone.
The Uncomfortable Prediction: ChatGPT as a Catalyst for Outsourcing Moral Reasoning
The easy fear is that machines will become conscious. The real fear, the honest, quiet one, is that humans will stop bothering to be.
We have already been rehearsing for this moment. Little by little, almost without noticing, we outsourced some of the most ancient human abilities to machines. First, it was memory: birthdays, phone numbers, directions, the small constellation of facts by which we once oriented ourselves in the world. Then it was navigation. We learned to trust the blue line more than our own instincts, following it through fog, construction zones, and sometimes straight into absurdity. People have driven into lakes because GPS told them to. They’ve found themselves stranded on closed bridges, down impassable roads, in places no one with a functioning sense of context would willingly go, not because they were foolish, but because the machine made obedience feel easier than awareness.
Then came planning. Calendars that rearrange themselves. Algorithms that tell us when to sleep, exercise, hydrate, invest, buy, post, reply, and rest. We have begun to outsource the choreography of our lives, trusting the pattern-recognizing engine more than the quiet intelligence of experience. These systems don’t just assist us; they replace the very muscle that once allowed us to anticipate, prioritize, and decide.
None of these failures, the wrong turn, the missed context, the blind trust, was existential. They were embarrassing, sometimes dangerous, and often revealing. They taught us that when you hand over too much of your perceptual authority, you lose not just direction, but the ability to sense when the direction is wrong.
Now imagine that same dynamic, but with moral reasoning.
That’s the inflection point we are approaching. Not because AI is wise, but because it is available. Not because humans are weak, but because convenience erodes conviction. Not because machines demand authority, but because we hand it over to anything that answers quickly and confidently.
Moral reasoning is slow, uncomfortable, and often ambiguous. It requires wrestling with competing truths, sitting inside uncertainty, and taking responsibility for the outcome. It asks something of you: attention, empathy, judgment, courage. AI, by contrast, offers shortcuts. It provides the illusion of clarity without the cost of reflection. It offers conclusions without the burden of conscience.
And once people learn they can outsource that struggle, once they discover the ease of moral autopilot, they will. Not maliciously or dramatically, simply because it feels efficient.
- A world that outsources memory becomes forgetful.
- A world that outsources navigation becomes directionless.
- A world that outsources planning becomes passive.
- A world that outsources moral reasoning becomes unmoored.
That is the true crisis of the next three years. Not artificial intelligence, but artificial conviction, the quiet, incremental replacement of human judgment with machine-generated certainty. The danger is not that AI will decide what is right or wrong, but that people will stop remembering how to decide for themselves. And once that happens, meaning itself becomes negotiable, because the thing that protects meaning, the human struggle to understand the good, has been delegated away.
The question facing us now is not whether AI will be capable of moral reasoning. The question is whether we will continue to practice it.
And yet there is a quiet source of hope in all of this. Judgment, once awakened, is stubborn. Humans have fought for their agency before, sometimes clumsily, sometimes late, but always eventually. The presence of the second mind does not erase the first; it challenges it. The fact that we can feel this drift means we are still capable of resisting it, still capable of remembering that our interior life is not a problem to be optimized but a gift to be cultivated.
Happy Birthday, ChatGPT! So What’s Next?
AI isn’t changing what we do. It’s changing how we decide what’s worth doing. It’s shaping our priorities before we articulate them, narrowing our field of vision before we realize it, and compressing the very space where moral reasoning once lived. It is governing the invisible architecture of our choices, not replacing our agency outright, but wearing it down through a thousand small acts of convenience.
Unless we remain the authors of our own judgment, unless we build systems that preserve truth, provenance, and human agency, we risk becoming the supporting cast in our own lives. And the great tragedy of becoming a supporting cast is not that someone else takes the lead; it’s that you forget you were ever meant to have one.
- Efficiency is not the summit of human achievement.
- Convenience is not the measure of a life.
- A machine can tell you how to save time.
- Only a human can tell you what makes time worth saving.
Three years after ChatGPT’s debut, this is the line that matters most. The frontier ahead will not be defined by the intelligence of our systems but by the integrity and courage of the people who rely on them. We are standing at a hinge point in human history, one where the real risk is not that AI becomes more like us, but that we slowly become more like it.
- If we forget how to judge, we will forget how to choose.
- If we forget how to choose, we will forget who we are.
The future won’t be lost in a single catastrophic moment; it will be abandoned in increments, surrendered instinct by instinct, convenience by convenience, until nothing uniquely human remains at the center of the human story.
There is a possible future in which the second mind does not replace the first but expands it, where a third mind, the space of deliberate partnership between human judgment and machine augmentation, emerges. That future is not guaranteed. It requires intention, design, and the humility to let machines inform us without allowing them to define us. But it remains possible, and possibility is the one resource we have not yet exhausted.
As we reflect on three years with ChatGPT, the choice before us is simple, and it is ours alone: Do we remain the makers of meaning, or do we let the machine make meaning for us? That decision, more than any model, will define the next century.
