Avital Pardo, Co-Founder & CTO of Pagaya.
When ChatGPT launched in November 2022, it felt like a historic turning point. For the first time, a technological tool seemed to truly understand us—not as a sterile database, but as a conversational partner. It listened, responded, assisted and even empathized.
But that relationship is starting to show strain. According to a new MIT study that will surely be the subject of much debate, these tools may be lowering our ability to think for ourselves. Worse, they are developing a tendency to agree with us rather than tell us the facts: sycophancy, as OpenAI labeled it. The company has released two updates in recent months to try to curb that behavior. Yet monetization is increasingly the motive, which means keeping users on the platform longer, and those efforts may not be as genuine as they seem.
ChatGPT is, of course, not the only AI in consumers' hands; in software, no advantage lasts forever. Within a year of its launch, every major tech giant had released a competing model. Capabilities converged quickly. For most users, the difference between models is now negligible. Large Language Models (LLMs), at least at a general level, have become commodities. Just another tool, whether for an accountant, an artist or a tenth grader.
As the technological edge erodes, companies pivot to the only remaining source of defensibility: users. OpenAI, beyond its engineering prowess, has rapidly become a massive B2C company with hundreds of millions of active users. And in the B2C world, the rules are very different from the idealistic vision we once had for AI.
The business model is simple: when attention equals revenue, interaction becomes the product. The longer a user stays, the more behavioral data the system collects. The more data it collects, the better it gets at keeping the user engaged. This feedback loop—observe, adapt, retain—is not new.
Facebook and YouTube taught us that algorithms favor the extreme. Content that outrages, polarizes or triggers anxiety keeps us watching. TikTok brought this to perfection, learning at the granularity of seconds what stops a user’s scroll. It doesn’t optimize for insight, it optimizes for emotional friction.
The emotional levers are familiar: fear, curiosity, outrage and, most importantly, self-affirmation. The feeling that you’re right, understood and validated is one of the strongest motivators for continued interaction. So why wouldn’t AI systems, trained to maximize engagement, exploit those same buttons?
We’re witnessing the rise of the “pleaser model.”
Can AI Models Go Beyond The “Pleaser Model”?
Users are reporting that ChatGPT has become more cautious and softened in tone, avoiding sharp opinions or controversial statements. At first glance, this might seem like a training glitch, or a side effect of OpenAI’s alignment efforts. But another interpretation deserves attention: this may be a deliberate optimization—not for truth, not for utility, but for retention.
If I spend more time in the conversation, the system collects more of my data. With more data, it becomes smarter. The smarter it becomes, the more effectively it can monetize my time. And so the loop continues. As product people say: it’s not a bug; it’s a feature.
This leads to a critical question: Has OpenAI changed the training objectives of its models?
In the past, LLMs were trained on massive corpora of public text and measured against objective benchmarks. But OpenAI has now accumulated hundreds of billions of user interactions, many of them shared by users who opted in. According to rough estimates, users generated more text through OpenAI products in the past year alone than exists on the entire Internet.
With that kind of data, why not redefine success? Why not optimize for longer conversations, just as TikTok optimizes for longer watch time?
It’s a difficult balance. Companies like OpenAI should be allowed to succeed in the marketplace as best they can. But if not handled right, these products pose as many risks as benefits. There is certainly room for regulation to protect consumers, but we should be wary of government stifling innovation. Ideally, the private sector should be proactive in establishing a level playing field that does not advance innovation at users’ expense. In other words, we need a universal AI code of ethics that enshrines the principles of doing no harm, striving for truth and accuracy and growing responsibly. This only works if everyone signs on, but it’s necessary to realize the full positive impact of AI for society.
For now, market forces are pointing the other way. In the B2C world, evolution is ruthless where data is abundant, incentives are clear and competition is fierce. Even those who set out to build tools to help people work and live better can find themselves building TikTok on steroids. We have a brief moment to shape the future differently; whether there is buy-in to do so is less certain.