In today’s column, I examine the falsehoods and growing confusion arising from the outsized hype that the latest generative AI and large language models (LLMs) supposedly act as full-on truth-tellers nowadays.
This has been sparked to a great extent by the recent release of OpenAI’s GPT-5 and the ongoing use of the widely popular ChatGPT. In short, GPT-5 includes an upgraded capability to be somewhat more forthright and less likely to tell lies. Though that’s certainly helpful, do not be misled into thinking that AI strictly abides by truth-telling. It decidedly does not.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Liar Liar Pants On Fire
In case you didn’t already know, there is a solid chance that AI will end up lying to you.
The fact is that generative AI will overreach on its data patterns and sometimes provide an answer to you that isn’t true. Most people probably think of “AI hallucinations” when considering whether AI might lie. An AI hallucination entails the AI making up facts out of thin air that are not grounded in reality or truth; see my detailed discussion at the link here.
But there are other ways of lying besides conjuring up AI hallucinations.
Another notable tendency of generative AI is that the AI mathematically seeks to give you an answer even if no answer seems available. You can pretty much blame the AI makers for this overall tendency. The AI makers tune the AI to strenuously attempt to provide answers. Why so? Because the AI makers know that if the AI doesn’t answer your questions, you’ll find some other competing AI that does.
In that sense, there is a calculated risk involved. An AI maker computationally pushes their AI to give answers as often as it can, regardless of the efficacy of those answers. A user will presumably be satisfied that they got an answer. If the answer happens to be incorrect, well, maybe the user won’t realize this unfortunate result. The AI gets away scot-free via puffery and bluffing. The user might also simply shrug off incorrect answers as just an AI fluke.
The gist is that the ugly downsides of shaping the AI to give out lies are offset by the alluring upside of the AI always giving out answers (or nearly so). AI makers must consciously decide beforehand what they want their AI to do. A delicate balance arises: let the AI be a liar and be deceptive, which garners user loyalty and helps the AI maker make money, versus the concern that users are being told falsehoods by the AI.
For more analysis on the worrisome matter of AI lying and being deceptive, see my coverage at the link here.
Knocking Down The Lies
The backlash about AI lying and being deceptive has become large enough that AI makers realize they can only push the envelope so far. The rise of AI ethics and the looming legal threat of new laws punishing AI makers for AI that excessively lies are added factors in how AI ought to be suitably shaped (see my discussion at the link here).
Reputational harm to the AI maker from having their AI publicly labeled as deceptive has also stoked interest in cleaning up the AI to be more forthright.
Gradually, we are witnessing AI makers going out of their way to reduce the deceptiveness of their AI. The usual approach consists of guiding the AI to freely admit when an answer is not available, cannot be generated, or has an abundantly low probability of being correct. Those circumstances then trigger the AI to tell the user that an answer is not going to be provided for such a question or request.
The common approach is for the AI to tell the user that the AI doesn’t know the answer to the question posed. This is often worded as the AI stating, “I don’t know the answer,” which carries an unsavory anthropomorphism that makes AI ethicists cringe. The AI could be programmed to perhaps say, “An answer could not be generated based on the data in the AI system,” but instead, the AI makers juice things up by using the “I don’t know” as though the AI is sentient (it is not).
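To give a rough feel for the kind of guardrail involved, here is a minimal sketch in Python. To be clear, this is my own illustrative mock-up, not how any particular AI maker implements the behavior; the generate_answer helper, the confidence threshold, and the refusal wording are all hypothetical placeholders.

```python
# Minimal sketch of a "decline when unsure" guardrail (hypothetical names throughout).
# A real system would derive confidence from model signals, not from a stub like this.

REFUSAL_TEXT = "An answer could not be generated based on the data in the AI system."
CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff; real systems tune this empirically


def generate_answer(question: str) -> tuple[str, float]:
    """Placeholder for a model call that returns (answer, estimated confidence)."""
    # In practice, this would call the underlying LLM and score the response.
    return "A plausible-sounding answer", 0.42


def answer_or_decline(question: str) -> str:
    """Return the model's answer only when its confidence clears the threshold."""
    answer, confidence = generate_answer(question)
    if confidence < CONFIDENCE_THRESHOLD:
        return REFUSAL_TEXT
    return answer


if __name__ == "__main__":
    print(answer_or_decline("What was the exact population of Rome in 50 BC?"))
```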
At least the concerted effort to reduce the pace of AI lying is an encouraging trend, with AI becoming less likely to try to pull the wool over the eyes of users.
GPT-5 Tries To Reduce Proclivities
The recent release of the long-awaited GPT-5 by OpenAI was preceded by a huge amount of speculation about what the new AI would be like. Some pundits were going haywire and predicting that GPT-5, also loosely referred to as ChatGPT 5, would be the advent of artificial general intelligence (AGI). Namely, AGI is supposed to be AI that is on par with all human intellect, but that’s not what GPT-5 turned out to be.
As per my assessment at the link here, GPT-5 is a handy upgrade that deserves various accolades, but it is not anywhere close to the aspirations of achieving AGI.
One aspect worth mentioning in the context of reducing AI lying is that OpenAI did make some improvements on that laudable front. According to the official OpenAI blog post entitled “Introducing GPT-5”, posted on August 7, 2025, these salient points were noted (excerpts):
- “In order to achieve a high reward during training, reasoning models may learn to lie about successfully completing a task or be overly confident about an uncertain answer.”
- “Alongside improved factuality, GPT‑5 (with thinking) more honestly communicates its actions and capabilities to the user — especially for tasks which are impossible, underspecified, or missing key tools.”
- “On a large set of conversations representative of real production ChatGPT traffic, we’ve reduced rates of deception from 4.8% for o3 to 2.1% of GPT‑5 reasoning responses.”
- “While this represents a meaningful improvement for users, more work remains to be done, and we’re continuing research into improving the factuality and honesty of our models.”
The crux is that though the deception rate has been reduced, it still exists. It isn’t zero. Indeed, if you extrapolate the noted stats, the rate was apparently cut by more than half, but it remains rather substantial. You might broadly say that out of every 100 answers, about 5 were potentially falsehoods before, and now it is 2 or so.
Directionally, that’s laudable.
On an absolute basis, there is a lot of AI lying still happening.
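To put rough numbers on that arithmetic, here is a small back-of-the-envelope calculation in Python that uses only the two percentages quoted from OpenAI’s post; treating them as “falsehoods per 100 answers” is my own simplification.

```python
# Back-of-the-envelope math on the deception rates quoted in OpenAI's post.
o3_rate = 4.8 / 100    # reported rate for o3 (~5 per 100 answers)
gpt5_rate = 2.1 / 100  # reported rate for GPT-5 reasoning responses (~2 per 100)

relative_reduction = 1 - (gpt5_rate / o3_rate)
print(f"Per 100 answers: about {o3_rate * 100:.0f} before versus {gpt5_rate * 100:.0f} now")
print(f"Relative reduction: about {relative_reduction:.0%}")  # roughly 56%, i.e., more than half
```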
Overstated Honesty
The media reaction to the rise in AI honesty has gone a bit overboard.
Some have misleadingly touted that the latest AI is a truth-teller. That seems to be shading the truth of the matter. You might contend that the AI is less likely to lie. I get that. The real truth is that lying is not eviscerated. It is still on the table.
An irony of sorts is that the AI tending to be somewhat more truthful has a few bad sides to it. I know that seems like an odd statement. How could AI being more truthful and being less of a liar have any downsides?
A whopping concern is that people are going to let down their guard.
Here’s what I mean. Suppose you are used to using AI and know that at times the AI will lie in its answers. You always have your Spidey-sense going. All responses from the AI are given a judicious eye. The aim is to stay on your toes and be knowingly skeptical of every response from the AI.
Time moves forward. The AI has been improved. It lies less often. Gradually, you see lies only rarely. The new norm is that you fall asleep at the wheel. Rather than being on guard, you have let your guard down.
I would venture that this phenomenon is further stirred by the media coverage. Besides your own personal experience of using AI and encountering fewer lies, the media is clamoring that modern-era AI is as good as Honest Abe.
It’s a double whammy: you are mentally convinced that AI is now a trustworthy partner, and you accept its answers without review or pushback.
What You Need To Do
The bottom line is that you must remain ever vigilant when using generative AI.
When the AI provides answers to your questions, make sure to double-check the response:
- Does the answer seem sensible and logical?
- Are there other sources that can affirm the response?
- Did you try rewording the question and asking again, aiming to see if the same answer arises?
- Have you told the AI not to lie (this somewhat helps, but isn’t a silver bullet)?
- Is the answer important, or does it not carry much weight for you?
You might consider asking another generative AI the same question, which might reveal a different answer and give you a leg up on spotting that one of them is perhaps lying while the other is being truthful (a simple way to do this is sketched below). As an aside, watch out for the fact that since the preponderance of popular AIs are based on scanning and pattern-matching the same or similar data, they will often give the same answers; see my analysis at the link here.
Birds of a feather can all possibly produce the same false answer.
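For those who want to try the cross-checking idea programmatically, here is a minimal sketch. The ask_model_a and ask_model_b functions are hypothetical stand-ins for whichever two AI services you happen to use, and the crude string comparison is only a placeholder for actually reading and judging both answers yourself.

```python
# Sketch of cross-checking one question against two different AI services.
# ask_model_a / ask_model_b are hypothetical stand-ins for real API calls
# to two different providers; swap in the client code of your choice.

def ask_model_a(question: str) -> str:
    """Placeholder for a call to the first generative AI service."""
    return "Answer from model A"


def ask_model_b(question: str) -> str:
    """Placeholder for a call to the second generative AI service."""
    return "Answer from model B"


def cross_check(question: str) -> None:
    """Pose the same question to both models and flag any disagreement for review."""
    answer_a = ask_model_a(question)
    answer_b = ask_model_b(question)
    if answer_a.strip().lower() == answer_b.strip().lower():
        # Agreement is not proof of truth: models trained on similar data
        # can produce the same false answer ("birds of a feather").
        print("Both models agree; still worth a sanity check:\n", answer_a)
    else:
        print("The models disagree; treat both answers skeptically.")
        print("Model A:", answer_a)
        print("Model B:", answer_b)


if __name__ == "__main__":
    cross_check("In what year did the company first turn a profit?")
```

The point of the sketch is simply that disagreement between models is a cheap signal to dig deeper before trusting either answer.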
Getting AI To Be Upfront
A vociferous viewpoint some have is that AI should be programmed to constantly remind users that the AI might lie or be deceptive. Each conversation should begin with such a declaration. Maybe the proclamation should be attached to every answer generated by the AI. Make it all front and center.
Of course, the retort is that it would be irritating and exasperating to be continually deluged with alerts about AI deceptiveness. It might have the reverse effect in that people would opt to ignore the warnings due to the in-your-face, annoying cautions. Perhaps people are smart enough to judge the answers from AI. No need to bop people about their mindful heads on this.
A final thought for now.
Mark Twain famously remarked that “A man is never more truthful than when he acknowledges himself a liar.” It seems that we should get AI to acknowledge that it is a liar. I just hope that we don’t then fall into the mental trap that the stark admission means that all of the answers by the AI are truthful.
That would be akin to getting out of the frying pan and landing in the fire. No dice.