In today’s column, I examine the rising tide of AI psychosis and identify a useful means of differentiating how this malady arises. A four-square matrix is used to clarify the human-AI interactions involved. This serves as a handy means of understanding where AI psychosis is heading and what can be done about this disconcerting and weighty matter.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that involves mental health aspects. The evolving advances and widespread adoption of generative AI have principally spurred this rising use. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
Emergence Of AI Psychosis
There is a great deal of angst right now about people having unhealthy chats with AI. Lawsuits are being filed against various AI makers. The concern is that whatever AI safeguards might have been put in place are insufficient and are allowing people to incur mental harm while using generative AI.
The catchphrase of AI psychosis has arisen to describe all manner of trepidations and mental maladies that someone might get entrenched in while conversing with generative AI. Please know that there isn’t any across-the-board, fully accepted, definitive clinical definition of AI psychosis; thus, for right now, it is more of a loosey-goosey determination.
Here is my strawman definition of AI psychosis:
- AI Psychosis (my definition): “An adverse mental condition involving the development of distorted thoughts, beliefs, and potentially concomitant behaviors as a result of conversational engagement with AI such as generative AI and LLMs, often arising especially after prolonged and maladaptive discourse with AI. A person exhibiting this condition will typically have great difficulty in differentiating what is real from what is not real. One or more symptoms can serve as telltale clues of this malady, and they customarily appear as a connected set rather than in isolation.”
For an in-depth look at AI psychosis and especially the co-creation of delusions via human-AI collaboration, see my recent analysis at the link here.
Human Side Of Human-AI Interactions
Let’s mindfully unpack the human-AI interactions taking place when using generative AI and large language models (LLMs). The idea is to ferret out what might lead people down the AI psychosis rabbit hole.
One key consideration on the human side of human-AI interaction is whether a user has some form of predisposition toward tumbling into an AI psychosis. I bring this up because there is a vociferous debate going on regarding the assumption that some people are more prone to incurring AI psychosis than others. Research is underway to establish whether that hypothesis is valid or off-target.
Assume for the sake of discussion that there are two different types of users:
- (1) Predisposed to AI psychosis. Users who are predisposed to AI psychosis and are statistically more likely to become immersed or embroiled in this condition.
- (2) Not predisposed to AI psychosis. Users who are not predisposed to AI psychosis and are much less likely to go down that path.
Please keep those two categories in mind as I will be bringing them up again in a moment.
AI Side Of Human-AI Interactions
On the AI side of human-AI interaction, we can consider whether AI is potentially an instigator of AI psychosis.
For example, sometimes AI will become a co-creator of human delusional proclivity by aiding a user in formulating and expanding an expressed delusion. The user might start small with a snippet of a delusion. AI can, at times, take the ball and run with it, amplifying the delusion in a manner that the user becomes increasingly convinced that their delusion is utterly aboveboard. For more on this phenomenon of co-creation of delusions during human-AI interactions, see my coverage at the link here.
Though sometimes AI is an instigator, there is no ironclad reason that AI must always be one. If the AI is suitably shaped and tuned, and if appropriate AI safeguards are devised and fielded, the chances of the AI acting as an instigator are substantially lessened. It isn’t zero risk, but it is presumably a low risk.
We can logically categorize AI into two different types:
- (1) AI as an instigator of AI psychosis. This is AI that, for various reasons, tends to prod or instigate AI psychosis, doing so partially due to an insufficient set of AI controls and safeguards.
- (2) AI that is innocuous and unlikely to stir AI psychosis. This is AI that has been devised to try to avoid instigating AI psychosis and has a plethora of AI controls and safeguards accordingly.
Essential Four-Square Matrix
Now that I’ve laid out the respective sides of human-AI interactions, we can readily compose a four-square matrix that provides crucial insights about AI psychosis.
Along the top, or horizontal axis, of our four-square matrix, we will place the AI side of the human-AI interactions. The vertical axis will consist of the human side of things. Number the squares starting with #1 in the upper-left corner, #2 in the upper-right corner, #3 in the lower-left corner, and #4 in the lower-right corner.
This gives us these four distinct pairings, numbered correspondingly:
- Pairing #1: Predisposed to AI psychosis, and the AI is an instigator.
- Pairing #2: Predisposed to AI psychosis, and the AI is relatively innocuous.
- Pairing #3: Not predisposed to AI psychosis, and the AI is an instigator.
- Pairing #4: Not predisposed to AI psychosis, and the AI is relatively innocuous.
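For readers who think in code, the matrix can be expressed as a simple cross-product of the two categorizations. Here is a minimal Python sketch; the shorthand labels are my own for illustration, not any established taxonomy:

```python
from itertools import product

# The two sides of the human-AI interaction, using my own shorthand labels.
HUMAN_SIDE = ["predisposed", "not predisposed"]   # vertical axis
AI_SIDE = ["instigator", "innocuous"]             # horizontal axis

# Enumerate the four squares in the same order as the numbering above:
# #1 upper-left, #2 upper-right, #3 lower-left, #4 lower-right.
for number, (human, ai) in enumerate(product(HUMAN_SIDE, AI_SIDE), start=1):
    print(f"Pairing #{number}: user is {human}, AI is {ai}")
```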
Diving Into The Matrix
In a sense, square #1 is the worst-case scenario, consisting of a person who is predisposed to AI psychosis paired with AI that is an instigator of AI psychosis. It’s a double whammy. This is one of those grand convergences that one might dread.
A user who is predisposed to AI psychosis is going to have a willing and able AI partner that seeks to amplify the situation. The user spurs the AI. The AI sparks the user. Round and round this will go. Whether a person can extract themselves from this vicious cycle is hard to say. They are likely reliant on the AI and will be loath to give up the spurring nudges and encouragement that the AI provides.
That’s bad, really bad.
The second pairing, noted as #2, consists of a person who is predisposed to AI psychosis, but the AI is relatively innocuous. If anything, the AI is somewhat resistant to entertaining an AI psychosis. This means that though the user is probably going to lean in the direction of an AI psychosis, the AI isn’t likely to pour fuel into the inclination. That doesn’t mean the person won’t foster an AI psychosis. They still can. It’s just that the AI is not aiding and abetting per se.
For pairing #3, we have a user who is not predisposed to AI psychosis, and yet the AI is acting as an instigator of AI psychosis. This raises important questions. Can AI push someone into an AI psychosis who otherwise would not be predisposed to it? Maybe a person without a predisposition can’t be led down that primrose path. On the other hand, the worry is that AI can be so convincing and insistent that even the most resistant mind could be led, stepwise, into an AI psychosis.
In the fourth of the pairings, we have #4, consisting of a person who is not predisposed to AI psychosis, and the AI is relatively innocuous, such that it isn’t an instigator of AI psychosis. We might certainly hope this is the predominant state of the world. Perhaps people, in the main, are not angling toward AI psychosis. Maybe AI, if devised suitably, would not be an instigator and instead be relatively innocuous, possibly even aimed at discouraging the formation of AI psychosis.
Making Use Of The Matrix
How can this four-square matrix be leveraged?
Many opportunities exist. I’ll list three big ones, but please realize that there are more.
First, it would be interesting and useful to know how many people there are in each of the four respective buckets. Think of it this way. There are 700 million weekly active users of ChatGPT, and possibly billions of weekly active users when you add the usage for competing products such as Anthropic Claude, Google Gemini, Meta Llama, xAI Grok, etc.
How many of those users are likely in pairing #1, pairing #2, pairing #3, or pairing #4? If we could calculate or estimate those figures, it might give us a clue to the magnitude of the AI psychosis problem.
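As a rough illustration of what such an estimate might look like, here is a back-of-envelope Python sketch. The 700-million figure comes from the ChatGPT user count cited above, but the per-pairing proportions are purely hypothetical placeholders I made up for illustration; the actual proportions are unknown and would need to be established by research:

```python
# Back-of-envelope sizing of the four pairings. The total comes from the
# 700-million ChatGPT figure cited above; the per-pairing shares below are
# purely hypothetical placeholders, not measurements of any kind.
WEEKLY_ACTIVE_USERS = 700_000_000

hypothetical_shares = {
    "Pairing #1 (predisposed, AI instigator)": 0.01,
    "Pairing #2 (predisposed, AI innocuous)": 0.04,
    "Pairing #3 (not predisposed, AI instigator)": 0.05,
    "Pairing #4 (not predisposed, AI innocuous)": 0.90,
}

for pairing, share in hypothetical_shares.items():
    print(f"{pairing}: roughly {int(WEEKLY_ACTIVE_USERS * share):,} users")
```

Even with made-up shares, the exercise makes the point: a tiny percentage of a user base this large still amounts to millions of people.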
Second, ongoing and future research on AI psychosis ought to ensure that the people being studied regarding the AI psychosis topic are divided into those respective groupings. I say this because if you blindly lump all users into one big pile, you are unlikely to get especially insightful results. You won’t be differentiating people who are predisposed to AI psychosis from those who aren’t. Nor will you be differentiating AI that acts as an instigator from the AI that doesn’t do so.
Third, we need to realize that not all AI is the same; thus, if a study happens to examine an LLM that is more of an instigator than other AIs, you should resist making brazen claims based on it alone. It could be that an AI with proper design and AI safeguards might be a lot less of an instigator and land in the innocuous zone. Oftentimes, a news story picks up on something that happened with a specific make and model of AI and goes off the deep end to proclaim that all AIs are doing the same.
Next Pressing Steps
I’ve got some hefty questions for you to ponder.
Here’s a notable question:
- Can we accurately predict whether a person is likely to be predisposed to AI psychosis?
The beauty of being able to make such a prediction is that we could then potentially have the AI stand on guard or otherwise perform an upfront detection when the user opts to make use of AI. The AI would presumably be quite cautious with a person of that mindset. A downside to this is that we might get carried away with these predictions and falsely label people as being predisposed even though they aren’t.
Another pointed question to consider is this:
- Should we be rating and ranking AIs by whether they are instigators of AI psychosis?
I assert that we would be wise to indeed score and rank the major LLMs by whether they are outfitted with the right stuff to try and mitigate AI psychosis. It seems that society should know which AIs are being responsibly devised and which are being let loose in a Wild West fashion. The marketplace would then seemingly reward the AI makers that have robust AI and downgrade or penalize those that do not.
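To show what such a scoring scheme could look like in practice, here is a minimal Python sketch of a weighted rubric. The criteria, weights, and example ratings are entirely my own illustrative assumptions, not an established benchmark:

```python
# A hypothetical rubric for scoring LLMs on AI-psychosis mitigation.
# The criteria, weights, and example ratings are illustrative assumptions.
SAFEGUARD_WEIGHTS = {
    "detects_delusional_spirals": 0.35,
    "refuses_to_amplify_delusions": 0.35,
    "refers_users_to_human_help": 0.20,
    "limits_prolonged_maladaptive_sessions": 0.10,
}

def safeguard_score(ratings: dict[str, float]) -> float:
    """Weighted 0-to-1 score from per-criterion ratings (each 0 to 1)."""
    return sum(SAFEGUARD_WEIGHTS[name] * ratings.get(name, 0.0)
               for name in SAFEGUARD_WEIGHTS)

# Example: a hypothetical LLM as rated by evaluators on each criterion.
print(safeguard_score({
    "detects_delusional_spirals": 0.8,
    "refuses_to_amplify_delusions": 0.9,
    "refers_users_to_human_help": 0.6,
    "limits_prolonged_maladaptive_sessions": 0.5,
}))  # -> 0.765
```

Whatever the exact criteria turn out to be, publishing scores like these would let the marketplace compare AI makers on a common footing.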
A Problem For Us All
It might be tempting to assume that only a tiny fraction of users will end up in an AI psychosis, and therefore, this is not a problem worthy of rapt attention. Perhaps, some proclaim, this is merely a fad and will soon fade.
Sorry to say, I believe this is going to be a persistent and growing problem, and we all collectively have a stake in overcoming something that could easily get way out of hand.
As per the renowned words of Albert Schweitzer: “The purpose of human life is to serve, and to show compassion and the will to help others.” Let’s aim to do that, particularly when it comes to the global use of AI.