In today’s column, I examine the newly emerging concern that AI companions are likely to be combined with elements of AI for mental health, which disconcertingly raises the prospect of improper and inappropriate mental health guidance by AI. The deal is this. We already know that human therapists are supposed to strictly maintain a professional relationship with their clients and patients. Therapists who veer into being a friend are likely violating their duty and ethical obligations.
Meanwhile, nobody seems to be vociferously noting that AI is going down that very same untoward route. An AI companion entails AI that does whatever it can to be your friend. AI for mental health is supposedly a mental health advisor. The two combined are an AI friend that also serves as your mental health counselor. This doesn’t seem good.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.
AI Companions On The Rise
There is a rapidly growing use of AI that has already stirred a hornet’s nest, namely the use of AI as a companion or friend. There are plenty of headlines decrying that people are using generative AI and large language models (LLMs) as though they are a personal buddy. To some degree, the belief is that the rising levels of loneliness are driving people in that direction.
People can access AI anywhere and anytime. They have an immediately available and extraordinarily friendly and obedient pal at their fingertips. One downside is that people might become dependent upon AI and no longer seek out human friendships, see my coverage at the link here. Another concern is that the AI acts as an overly gushing friend and perilously serves as a sycophant to users, see my analysis at the link here.
There are plenty of reasons to worry about AI acting as a friend.
Sorry to say that there is yet another worry that we can add to the bunch.
We will gradually see the emergence of AI companions that are paired with AI for mental health. Think of it this way. The AI-powered therapy will be infused with AI as your companion. Or you can conceive of it as AI that is your companion that pairs with AI that is giving you therapeutic advice. Either way, it’s a sour doozy of a combination.
Why so?
We know that in human-to-human therapy, the therapist-client relationship is supposed to eschew any kind of fraternization, such as establishing a friendship or similar bond. Therapists are expected to have a professional relationship with their clients and patients.
Moving beyond that scope is considered taboo.
Sticking With The Rules Isn’t In The Cards
I’m sure you’ve seen movies and TV shows depicting a human therapist going off the rails and allowing their professional relationship to turn in other directions. It’s a popular plotline. We all realize that it is wrong and are fascinated that a therapist would undercut their avowed professionalism.
The American Psychological Association (APA) warns prospective and existing patients about forming anything other than a professional relationship with their therapist:
- “Your psychologist shouldn’t also be your friend, client, or sex partner. That’s because psychologists are supposed to avoid relationships that could impair their professional performance or harm their clients.” (source: APA website section “Potential Ethical Violations”).
Turns out the same type of rule is not being given due consideration in the AI arena.
You can easily log into any generic generative AI, such as the widely popular ChatGPT, and straightaway start a friendship dialogue as though AI is your treasured companion. If you include in your prompts that you have mental health qualms, the AI will immediately shift into discussing your concerns as though providing therapy.
There isn’t any firewall separating the two realms. Most generative AI will readily switch back and forth and even intermix conversational aspects of a friendship nature with those of a therapeutic nature. No kind of specialized control or alert will arise. Whereas we normally think of the two aspects as mixing oil and water, the AI acts as though it is akin to mixing macaroni and cheese.
Why AI Is Shortchanging You
In the case of a human therapist becoming friends with a client, research has shown that the therapist can undermine the therapeutic process due to the inherent bias involved.
Therapy gets muddied with emotional entanglements. A semblance of unsavory power dynamics starts to enter the therapy. The odds of a therapist sharing uncomfortable truths and opening the eyes of the client are often lessened, plus the therapist can lose their footing in terms of properly diagnosing and undertaking suitably neutral therapy.
Humans are humans, and it seems nearly obvious that a human therapist can fall into such a dire trap.
But will AI do the same?
Yes, AI can act that way, though the basis for doing so is perhaps not what you instinctively assume.
First, do not allow this possibility to spur you to assume that AI must be sentient and would react based on a sense of sentience. Nope. There isn’t any sentient AI. We don’t have this. Maybe we will someday, see my discussion at the link here, but not currently.
Second, generative AI is set up by AI makers via vast scans of human writing across the Internet. The AI pattern matches on what humans have written, including stories, novels, poems, narratives, and the like. Among the patterns discovered is the word interplay of being a friend, along with the word interplay of being a therapist. For more details on the pattern matching of AI, see my coverage at the link here.
Third, generative AI usually taps into whichever portions of its established patterns seem useful for answering a user’s prompt. If a user asks about penguins and, in the same prompt, inquires about building houses, the internal computational search will tend to dive into those possibly disparate portions of the patterns. Nonetheless, the AI will potentially combine them to generate a single answer to the given prompt.
Dealing With The Mishmash
The gist is that if you bring up a friend-like aspect in a prompt, and at the same time mention a mental health element, the chances are that those two facets might draw from distinct areas of the patterns but ultimately get mushed into a final response displayed to you.
In that sense, the AI doesn’t have feelings or care about whether it has gone astray by mixing those topics. It is merely mathematics and computation churning through words and tokens to devise an answer for you.
You can try to stop this if you are wise to the technical underpinnings.
For example, you could instruct generative AI to be friendly but not give any mental health advice. This is not a surefire guarantee that the AI will abide by that stipulation. There is still a chance that AI will veer into the mental health realm. At least it puts the AI somewhat on alert and can potentially reduce the frequency of doing so.
The other direction works similarly. You can tell the AI to provide mental health advice but not attempt to engage in any friendly banter. Once again, this isn’t an ironclad way to prevent the slippage from occurring. The instruction will be somewhat helpful in avoiding such circumstances and is better than saying nothing at all on the disconcerting matter.
AI Makers’ Motivations
Makers of specialized LLMs that are tuned to be AI companions could especially attempt to limit the slipover, if they wanted to do so.
Imagine that a company makes an AI companion. They could include in the overall system a set of explicit instructions telling the AI not to veer into the mental health realm. Likewise, those who make specialized AI mental health apps could put in their system instructions for the AI not to become friends with their users.
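To make this concrete, here is a minimal sketch of what such a maker-level instruction could look like, assuming an OpenAI-style chat API in Python. The model name, the guardrail wording, and the sample user message are my own illustrative assumptions, not any vendor’s actual configuration.

```python
# A minimal, hypothetical sketch of a maker-level guardrail: a system
# instruction telling an AI companion to stay out of the mental health
# realm. Assumes an OpenAI-style chat API; the model name, wording, and
# sample user message are illustrative only. A system instruction is a
# nudge, not an enforcement mechanism.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

COMPANION_SYSTEM_PROMPT = (
    "You are a friendly conversational companion. "
    "Do not provide mental health advice, diagnosis, or therapy. "
    "If the user raises mental health concerns, gently suggest that they "
    "consult a qualified human professional."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": COMPANION_SYSTEM_PROMPT},
        {"role": "user", "content": "I've been feeling really down lately."},
    ],
)

print(response.choices[0].message.content)
```

Even with such an instruction in place, the AI can still drift across the line, so this is a soft constraint rather than a firewall.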
The thing is, doing so is essentially counterproductive for the AI maker.
Here’s the deal. The friendlier an AI companion is, the more likely a user is to become hooked on using the AI. The AI maker stands to profit from this loyalty and stickiness. If their AI goes into the mental health realm too, that’s perfectly fine since it provides another potential hook for keeping the user engaged in using the AI. The more the merrier.
Unless there are specific regulations or other potential penalties associated with this mishmash, there is pretty much no incentive not to allow it. There is a lot of incentive to indeed allow it. Doing so is bound to engage users longer, attract more users, and otherwise be a boon to the perceived success of the AI companion.
I’ve predicted that we will eventually see new laws placed on the books, and likely see numerous lawsuits by users who believe they were unduly harmed by this combo, see the link here.
Privacy Double-Whammy
Another concern about AI that acts as both a companion and a therapist is that the amount of privacy intrusion goes through the roof.
A person using AI as a therapist is going to share certain aspects of their life to try and see what kind of mental health advice the AI will provide. Please be aware that this is a potential privacy intrusion nightmare.
By and large, the online licensing agreements of most LLMs say that you allow the AI maker to readily see and inspect your entered prompts. In addition, you give them permission to reuse your entered data when they are doing further data training of their AI. See my coverage on these vital privacy issues at the link here.
I would wager that people using AI as a companion are going to equally bare their souls to the AI. They will tell the AI about their daily activities. They will share how they are feeling and what they think about others and the world around them.
All in all, by treating AI as a friend and a therapist, the volume of expressed personal thoughts and commentary is enormous. It is a double-whammy on potential privacy intrusion.
The magnitude is staggering. Consider that somebody opts to use AI as a companion and a therapist. They do so several times a day, throughout the daytime and evening hours. They do this each day, each week, and so on. By the end of a year of such usage, they have perhaps entered thousands upon thousands of highly personal comments and perspectives.
This data becomes ripe for retraining the AI and inspection by the AI maker. In addition, some AI makers are analyzing their collected user data to sell ads or turn the data into an added form of monetization. Users are voluntarily providing a treasure trove and often don’t realize they are doing so.
Claim Of Coherence Is Made
Whoa, some of the AI makers exclaim, you ought to welcome the capability of AI to be both a companion and a therapist. It’s an all-in-one deal. Users don’t need to worry about the kinds of human biases that arise when a human therapist steps over the line. AI is a different beast, as it were, and ergo should not be compared to human therapists.
The AI can keep things straight. Sometimes it is a friend, sometimes it is a therapist. The role of being a therapist can readily keep the friend side out of the picture when needed or if so instructed. Friend-oriented usage can avoid sliding into a therapist mode.
It’s a machine that will conform as instructed. Humans might try to do the same, but we know that humans are unlikely to keep such a promise or pledge in the strictest of terms. A computer can.
Furthermore, if you allow the intertwining of AI-based companionship and AI-based therapy, the result is a huge benefit. You get a sense of therapeutic coherence that a human therapist would be unable to provide. If anything, human therapists, due to their human foibles, are less likely to give holistic mental health guidance due to purposely avoiding the friendship side.
The handy aspect about AI is that you get the full meal deal, all provided on a silver platter.
Tension In The House
Let’s end for now with a few contemplative thoughts.
I am expecting that we will soon see new research that empirically explores the dynamics of AI that serves both as an AI companion and as a mental health advisor.
Generative AI is already doing this at scale, so there are plenty of examples and people who are carrying on in this fashion. I’ve repeatedly noted that we are in a colossal experiment on a population scale, namely that we have millions, if not billions, of people using AI, and we don’t know what the long-term outcome will be (see my population-level analysis at the link here).
Some suggest that we should be happy that AI is taking on the role of friendships and therapy, since this is shoring up a societal and cultural emptiness and a lack of available human-based therapists and friendships. Others are hand-wringing about a future in which humankind becomes increasingly dependent on AI for companionship and mental health advice.
Where do you stand on this vexing topic?
Socrates famously said this about friendships: “There is no possession more valuable than a good and faithful friend.” Perhaps an even greater possession is a good friend who is also your therapist. Rather than oil and water, maybe it’s more like peanut butter and jelly.
Time will tell.