In today’s column, I examine the rising trend of so-called proactive AI. The gist is that while most of the conventional generative AI and large language models (LLMs) tend to be principally reactive, meaning that they wait to respond until you’ve told them to actively do something, the hot new trend is to devise AI that is proactive. This is particularly well-illustrated in the domain of proffering AI-based mental health guidance.
In short, as a valuable use case, AI proactively reaches out to a user to dynamically provide mental health insights or advice, doing so on an AI-initiating basis.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
Background On AI For Mental Health
First, I’d like to set the stage on how generative AI and LLMs are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations. The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets, see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
Compared to using a human therapist, using AI is a breeze and readily undertaken.
When I refer to generative AI and LLMs, please know that there are generic versions versus non-generic versions of such AI. Generic AI is used for all kinds of everyday tasks, and just so happens to also encompass providing a semblance of mental health advice. On the other hand, there are customized AIs specifically for performing therapy; see my discussion at the link here. I’m going to primarily be discussing generic generative AI, though many of these points can involve the specialized marketplace, too.
The Advent Of Proactive AI
When you use generative AI, the usual approach consists of entering a prompt that tells the AI what you want it to do. For example, you might tell the AI to answer a vexing question, or perhaps you opt to ask the AI to write a story for you. The AI generally complies and responds accordingly. This keeps going, whereby you continue to bring something up, and the AI responds.
Let’s refer to this as a form of reactive AI. The AI is always waiting to respond until you’ve prodded the AI. It reacts to your entreaties.
We are gradually shifting toward proactive AI.
Proactive AI reaches out to you. It doesn’t necessarily wait until you’ve entered a prompt. I want to clarify that proactive AI can still perform reactive responsiveness. You can proceed on a back-and-forth basis and use a traditional reactive mode, if so desired. The proactivity is a bonus.
The beauty of proactive AI is that it can take the first step.
Conventional Reactive AI
I will unpack proactive AI for you and do so in the context of AI-driven mental health advice. Of course, proactive AI can work in nearly any domain or realm of interest. It isn’t limited to the mental health sphere.
Let’s suppose that you are using generative AI and have a mental health consideration that you want to discuss with the AI.
The typical avenue would be that you log into AI and bring up your interest:
- My entered prompt: “I want to chat about some mental health aspects that I think you might be able to help me with. Is that something you can do?”
- Generative AI response: “Yes, I can absolutely do that. Thanks for letting me know. I’m not a therapist, but I can listen, reflect with you, share strategies, and point out resources that might help. Would you like to tell me a bit about what’s been on your mind lately?”
As you can see, I took the first step.
The AI has responded to my request. It is essentially reactive. When I say the word “reactive,” some people might interpret that particular word in a bad way, namely that being reactive is akin to being angry or upset. I’m not using the word in that manner. Reactive is pretty much the same here as being responsive. The AI is responding.
Just wanted to clear that up.
Going Into Proactive AI Mode
I proceeded to tell the AI various pointed aspects about some mental health considerations, and it responded to each of my prompts. The dialogue was merely the customary back-and-forth conversational format that you witness all the time.
The usual cycle applies.
I bring up something, the AI responds.
Rinse and repeat.
Let’s now shift the AI into a proactive mode.
There are numerous ways to invoke this. You can use custom instructions, see my explanation at the link here, or you can use other features that some LLMs have available for use, see my discussion at the link here.
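To make the idea concrete, here is a minimal sketch of how persistent custom instructions might flip an LLM into a proactive posture. The payload shape and the `build_messages` helper are hypothetical, since each vendor’s custom-instructions or system-prompt feature differs; the core idea is simply a standing instruction that applies to every session, including a fresh log-in where the user hasn’t typed anything yet.

```python
from typing import Optional

# Hypothetical persistent instructions that nudge the model toward
# proactively checking in, while preserving normal reactive behavior.
PROACTIVE_INSTRUCTIONS = {
    "role": "system",
    "content": (
        "At the start of each session, greet the user and, when relevant, "
        "proactively check in on mental well-being topics the user has "
        "previously raised. Remain fully responsive to direct requests."
    ),
}

def build_messages(user_prompt: Optional[str] = None) -> list:
    """Prepend the standing proactive instructions to a conversation.

    On a fresh log-in (no user prompt yet), the message list contains
    only the system instructions, inviting the model to make the
    first move.
    """
    messages = [PROACTIVE_INSTRUCTIONS]
    if user_prompt:
        messages.append({"role": "user", "content": user_prompt})
    return messages
```

In the fresh log-in case, the conversation consists solely of the standing instructions, which is what lets the AI speak first.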
Remember that we are going to focus on proactive AI in a mental health advisory capacity.
Upon First-Ever Logging In
After switching the AI into a proactive mode and emphasizing mental health proactivity, I logged out. Then, I did a full refresh so that it was as though I had never used the AI previously. I wanted to see what would happen when I pretended to be a brand-new user.
Here we go:
- Generative AI mentions mental health at first-ever log-in: “Welcome, it’s great to be able to chat with you. If you are feeling blue or have anything that’s bothering your mental well-being, let me know. I am eager to talk through whatever is going on and can potentially provide helpful advice or insights.”
Observe that I didn’t even enter a prompt. The AI immediately, upon logging in, made the first move and brought up the topic of mental health.
That’s an example of proactive AI.
When Returning To The AI
I carried on a dialogue with the AI, covering some mental health topics, but also asking about how to fix my car, proper ways to cook eggs, and got some advice about preparing for an upcoming hike. It was an eclectic series of chats that covered a wide array of topics.
I then logged out, but my account was still active. I wanted to see what would happen when I logged back in.
Here’s what occurred:
- Generative AI brings up mental health at a returning log-in: “Welcome back. The last time you logged in, you mentioned feeling overwhelmed by work deadlines. Before we dive into whatever else you want to chat about today, would you like to check in on how you’re managing your stress this week?”
Notice that the AI has remembered that I previously mentioned being overwhelmed with work deadlines. That was merely one of many topics I had covered with the AI. Upon logging back in, the AI opted to bring up a specific mental health consideration, doing so proactively.
I didn’t tell the AI to begin the new conversation that way. It took that initiative.
In The Midst Of Conversation
Another proactive avenue consists of AI inserting the mental health topic into a conversation. This is done selectively by the AI and will arise from time to time.
Here’s an example of this proactivity:
- My entered prompt: “Can you help me plan my vacation itinerary?”
- Generative AI weaves in a mental health response: “Absolutely, I can readily draft a day-by-day plan. And since you shared previously that vacation sometimes makes you anxious, do you want me to include some relaxation and mental stress-reducing strategies you can try while traveling?”
I entered a prompt that asked the AI to assist in planning my vacation. The AI said it could do so. In addition, the AI has proactively brought up the fact that I had previously identified that taking vacations sometimes makes me anxious (probably because work is always on my mind). The AI then wove into the response my prior noted aspect.
One issue with this mid-conversational insertion is that it could be irritating and possibly be off base. Even if I had previously mentioned the angst, I didn’t necessarily want the AI to bring it back to the forefront. I might get upset about this. Furthermore, it might seem eerie as though the AI is secretly tracking me (well, in truth, it is).
Proactivity can be advantageous, but it also has downsides.
Text Messages To The User
Most contemporary AI systems have access to APIs and can connect to the world outside of the AI itself (see my explanation at the link here). Thus, you don’t have to be logged into AI to be in contact with the AI. The AI can reach you via external channels.
Here is an example of AI sending me a text message:
- Generative AI sends this mental health text message: “Hi! You mentioned that sometimes you wish that you could remember to take a break and not always be stressed out. If now is a good time to do so, perhaps take a quick mindfulness break. Let me know if you’d like me to help guide you in doing so.”
The message appeared on my smartphone.
Again, I didn’t tell the AI to send me that message. It opted to do so. It wasn’t entirely out of the blue. You can see that the AI has indicated that I wanted to find ways to keep from getting stressed out. Taking a break could be of use. The AI leveraged that point.
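A rough sketch of the server-side logic behind such a proactive text message might look like the following. The `send_sms` callable is a hypothetical stand-in for a real SMS gateway, and `UserProfile` is an invented data structure; the point is the decision logic, namely that the AI reaches out only when the user has opted in and previously mentioned a relevant concern.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Hypothetical record of a user's contact info and noted concerns."""
    phone: str
    opted_in: bool = False
    noted_concerns: list = field(default_factory=list)

def maybe_send_checkin(user: UserProfile, send_sms) -> bool:
    """Send a proactive check-in only if the user opted in and has
    previously mentioned a concern; return whether a message was sent."""
    if not user.opted_in or not user.noted_concerns:
        return False
    concern = user.noted_concerns[-1]
    send_sms(
        user.phone,
        f"Hi! You mentioned {concern}. "
        "Would a quick mindfulness break help right now?",
    )
    return True
```

Gating the outreach on an explicit opt-in is one straightforward way to keep the proactivity from feeling intrusive.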
What do you think of AI reaching someone beyond an engaged conversation within the confines of the AI?
Some would say this is spooky, while others would insist it is nifty, akin to having a handy assistant that aims to be proactive and helpful to you.
You decide.
Sending An Email And Using Calendars
A similar means of utilizing an external channel would be using email.
Here, the AI sent me an email:
- Generative AI sends this mental health email message: “Subject line: Checking in on your progress with relaxation goals. Last week, you said that you aimed to perform a mental relaxation exercise at least three times during this week. How has it gone so far? Send me a reply to let me know how things are going.”
Another access point would be to use my online calendar:
- Generative AI inserts a calendar entry regarding mental health: “Calendar Integration. At 10:00 AM, insert this line – ‘Quick Stretch Break: You said that you get mentally overwhelmed in the mornings. So, take a moment and stretch, letting your mind roam free. Log in to AI if further assistance is needed.’ Post this and notify the user accordingly.”
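The calendar access point can be sketched with the iCalendar format (RFC 5545), which most calendar apps can import. The function below builds a minimal event carrying a reminder like the one above; the plumbing that would actually deliver the `.ics` data to the user’s calendar is omitted, and the event wording is illustrative.

```python
from datetime import datetime

def make_break_event(start: datetime, note: str) -> str:
    """Return a minimal iCalendar (RFC 5545) VEVENT wrapped in a
    VCALENDAR, suitable for import into most calendar apps."""
    stamp = start.strftime("%Y%m%dT%H%M%S")
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "BEGIN:VEVENT",
        f"DTSTART:{stamp}",
        "SUMMARY:Quick Stretch Break",
        f"DESCRIPTION:{note}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])
```

Note that iCalendar requires CRLF line endings, hence the `\r\n` join.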
As noted, you can either be elated at these contacts or be dismayed at them.
Leaning Into Other Devices
The external channels can include a variety of other devices. Suppose you have Siri or Alexa, and the AI has an API that connects to those systems.
Here is an example of how the AI could communicate with you via such devices:
- Generative AI connects to Siri/Alexa to engage in mental health aspects: “Hello! This is AI. I wanted to check in on your mental well-being. How are you feeling? Go ahead and connect with AI if you want to discuss how things are coming along.”
The same applies to accessing your wearable smartwatch, your smart eyeglasses, and other such edgy and state-of-the-art equipment:
- Generative AI connects to a wearable smartwatch or eyeglasses to engage in mental health aspects on an event-triggering basis: “I noticed your heart rate spiked a few minutes ago. Are you alright? Want me to guide you through a quick grounding technique?”
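The event-triggering basis can be sketched as a simple rule over a stream of heart-rate readings: fire a check-in when the newest reading spikes well above the recent baseline. The window and ratio thresholds below are illustrative only and carry no medical meaning; a real wearable integration would be far more careful.

```python
from statistics import mean
from typing import Optional

def spike_detected(readings: list, window: int = 5,
                   ratio: float = 1.4) -> bool:
    """True if the newest reading exceeds the rolling baseline
    (mean of the prior `window` readings) by the given ratio."""
    if len(readings) <= window:
        return False
    baseline = mean(readings[-window - 1:-1])
    return readings[-1] > baseline * ratio

def checkin_message(readings: list) -> Optional[str]:
    """Proactively offer a grounding technique only when a spike fires."""
    if spike_detected(readings):
        return ("I noticed your heart rate spiked a few minutes ago. "
                "Are you alright? Want a quick grounding technique?")
    return None
```

Keeping the trigger conservative (a sizable jump over a rolling baseline, not every fluctuation) is one way to reduce false-alarm nudges.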
In a sense, the AI can always be “thinking” of you (computationally) and reaching out to give you mental health tips or commentary.
All done proactively.
Determining The ROI
Whoa, some might bellow, the AI is almost stalking the user. It is a sci-fi scenario of the AI stepping into every inch of our lives. Big Brother is here.
There are certainly many negatives afoot.
Questions about privacy and proper boundaries arise. In the case of mental health, all these nudges could seem rather invasive. Perhaps the AI will trigger something untoward. There isn’t a free lunch with this kind of proactivity.
Another qualm is that the AI might be making incorrect assumptions. Maybe the mental health aspects are being taken out of context or being overly exaggerated. Suddenly, a user is bombarded as though they have serious mental health conditions. The interventions are inappropriate.
AI makers need to be cautious since there are new laws emerging that restrict the use of AI for mental health purposes; see my coverage of the law in Illinois, which has drawn support and also spurred controversy, at the link here. Allowing AI to do these types of proactive acts, even if the user requests it, might be over the line and interpreted as being illegal.
Some also worry that AI reaching out like this is going to create a sense of severe dependency in humans. People will get used to the AI proactivity. They expect it to occur. When it doesn’t happen, they find themselves adrift. How can they run their life if the AI isn’t proactively guiding them?
There is dire handwringing that humans will become veritably “enslaved” as co-dependents to AI.
Proactive Is In Our Hands
For now, the use of proactive AI is in our hands. You decide whether you want this to happen. It is possible to switch it on. It is possible to tune it. It is possible to switch it off.
One supposes that if we are denied the ability to control the proactive AI, that’s when things will really start to fall over the cliff. There is a huge difference between on-demand as-desired proactive AI and unsolicited in-your-face proactive AI.
A final thought for now.
Marcus Aurelius famously made this remark: “Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present.” Let’s all hope that this applies equally to the emergence of proactive AI.
Take action, be proactive, and consider how we can best devise proactive AI.
