In today’s column, I examine an advanced use of generative AI and large language models (LLMs) that entails therapists and other mental health professionals making use of so-called digital twins that are reflective of their respective clients and patients.
The deal is this. Via the use of personas in generative AI, a feature that nearly all LLMs inherently include, it is conceivable that you could devise a persona that somewhat matches and reflects a client or patient who is undergoing therapy. This is considered a digital twin, or more specifically, a medical digital twin.
Yes, perhaps unnervingly, it seems possible to construct an AI-based simulated version of a client or patient that a therapist could then use to gauge potential responses and reactions to a planned line of psychological analyses and therapeutics.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
Therapists And AI Usage
Many therapists and mental health professionals are opting to integrate AI into their practices and overtly use the AI as a therapeutic adjunct for their clients and patients (see my coverage at the link here).
Even those therapists and mental health professionals who don’t go down the route of incorporating AI are bound to encounter clients and patients who are doing so. Those clients and patients will often walk in the door with preconceived beliefs about how their therapy should go or is going, spurred and prodded by what AI has told them.
In this sense, one way or another, therapists and mental health professionals are going to ultimately be impacted by the growing use of generative AI and LLMs. Right now, there are already around 700 million weekly active users of ChatGPT. You might find it of notable interest that the top-ranked use by the public of contemporary generative AI and LLMs is to consult with the AI on mental health matters, see my coverage at the link here.
If that kind of AI can do a proper job on this monumental task, then the world will be a lot better off. Many people cannot otherwise afford or gain access to human therapists, but access to generative AI is generally plentiful in comparison. It could be that such AI will greatly benefit the mental status of humankind. A dour counterargument is that such AI might undercut mental health, doing so on a massive population-level scale, see my discussion at the link here.
Personas Are Coming To The Fore
Let’s shift gears and focus on the use of AI-based personas.
I’ve repeatedly emphasized in my writing and talks about generative AI that one of the most underutilized and least known pieces of quite useful functionality is the capability of forming personas in the AI (see the link here). You can tell the AI to pretend to be a known person, such as a celebrity or historical figure, and the AI will attempt to do so.
In the context of mental health, I showcased how telling AI to simulate Sigmund Freud can be a useful learning tool for mental health professionals, see the link here. As a mental health professional, you ought to give serious consideration to making use of personas for your own self-training and personal refinement.
For example, you might craft a persona that will pretend to be a person with deep depression. You could then use this persona to hone your therapeutic prowess regarding depression in patients and clients. It can be quite useful. Plus, there is no danger since it is just AI. You can try out various avenues to gauge what works and doesn’t work. No harm, no foul.
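To make this tangible, here is a minimal sketch of how such a practice persona could be set up in code, using the OpenAI Python SDK as one illustrative option; the model name and the persona wording are placeholder assumptions of mine, not a clinically vetted template.

```python
# A minimal sketch: instantiating a practice persona via a system prompt.
# Assumes the OpenAI Python SDK; the model name and persona wording are
# illustrative placeholders, not a clinically validated template.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA_INSTRUCTIONS = (
    "You are role-playing a fictional adult client experiencing persistent "
    "deep depression: low energy, flattened affect, guarded responses, and "
    "occasional hopeless statements. Stay in character as this client. "
    "Do not provide therapy advice; respond only as the client would."
)

def ask_persona(history: list[dict], therapist_line: str) -> str:
    """Send the therapist's next line to the persona and return its reply."""
    history.append({"role": "user", "content": therapist_line})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "system", "content": PERSONA_INSTRUCTIONS}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Example practice exchange for self-training.
session: list[dict] = []
print(ask_persona(session, "How have you been sleeping this past week?"))
```

The key design point is that the persona lives entirely in the system prompt, so you can revise the characterization between practice runs without touching anything else.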
For my suggestions on how to write prompts that suitably create or cast personas, see the link here.
Digital Twins And Humans
There is specialized parlance in the tech field that has been around for many years and refers to the concept and practice of using computers to simulate a real object or entity. The parlance is that you are crafting and making use of a digital twin. This became popular when machinery used on factory floors could be modeled digitally.
Why would a digital model or simulation of a factory assembly machine be useful?
Easy-peasy, there are lots of crucial benefits.
One is that before you even construct the machine, you can try it out digitally. You can gain confidence that the machine will work suitably once it is constructed and put into operation. Another advantage is that you can readily make lengthy runs of the digital twin and predict when the real version might break down. This gives a heads-up to the maintenance crew working on the factory operations. They get estimates of the likely time at which the machine will potentially start to degrade.
Recently, there has been a realization that digital twins can be used in other, more creative ways, such as modeling or simulating human beings. This is often referred to as a medical digital twin (note that other names and phrases are sometimes used too).
Medical Digital Twins
In a research article entitled “Toward Mechanistic Medical Digital Twins” by Reinhard Laubenbacher, Fred Adler, Gary An, Filippo Castiglione, Stephen Eubank, Luis L. Fonseca, James Glazier, Tomas Helikar, Marti Jett-Tilton, Denise Kirschner, Paul Macklin, Borna Mehrad, Beth Moore, Virginia Pasour, Ilya Shmulevich, Amber Smith, Isabel Voigt, Thomas E. Yankeelov, and Tjalf Ziemssen, Frontiers in Digital Health, March 7, 2024, these salient points were made (excerpts):
- “A fundamental challenge for personalized medicine is to capture enough of the complexity of an individual patient to determine an optimal way to keep them healthy or restore their health.”
- “This will require personalized computational models of sufficient resolution and with enough mechanistic information to provide actionable information to the clinician.”
- “Such personalized models are increasingly referred to as medical digital twins.”
- “We do not have a complete theoretical understanding of biological systems, providing a list of general principles that could form the basis of computational models, as we do for physical systems. Two other characteristic features of biological systems are genotypic and phenotypic heterogeneity across individuals and stochasticity in system dynamics.”
- “Digital twin technology for health applications is still in its infancy, and extensive research and development is required.”
Please note that as emphasized above, the advent of medical digital twins is still in its early days. There is plenty of controversy associated with the topic. One major qualm is that with a factory floor machine, you can pretty much model every physical and mechanical aspect, but the same can’t be said about modeling human beings. At least not yet.
Lucky or not, we seem to be more complex than everyday machines. Score a point for humankind.
Personas As Digital Twins
When you think about devising a medical digital twin, there are customarily two major elements involved:
- (1) The Body: Physiological human dynamics that need to be modeled.
- (2) The Mind: Mental human dynamics that need to be modeled.
Some would insist that you cannot adequately model the mind without also modeling the body. It’s that classic mind-body debate; see my analysis at the link here.
If you dogmatically believe that a mind is unable to be sufficiently modeled without equally modeling the body, I guess that the rest of this discussion is going to give you heartburn. Sorry about that.
We are going to make a brash assumption that you can use generative AI to aid in crafting a kind of model or simulation of a person’s mind, at least to the extent that the AI will seek to exhibit the personality traits and overall psychological characteristics of the person. So, in that sense, we are going to pursue a medical digital twin that only focuses on the second of the two major elements.
Does that mean that the AI-based digital twin is missing a duality ingredient that wholly undercuts the effort?
I’m going to say that it doesn’t, but you are welcome to take the posture that it does. We can amicably agree to disagree. On a related facet, there are advocates of medical digital twins who would insist that a medical digital twin must encompass the bodily aspects, else it isn’t a medical digital twin at all. In that case, I guess we might need to drop the word “medical” from this type of digital twin.
Just wanted to give you a heads-up on these controversies.
Personas Of Your Clients Or Patients
Moving on, let’s further consider the avenue of creating a digital twin of your client or patient so that you can utilize the AI to test out your planned line of therapy and treatment.
The first step involves collecting data about the person. The odds are that a therapist will already have obtained an extensive history associated with a client or patient. Those notes and other documents could be used to feed the AI. The idea is that you will provide that data to the generative AI, and it will pattern-match and craft a persona accordingly. You might also include transcripts of your sessions. Feeding this data into AI is often done via a technique known as retrieval-augmented generation (RAG), see my explanation at the link here.
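For the technically inclined, here is a minimal sketch of the RAG pattern under discussion, assuming the OpenAI Python SDK; the notes file name, model names, and chunking choices are my own illustrative assumptions.

```python
# A minimal sketch of the RAG pattern for persona grounding: chunk the
# (properly consented and de-identified) notes, embed them, then retrieve
# the most relevant chunks to include in the persona's system prompt.
# The file name and embedding model name are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Return one embedding vector per input text."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

# 1) Index: split session notes into paragraph-sized chunks and embed them.
chunks = [p.strip() for p in open("deidentified_notes.txt").read().split("\n\n") if p.strip()]
chunk_vectors = embed(chunks)

# 2) Retrieve: embed the query and rank chunks by cosine similarity.
def retrieve(query: str, k: int = 3) -> list[str]:
    q = embed([query])[0]
    sims = chunk_vectors @ q / (np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

# 3) Ground the persona prompt with the retrieved context.
context = "\n".join(retrieve("coping style under stress"))
persona_prompt = f"Role-play a client consistent with these notes:\n{context}"
```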
Please be very cautious in taking this type of action.
Really, really, really cautious.
Many therapists are already willy-nilly entering data about clients and patients into off-the-shelf publicly available LLMs. The problem is that there is almost no guarantee of data privacy with these AIs, and you could readily be violating confidentiality and HIPAA provisions. You might also need to secure consent from the client or patient, depending on various factors at play. For more, see my discussion at the link here and the link here.
Make sure to consult with your attorney on these serious matters.
One approach is to stringently anonymize the data so that the client or patient is unrecognizable via the data you have entered. It would be as though you are simply creating a generic persona from scratch. Whether that will pass a legal test is something your legal counsel can advise you on.
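As a rough illustration only, here is what a naive redaction pass might look like; simple pattern scrubbing of this sort does not come close to satisfying HIPAA de-identification standards, and the patterns and placeholder tokens are my own assumptions.

```python
# A deliberately simple illustration of scrubbing obvious identifiers before
# any text leaves your control. Regex redaction like this is NOT sufficient
# for HIPAA de-identification; it only shows the general idea. The patterns
# and placeholder tokens here are illustrative assumptions.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # SSN-like numbers
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),  # phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),   # dates
]

def scrub(text: str, known_names: list[str]) -> str:
    """Replace known names and obvious identifiers with placeholder tokens."""
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

note = "Met with Jane Roe on 3/14/2025; callback 555-867-5309."
print(scrub(note, known_names=["Jane Roe"]))
# -> "Met with [NAME] on [DATE]; callback [PHONE]."
```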
Another approach is to set up a secure private version of an LLM, but that, too, can have legal wrinkles.
More On Personas As Digital Twins
Yet another approach is to merely and shallowly describe the persona based on your overall impression of the person.
This is somewhat similar to my earlier point that you can use personas by simply entering a prompt that the devised persona is supposed to represent a person with depression. That’s a vague indication and would seem untethered to a specific person. The downside, of course, is that the surface-level persona might not be of much help to you.
What are you going to do with whatever persona you craft?
You could try to figure out the emotional triggers of the person, as represented via the persona. What kind of coping style do they have? How does their coping mechanism react to the therapy you have in mind? All sorts of therapy-oriented strategies and tactics can be explored and assessed.
In essence, you are trying out different interventions on the persona, i.e., the digital twin. Maybe you are mulling over variations of CBT techniques and want to land on a particular approach. Perhaps you often use exposure therapy and are unsure of how that will go over with the client or patient.
This provides a no-risk means of determining your therapy in a simulated environment and prepares you for sessions with the actual person.
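Here is a minimal sketch of that exploratory loop, building on the ask_persona() helper sketched earlier; the candidate intervention wordings are illustrative placeholders, not clinical recommendations.

```python
# A minimal sketch of trying candidate interventions against the persona and
# logging its reactions for later review. Builds on the ask_persona() helper
# sketched earlier; the intervention wordings are illustrative placeholders.
candidate_interventions = [
    "Let's try reframing that thought: what evidence supports it?",       # CBT-style
    "Could we schedule one small, pleasant activity for tomorrow?",       # behavioral activation
    "Would you describe the situation you've been avoiding, step by step?",  # exposure-oriented
]

transcript = []
for line in candidate_interventions:
    fresh_session: list[dict] = []  # reset so each probe starts from the same baseline
    reply = ask_persona(fresh_session, line)
    transcript.append((line, reply))

# Review the persona's reactions side by side before the real session.
for intervention, reaction in transcript:
    print(f"PROBE: {intervention}\nPERSONA: {reaction}\n")
```

Resetting the conversation before each probe is a deliberate choice here, so the reactions can be compared from a common baseline rather than being colored by the preceding probe.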
Don’t Fall For The Persona
I trust and hope that any therapist or mental health professional going the route of using a persona as a digital twin is going to keep their wits about them. Ordinary users of AI who use personas can readily go off the deep end and believe that the persona is real.
Do not let that same fate befall you.
The persona is merely the persona. Period, end of story. You cannot assume that the persona is giving you an accurate reading of the person. The AI could be far afield in terms of how the person will actually respond and react. Expect that the AI will almost certainly overrepresent some traits, underrepresent other traits, and be convincing as it does so.
Convincingness is the trick involved. Contemporary generative AI is so seemingly fluent that you are drawn into a mental trap of believability. Inside your head, you might hear this internal voice: “It must be showing me the true inner psyche of my client or patient! The AI is working miracles at modeling the person. Wow, AI is utterly amazing.”
You must resist the urge to become over-reliant on the digital twin.
Over-reliance is a distinct possibility. Here’s how. You use the persona. After doing so, you later meet with the client or patient. Everything the AI indicated as to responses and reactions appears to mirror what the person says and does during the session. Awesome. You decide to keep using the persona. Over and over, you use the persona.
Voila, you are hooked. The persona has led you down a primrose path. The seemingly uncanny portrayal has been spot-on. The problem is that when the client or patient diverges from the persona, your thinking gets turned backward. The person must be wrong, because the persona was always right. In other words, the person is supposed to be acting as the persona does. The world has gone topsy-turvy.
But the fault lies with you, because you have forsaken your therapist mindset and allowed AI to capture and defeat your real-world acuity. That’s bad news. Do not let that happen.
Additional Twists And Turns
There is a lot more to consider when using AI as a digital twin in a mental health context. I’ll be covering more in a series of postings. Be on the watch.
One quick point to get your mental juices flowing is this.
Suppose that you have gotten written consent from the client or patient, and they know that you are using AI to depict a persona of them. The person comes to one of your later sessions and starts to suspect that you are proceeding based on what the AI told you. They worry that the AI is portraying them in some unpleasant fashion. Furthermore, they now insist that you let them access the persona. They want to see how it represents them.
Mull that over and think about how you would contend with that potential nightmare scenario. It’s a doozy. It could arise.
A final thought for now.
Albert Einstein famously made this remark: “My mind is my laboratory.” Yes, that’s abundantly true. In the case of mental health therapy, besides your mind being your laboratory, it turns out that AI can be your laboratory too.
Proceed with aplomb.