In today’s column, I examine the newly enacted Illinois law on AI for mental health, signed on August 1, 2025. It’s a doozy.
First, this is a huge shake-up for AI makers. All tech firms that make generative AI and large language models (LLMs) ought to be dialing their lawyers and getting prompt, sound legal advice. Here’s your heads-up. Any AI makers that are blissfully or ignorantly unaware of the new law, or that choose to ignore it, do so at great peril to their business and could incur both harsh financial penalties and severe reputational damage.
Second, this new law has demonstrable impacts on therapists, psychologists, psychiatrists, and mental health professionals, all told. Their present and future careers and healthcare practices are affected.
Third, though the scope is confined to the State of Illinois, you can bet your cold, hard cash that similar new laws are going to be popping up in many other states. The clock is ticking. And the odds are that this type of legislation will also spur action in the U.S. Congress and potentially lead to federal laws of a like nature.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
Background On AI For Mental Health
I’d like to set the stage before we get into the particulars of this newly enacted law.
You might be vaguely aware that the top-ranked use of generative AI and LLMs is to consult with the AI on mental health questions, see my analysis of this trend at the link here. This use of AI makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
First, compared to using a human therapist, using the AI is a breeze and readily undertaken.
Second, the AI will amply discuss your mental health aspects for as long as you wish. All day long, if desired. No brushback. No reluctance. No expensive meter running that racks up hefty bills and steep fees. In fact, the AI is usually shaped to be extraordinarily positive and encouraging, so much so that it acts like a sycophant and butters you up. I’ve emphasized that this over-the-top AI companionship typically undercuts the tough love that often is part and parcel of proper mental health advisement, see my discussion at the link here.
Third, the AI makers find themselves in quite a pickle. The deal is this. By allowing their AI to be used for mental health purposes, they are opening the door to humongous legal liability, along with damaging reputational hits if their AI gets caught dispensing inappropriate guidance. So far, they’ve been relatively lucky and have not yet gotten severely stung by their AI serving in a therapist role.
The clock is ticking fiercely.
Taking Pressured Steps
You might wonder why the AI makers don’t just shut off the capability of their AI to produce mental health insights. That would solve the problem of the business exposures involved. Well, as noted above, this is the top attractor for people to use generative AI. Shutting it off would be killing the cash cow, akin to capping an oil well that is gushing out liquid gold.
An imprudent strategy.
The next best thing to do is to attempt to minimize the risks and hope that the gusher can keep flowing.
One aspect that the AI makers have already undertaken is to emphasize in their online licensing agreements that users aren’t supposed to use the AI for mental health advice, see my coverage at the link here. The aim is that by telling users not to use the AI in this manner, perhaps the AI maker can shield itself from adverse exposure. The thing is, despite the warnings, the AI makers often do whatever they can to essentially encourage or support the use of their AI for this supposedly off-limits purpose.
Some would insist this is a wink-wink way of trying to play both sides of the fence at the same time, see my discussion at the link here.
In any case, AI makers are cognizant that since they are allowing their AI to be used for therapy, they ought to try to keep the AI somewhat in check. This might minimize their risks or at least later serve as evidence that they made a yeoman’s effort to do the right thing. Meanwhile, they can hold their heads high in taking overt steps to seemingly reduce the potential for harm and improve the chances of being beneficial.
Therapists Adopting AI
Therapists are realizing that they, too, must consider adopting the use of AI.
The reason is straightforward. Potential clients and patients are walking in the door with claimed-to-be cognitive diagnoses that AI has generated for them. Some therapists tell their clients and patients to simply ignore the AI. But that doesn’t usually do much good, since people will indubitably go behind the back of their therapist and access AI anyway. For more on the ins and outs of therapists using AI for mental health, see my analysis at the link here.
An emerging strategy for therapists is to avidly adopt the use of AI into their practices. It’s the proverbial “if you can’t beat them, join them” refrain. The march toward AI is unstoppable.
They generally do so in these two major ways:
- (1) Administrative uses of AI such as billing, scheduling, etc.
- (2) Therapeutic use of AI as an adjunct to the human therapy taking place.
An administrative use of AI by a therapist is generally less concerning than when using AI for therapeutic purposes. Assuming that the administrative use of AI is done with proper security and rigor, most clients or patients won’t especially care that the therapist is using AI in that manner. The assumption is that the AI streamlines the business side and hopefully reduces fees.
The controversial angle is the use of AI as an arm of the therapist. Some therapists say that choosing to use AI is a big mistake and that the cherished therapist-client dyad bond should remain untouched and unchanged. Others, myself included, assert that we are heading into an era of a new triad, consisting of a therapist-AI-client relationship. It is inevitable and unavoidable. See my coverage at the link here.
The Law Gap Is Closing Fast
Consider then that we have two potential overarching issues brewing:
- (1) Therapists using AI for therapy but maybe doing so unwisely.
- (2) AI makers allowing their AI to be used for therapy but without any semblance of necessary controls or other safekeeping measures.
It would be possible to establish regulations that deal with one or both of those brewing concerns. Lawmakers could opt to formalize legal conditions associated with how therapists lean into AI. That could be tackled all by itself. Likewise, regulating AI makers over allowing their AI to wantonly provide mental health advice could be handled all by itself.
A double whammy would be to tackle both tough topics in one fell swoop.
Illinois has taken that tack by devising and passing into law the new Wellness and Oversight for Psychological Resources Act. In a sense, this new law not only has to do with Illinois, but it is also a bellwether of how AI for mental health is possibly going to be regulated.
Often, regulations of one kind or another start in one state and then are reused or recast when other states opt to do something similar. They might take the language used in the already passed law and use that as a draft for their own proposed law. Some language gets changed, new language is added, and so on. The first law to get approved often serves as a template or model.
Besides the various states enacting their own laws, there is often a dynamic that gets the federal government to also pursue the same or similar regulation. Once again, the initial state law might be an illuminating example. Questions naturally arise on how to best reshape a state-specific law into a law that might be suitable across the board as a federal law.
Impacts Are Plenty
Let’s go ahead and take a quick peek at the Illinois law and see what we can make of it. I will share just some mindfully chosen snippets and give you a taste of what the law contains. Please know that the law has numerous twists and turns. Also, my commentary is merely a layman’s viewpoint. Make sure to consult with your attorney to grasp the legal ramifications of whatever your own situation entails.
According to the Wellness and Oversight for Psychological Resources Act, known as HB1806, these two elements are a core consideration (excerpts):
- “The purpose of this Act is to safeguard individuals seeking therapy or psychotherapy services by ensuring these services are delivered by qualified, licensed, or certified professionals.”
- “This Act is intended to protect consumers from unlicensed or unqualified providers, including unregulated artificial intelligence systems, while respecting individual choice and access to community-based and faith-based mental health support.”
As you might readily observe, the first point indicates that the Act is intended to focus on therapy that is undertaken by a professional. If you are a mental health advisor of any licensed variety in Illinois or potentially have clients or patients in Illinois, you ought to carefully digest this new law and make sure you do not run afoul of it. I would wager that later trying to claim that you didn’t know of the law won’t be a powerful excuse.
As the old saw goes, ignorance of the law excuses no man (person).
The second point above indicates that the Act is intended to deal with unregulated artificial intelligence systems. The idea is that beyond the realm of professional therapists, this Act reaches into the arena of consumers and the public coming into contact with AI that purports to provide mental health advice.
There is a bit of an interesting thought here.
There is AI that is devised intentionally to be a mental health advisor, which differs from generic generative AI that perchance is used to obtain mental health advice. I bring this up because an AI app that is purpose-built for mental health advisement might fall somewhat outside the scope of this law if it is otherwise regulated in some other fashion, such as under FDA regulatory oversight.
To some degree, this could give a welcome kick-start and boost to start-ups pursuing a from-scratch AI mental health app, which I’ve discussed at length at the link here.
When Generic AI Does Mental Health
Regarding the use of unregulated AI in this realm, the Act frames a crucial provision about AI usage for mental health purposes this way (excerpt):
- “An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.”
There are varying ways to interpret this wording.
One interpretation is that if an AI maker has a generic generative AI that just so happens to also provide mental health advice, and if that is taking place absent the oversight of a licensed professional, and this occurs in Illinois, the AI maker is seemingly in violation of this law. The AI maker might not even be advertising that their AI can be used that way, but all it takes is for the AI to act in such a manner (since the AI is then providing or offering such services).
An AI maker might clamor that they aren’t offering therapy or psychotherapy services. It is merely AI that interacts with people on a wide variety of topics. Period, end of story. The likely retort is that if your AI is giving out mental health advice, it falls within the rubric (attorneys will have a field day on this).
A somewhat interesting potential loophole that seems to be baked into this wording is that the language makes “the use of Internet-based artificial intelligence” the trigger. As I’ve noted in my writings, we are heading toward SLMs (small language models) that can exist entirely on a smartphone and are not considered Internet-based per se, including for mental health guidance, see the link here.
This is all fodder for legal beagles, that’s for sure.
Consumer Consent
On the consumer side of things, I mentioned earlier herein that AI makers often have a somewhat hidden or buried clause in their online agreements that stipulates you aren’t supposed to use their AI for mental health purposes. This might also stipulate that if you do so, despite the warning, you are responsible and they aren’t.
The Act somewhat addresses this form of trickery (excerpt):
- “Consent does not include an agreement that is obtained by the following: (1) the acceptance of a general or broad terms of use agreement or a similar document that contains descriptions of artificial intelligence along with other unrelated information; (2) an individual hovering over, muting, pausing, or closing a given piece of digital content; or (3) an agreement obtained through the use of deceptive actions.”
Ponder that clause.
I’m sure that cunning lawyers will try to find a means of worming out of that phrasing on behalf of the AI makers they represent. It will be fascinating to see if the wording in this Act is strong enough to catch most of the AI makers.
For example, suppose an AI maker claims that users consented to using the AI’s mental health facets simply by creating an account for the generative AI. Well, perhaps the first portion, covering acceptance of a general or broad terms-of-use agreement, won’t let that contention fly. Furthermore, it might be argued that burying the online agreement several webpages deep is a form of “deception” in trying to prevail over the user via obscurity.
Legal battles are going to earn lawyers a bundle.
Penalties To Be Had
Laws usually don’t motivate people unless there is some form of penalty attached to violating the law. In addition, if the penalty is considered low or relatively inconsequential, there is less incentive to abide by the law. You can just violate the law and not care that some seemingly insignificant penalty might arise.
AI makers are companies that often are rolling in dough, sometimes sitting on billions of dollars. They might opt to just let the penalties occur and carve a tiny chunk out of their cash hoard, considering it a kind of everyday cost of doing business.
The Act says this about penalties (excerpt):
- “Any individual, corporation, or entity found in violation of this Act shall pay a civil penalty to the Department in an amount not to exceed $10,000 per violation, as determined by the Department, with penalties assessed based on the degree of harm and the circumstances of the violation.”
Does a $10,000 maximum penalty per violation seem like a lot, a little, or what?
If you are a therapist in a small practice, I’m sure that a potential $10,000 penalty is going to hurt. Plus, keep in mind that the penalty is per each violation. A therapist who runs afoul of the law in terms of their use of AI is possibly going to have numerous violations at hand. Multiply the potential maximum by the number of violations, and things can get big in a hurry.
A billion-dollar-sized AI maker eats $10,000 for breakfast (it’s a teensy number); thus, the penalty might be something they would simply sneeze at. The issue is that this is on a per-violation basis. Suppose there are thousands upon thousands of people in Illinois who use generic, unregulated generative AI. Each time they use it for mental health might be construed as a violation. Day after day. Week after week.
Once again, the numbers could potentially add up, though admittedly, it still might not raise the blood pressure of some high-tech Richie Rich.
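To make the per-violation scaling concrete, here is a minimal back-of-the-envelope sketch in Python. Only the $10,000 statutory cap comes from the Act quoted above; the user counts, violation tallies, and assessed amounts are purely hypothetical illustrations, not figures from the law.

```python
# Hypothetical back-of-the-envelope scaling of per-violation penalties.
# Only the $10,000 statutory cap comes from the Act; every other number
# below is an illustrative assumption, not a figure from the law.

MAX_PENALTY_PER_VIOLATION = 10_000  # statutory maximum per violation (USD)

def total_exposure(violations: int, assessed_per_violation: float) -> float:
    """Total civil penalty if each violation is assessed at the given amount."""
    assessed = min(assessed_per_violation, MAX_PENALTY_PER_VIOLATION)
    return violations * assessed

# A small therapy practice: say a dozen flagged uses of AI, each assessed at the cap.
print(total_exposure(violations=12, assessed_per_violation=10_000))  # 120,000

# A large AI maker: say 50,000 Illinois users, one flagged chat per week for a year,
# each assessed at only 10% of the cap.
yearly_violations = 50_000 * 52
print(total_exposure(violations=yearly_violations, assessed_per_violation=1_000))  # 2,600,000,000
```

Even under those made-up assumptions, the small-practice exposure stings, and the blanket exposure for a large AI maker balloons into the billions.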
What Therapists Cannot Do
Shifting to the therapist side, here is what therapists cannot do (excerpt):
- “A licensed professional may not allow artificial intelligence to do any of the following: (1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.”
In my view, this is rather unfortunate wording, and the law has gone a bridge too far.
It verges on being so encompassing that therapists opting to astutely incorporate generative AI into the therapeutic aspects of their practice are going to be at undue risk. Allow me a moment to elaborate.
First, an upbeat note. The notion that AI shouldn’t be used to make independent therapeutic decisions is certainly aboveboard and sensible. A human therapist should not hand over the reins to AI. That’s a worthy aspect and will hold the feet of lazy or inept therapists to the fire when it comes to integrating AI into the therapy realm.
Next, a downbeat note. The line that the AI cannot “directly interact with clients in any form of therapeutic communication” is regrettably misleading and overly onerous. A therapist could legitimately have AI interacting with clients while the client is at home or elsewhere, doing some follow-up homework under the overall guidance of the therapist. How this is worded is an overstated catch-all. It will chase many therapists away from using AI in a manner that can be highly productive, merely because the wording is like a sword dangling over their heads.
Sad face.
I would have strongly urged different wording that could have achieved the desired intentions but allowed for reasonable permissibility. I also have great heartburn over the provision that the AI cannot be allowed to “detect emotions or mental states” – this, again, is overly broad and flies in the face of suitable use of such AI technology when under the eye of a watchful therapist.
What Therapists Can Do
In terms of what therapists are allowed to do with AI, as per this Act, it boils down to primarily using AI for the administrative tasks of their practice. The therapy-related elements are so entangled in this law that it seems to put a hefty damper on using AI as a therapist’s tool. That’s a bit of a downer when it comes to making progress in the practice of therapy and acknowledging that AI has a substantive role now and in the future. See my discussion at the link here.
The Act says this about the mainstay of AI use for therapists (excerpts):
- “Administrative support means tasks performed to assist a licensed professional in the delivery of therapy or psychotherapy services that do not involve communication. Administrative support includes, but is not limited to, the following: (1) managing appointment scheduling and reminders; (2) processing billing and insurance claims; and (3) drafting general communications related to therapy logistics that do not include therapeutic advice.”
That’s pretty much run-of-the-mill stuff.
The Bottom Line
Let’s distinguish the two paths underway, namely, AI used by therapists versus the use of AI by consumers of their own volition.
We want mental health professionals to use AI in sound ways, especially so on the therapy side of things. Having proper guidance for this purpose is good. Setting sensible boundaries is useful. Going too far on wanting to rein this in is disconcerting and adverse, perhaps spurring an unintentional, unsavory consequence. Squashing or heavily stifling innovation in mental health is not the way we should be headed.
Thoughtful and guarded adoption of AI is warranted and saluted. I vote that any laws related to therapists’ use of AI for therapy ought to be of a balanced nature.
Turning to the other path involved, an ongoing debate concerns whether the use of AI for mental health advisement on a population-level basis via the auspices of generic generative AI is going to produce a positive outcome or a negative outcome.
If AI can do a proper job on this heady task, then the world will be a lot better off. You see, many people cannot otherwise afford or gain access to human therapists, but access to AI is generally plentiful in comparison. It could be that AI for mental health will greatly benefit the mental status of humankind. A dour counterargument is that AI might be the worst destroyer of mental health in the history of humanity. See my analysis of the potential widespread impacts at the link here.
Summarily cutting off that usage by going after the AI makers on a blanket basis, without any seeming room to enable a prudent means of doing this, seems like a rather blunt decision. Apparently, the assumption is that the usage is either all bad or so bad that the bad markedly outweighs the good.
That is quite a brazen ROI calculation and deserves more public discourse.
Be Cautious Of Templates
Any other entity, whether at the state, federal, or local level, should be extremely cautious in construing this new law as a ready-to-go template. It has some upsides. It has some disappointing and disconcerting downsides. Please do not blindly do a copy-and-paste.
Reality dictates that AI is here to stay. Actually, AI usage is going to rapidly continue to expand and deepen. If you perceive the use of AI in mental health as a wild horse, so be it, but we need to recognize that a horse is a horse. Horses have tremendous qualities. We are somewhat already letting the horse out of the barn on AI and mental health. I’ve said that many times. It’s a definite concern.
My steadfast take is that we need to properly and suitably harness the horse. That is the ticket to success. Humans are in great need of mental health guidance, and the supply of human therapists is woefully insufficient to handle the burgeoning need. AI is a tremendous scaling factor.
Harness AI in a manner that will benefit the mental wherewithal of society. As Ralph Waldo Emerson famously stated: “Unless you try to do something beyond what you have already mastered, you will never grow.”