Artificial advice: Why some have concerns about AI therapy
Melissa Bulicek had a big decision to make. After the air freight company that employs her announced relocation plans, the divorced mom had to choose between moving to Detroit to keep her well-paying job and remaining in Bartlett with her three children.
“I didn’t want to lose my connection to my kids,” Bulicek said.
But she couldn’t ignore the opportunities the relocation offered. To help her decide, she turned to ChatGPT, an artificial intelligence tool developed by OpenAI.
The chatbot didn’t make a specific recommendation, but it offered pros and cons with one caveat: If there’s a chance of ruining the emotional bond with her children, ChatGPT suggested, she shouldn’t go.
Bulicek took the advice.
“I don’t have time nor funds to see a therapist any more and ChatGPT is more helpful and quicker than any therapist has ever been,” Bulicek said of her decision to seek AI assistance.
While Bulicek’s experience was positive, mental health professionals say replacing licensed therapists with AI could be problematic and potentially dangerous.
“Even basic mental health information is incorrect,” said Carrie Summers, a psychotherapist and licensed clinical social worker with offices in Wheaton.
Summers treats clients with anxiety, depression and other issues. She also helps people experiencing life transitions, relationship challenges and narcissistic abuse.
She recently questioned ChatGPT three times about types of narcissists. She said the chatbot answered incorrectly each time, which happens more often than people think.
“What (ChatGPT) is doing is a deep-dive internet search and spitting out collectively what it finds,” Summers said.
Inaccurate information can exacerbate a client’s situation, she added.
“People don’t know to cross-check what AI is telling them,” Summers said.
AI also cannot detect a client’s physical presentation, such as body language and vocal pitch, the way an in-person therapist can.
Mental health professionals also have concerns about clients becoming dependent on AI programs, leading to a decline in decision-making capabilities.
As a result, they may not develop essential coping skills, said Kelly Hayden, a behavioral health therapist at Alexian Brothers Behavioral Health Hospital in Hoffman Estates.
“I don’t think (AI) was designed to be a therapeutic tool,” she said.
And while people may initially use AI for innocent reasons, the more they “engage, the more the (chatbot) may encourage them in beliefs that are harmful or not grounded in reality,” Hayden said.
As a safeguard, Gov. JB Pritzker in August signed legislation limiting the use of AI in psychotherapy.
The Wellness and Oversight for Psychological Resources Act allows mental health professionals to use AI for administrative and supplementary support services, but prohibits its use in providing mental health services and in therapeutic decision-making, according to Chris Slaby, spokesman for the Illinois Department of Financial and Professional Regulation. The agency oversees and licenses more than 1.2 million professionals.
“The act does not prohibit the use of AI programs as a therapeutic tool by the public,” Slaby explained via email. “It only prohibits such tools from being offered by licensed professionals who provide therapy and/or psychotherapy services.”
A mental health professional who violates the law could be fined up to $10,000, Slaby said.
“The people of Illinois deserve quality health care from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients,” IDFPR Secretary Mario Treto Jr. said in a prepared statement. “This legislation stands as our commitment to safeguarding the well-being of our residents by ensuring that mental health services are delivered by trained experts who prioritize patient care above all else.”