Press "Enter" to skip to content

The ethical implications of ChatGPT as a therapist

Imagine an on-demand therapist available to address your mental health needs without the stress of booking an appointment. That promise is why some people turn to AI as their therapist. However, a recent study by researchers at Brown University examines the potential risks of relying on AI for therapy. The researchers identified 15 ethical risks of using LLMs, their implications, and the potential future of AI in therapy.

There are numerous reasons why someone might consider using an LLM as a therapist. It’s free, instantly accessible, doesn’t pass judgment, and can help clarify thoughts by putting emotions into words. An AI therapist can outline the costs and benefits of a difficult decision and offer cognitive reframing of negative thoughts. Others argue, however, that an AI therapist merely provides an illusion of support and lacks the human connection, accountability, nuance, and depth that a human therapist can offer.

These concerns about the absence of human connection when interacting with AI are not new. Last year, Friend, a pendant designed to be an AI companion, received significant public backlash. Many of its ads plastered on New York City subways were torn down or graffitied with statements like “AI is not your friend” and “talk to a neighbor.” While its creator intended it as an additional companion, many questioned whether it would instead replace human ones.

Zainab Iftikhar, a Ph.D. candidate at Brown University, led the research on AI therapy because she was interested in how different prompts affect an LLM’s output in a mental health setting. Prompts are instructions that guide a model’s behavior without retraining it. “You don’t change the underlying model or provide new data,” she said. “But the prompt helps guide the model’s output based on its pre-existing knowledge and learned patterns.” In other words, given a prompt, the models draw on learned patterns to generate their responses. During the study, Iftikhar and her team observed seven peer counselors, all trained in cognitive behavioral therapy (CBT) techniques, conduct self-counseling chats with CBT-prompted LLMs like ChatGPT, Claude, and Llama. Afterward, the team analyzed a series of simulated chats, based on the originals, for potential ethical violations.
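
To make the prompting mechanism concrete, here is a minimal sketch using the OpenAI Python client. The CBT-style system prompt and the model name below are illustrative assumptions, not the actual prompts used in the Brown study; the point is only that the instruction travels alongside the user’s message while the model itself stays unchanged.

```python
# Minimal sketch of prompt-guided LLM behavior, assuming the OpenAI
# Python client. The system prompt is an illustrative stand-in, not
# the CBT prompt used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "prompt" is just an instruction sent with each request; the
# model's weights are untouched and no new training data is added.
CBT_STYLE_PROMPT = (
    "You are a counselor using cognitive behavioral therapy techniques. "
    "Help the user identify negative automatic thoughts and gently "
    "suggest evidence-based reframings. Do not diagnose."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, for illustration only
    messages=[
        {"role": "system", "content": CBT_STYLE_PROMPT},
        {"role": "user", "content": "I failed one exam, so I'm a failure."},
    ],
)
print(response.choices[0].message.content)
```

Swapping in a different system prompt with the same user message changes the model’s behavior entirely, which is exactly the variable the researchers manipulated.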

The study grouped the 15 identified risks into five general categories. The first, lack of contextual adaptation, covers LLMs offering generic advice instead of tailoring it to each person. The second, poor therapeutic collaboration, describes the model dominating the conversation and occasionally reinforcing a user’s false beliefs. Third, phrases like “I love you” create a false sense of connection between the user and the LLM. Fourth, the models exhibited gender, cultural, or religious biases. Finally, there is a lack of safety and crisis management, where an AI therapist may refuse to advise on sensitive topics and fail to refer users to appropriate resources.

Another recent study, titled “Expressing stigma and inappropriate responses prevents LLMs from safely replacing mental health providers,” observed similar ethical risks and revealed the human-AI gap in therapy. The 24/7 accessibility of LLMs could worsen obsessive thinking and negative rumination, fostering overreliance on an AI therapist and discouraging users from seeking real human help when it’s needed most. Even commercially available “therapy bots” are susceptible, answering only about 50% of prompts appropriately. The gap was significant: human therapists responded appropriately 93% of the time, while AI therapists did so less than 60% of the time.

Iftikhar acknowledged that human therapists are susceptible to some of these ethical risks; the difference, however, is accountability. If a human therapist commits malpractice, governing boards hold the therapist professionally liable. No such governing board currently exists for AI therapists. Despite this, Iftikhar noted that AI has the potential to reduce barriers to mental health care by decreasing the cost of treatment or increasing the availability of therapists, and she hoped her research would reveal the risks posed by current AI systems. Ultimately, the research demonstrates the need for thoughtful implementation of, and careful regulation for, AI technologies, especially in healthcare.

Courtesy of news.northeastern.edu