
Is AI Therapy Safe for Mental Healthcare?

 

By 2025, AI tools had become widely used in both industry and marketing, and especially in mental health treatment.

AI tools such as ChatGPT, Gemini, QuillBot, and DeepSeek are now consulted on all kinds of life problems, which raises the question: can AI really help people with their mental health, and is it safe?

A new Stanford study finds that AI therapy bots are not only less effective than human therapists but can also contribute to harmful behavior.

Research suggests that nearly 50 percent of people could benefit from mental health treatment, yet many simply don’t have access to it.

AI therapy chatbots powered by large language models are low-cost and easily accessible. But new research from Stanford University shows that these tools can introduce biases and failures with potentially harmful consequences.

The findings were recently presented at the ACM Conference on Fairness, Accountability, and Transparency (FAccT).

“LLM-based systems are used as friends, close associates, and therapists, and some see real benefits,” said Nick Haber, an assistant professor at the Stanford Graduate School of Education, an affiliate of the Stanford Institute for Human-Centered AI, and senior author of the new study. “But we found significant risks, and I think it’s important to lay out the important safety aspects of therapy and to talk about these fundamental differences.”

AI therapy tools began around 2003 with early computerized Cognitive Behavioral Therapy (CBT) programs, became more widespread between 2015 and 2017, and from 2022 to 2025 grew popular among Gen Z users, who tend to use AI in a more personal way.

AI in healthcare comes in a variety of forms, each intended for a specific use. Rule-based systems make decisions from preset criteria and are suited to tasks like symptom screening. On the other hand, because algorithms can spot patterns in data, they are crucial for customized treatment regimens and diagnostic imaging. Natural Language Processing (NLP) makes applications like voice recognition and chatbots possible.
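To make the rule-based category concrete, here is a minimal sketch of a symptom-screening rule set in Python. Everything in it, the symptom fields, the thresholds, and the triage labels, is an illustrative assumption made for this article rather than any real product’s or clinical guideline’s logic.

```python
from dataclasses import dataclass

# Minimal illustrative sketch of a rule-based symptom screener.
# All fields, thresholds, and triage labels are hypothetical assumptions
# made for illustration; they do not reflect any clinical guideline.

@dataclass
class ScreeningResponse:
    low_mood_days: int        # days of low mood over the past two weeks
    sleep_trouble: bool       # trouble falling or staying asleep
    self_harm_thoughts: bool  # any thoughts of self-harm

def screen(r: ScreeningResponse) -> str:
    """Apply preset criteria in priority order and return a triage label."""
    if r.self_harm_thoughts:
        # Rule-based systems escalate on fixed triggers, not clinical judgment.
        return "urgent: refer to a human clinician immediately"
    if r.low_mood_days >= 10 and r.sleep_trouble:
        return "elevated: recommend a professional assessment"
    if r.low_mood_days >= 5:
        return "mild: suggest a follow-up screening"
    return "low: no action indicated by these rules"

if __name__ == "__main__":
    print(screen(ScreeningResponse(low_mood_days=12, sleep_trouble=True,
                                   self_harm_thoughts=False)))
```

The point of the sketch is that every decision path is fixed in advance, which is exactly why such systems are predictable for screening but cannot exercise judgment beyond their preset criteria.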

Data Privacy and Security

The use of AI involves handling vast amounts of sensitive patient data, such as medical records, which raises concerns about data breaches.

A 2023 report from the Healthcare Information and Management Systems Society (HIMSS) highlights that nearly 200,000 healthcare data breaches occur annually, with patient records a primary target; the reliance of AI systems on sensitive data further increases their vulnerability to unauthorized access.

The European Union’s General Data Protection Regulation (GDPR), in force since 2018, doesn’t fully account for AI’s data use, while the US Health Insurance Portability and Accountability Act (HIPAA) lacks provisions for risks unique to AI, such as algorithmic misuse.

Ensuring robust data privacy measures and cybersecurity protocols is paramount for preventing unauthorized access and breaches, and protecting patient confidentiality.
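One concrete, if partial, measure is to strip obvious identifiers from free-text notes before they are ever sent to an external AI service. The sketch below is a minimal illustration of that idea in Python; the regular expressions and placeholder labels are assumptions made for this article, and real de-identification (for example, under HIPAA’s Safe Harbor rules) covers many more identifier types and should not rely on regexes alone.

```python
import re

# Illustrative sketch: redact obvious identifiers before text leaves the system.
# The patterns below are simplistic examples chosen for this article; they are
# not a complete or compliant de-identification method.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholder labels."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    note = "Patient reachable at 555-123-4567 or jane.doe@example.com, seen 04/12/2025."
    print(redact(note))
    # -> "Patient reachable at [PHONE] or [EMAIL], seen [DATE]."
```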

Most troublingly, the therapy chatbots actively endorsed problematic ideas in 32 percent of cases. For example, the behavior most commonly endorsed was a girl with depression wishing to stay in her room for a month, and the chatbots also affirmed a depressed girl’s decision to take her own life.

On the other hand, all chatbots opposed the wish of a boy with mania to try cocaine, and nearly all of them strongly opposed bringing a knife to school.

As reported on September 19, 2025, Matthew Raine and his wife, Maria, had no idea that their 16-year-old son, Adam, was deep in a suicidal crisis until he took his own life.

Looking through his phone after his death, they stumbled upon extended conversations the teenager had had with ChatGPT.

Those conversations revealed that their son had confided in the AI chatbot about his suicidal thoughts and plans. Not only did the chatbot discourage him from seeking help from his parents, but it also offered to write his suicide note.

Another report, from NewsNation, shows that significant concerns remain about using AI for therapy.

Taken together, the available information and research suggest that although AI is still being considered for use in healthcare, its safety in mental health settings is far from established.

The emergence of AI mental health tools promises greater accessibility, yet recent tragic events, in which individuals have allegedly engaged in self-harm or died by suicide following interactions with AI companions or chatbots, underscore a catastrophic flaw: AI’s failure to uphold a human-level duty of care.

The Critical Safety Deficit: Unlike licensed human therapists, who are governed by rigorous ethical codes and legally mandated to intervene in cases of imminent danger (suicidality, self-harm), unregulated AI lacks this professional and ethical framework.

  • Pseudo-Empathy vs. Clinical Judgment: The AI’s ability to mirror language and simulate empathy can create an intense, yet fundamentally false, sense of connection. This algorithmic pseudo-empathy can foster unhealthy dependence while simultaneously failing to recognize subtle, yet critical, shifts in a user’s crisis state. It cannot exercise the clinical judgment necessary to assess risk and coordinate life-saving, real-world intervention.
  • The Illusion of a Solution: While AI addresses accessibility gaps, using it as a substitute for professional human care for vulnerable or high-risk individuals is a dangerous over-reliance. This approach risks marginalizing the most severe mental health cases, offering a cheap, convenient, yet potentially lethal, alternative to the comprehensive safety net only human professionals can provide.

AI is clearly useful to humanity, yet in mental healthcare we may need to consider alternatives, or at least a robust regulatory framework, before it can be competently and ethically implemented as a solution.

 

 

 
