More Adults Are Turning to Chatbots for Mental Health Support… and Why You Shouldn't


Chatbots were originally designed to answer simple queries. They helped students revise, assisted customers with order tracking, and handled basic troubleshooting. Today, however, conversations with large language models such as ChatGPT are becoming far more personal.

Recent surveys suggest that close to half of US adults have used LLMs for mental health support, and figures reported elsewhere are broadly similar.

The reasons are easy to understand. These tools are available instantly, at any time of day: no waiting list, no referral process and, importantly, no cost barrier. Whenever someone needs to talk, an LLM responds straight away.

For some, it feels safer than speaking to a therapist. There is no fear of judgement, and nothing said seems to carry consequences.

But while LLMs can be a useful sounding board, there are important risks and limitations that adults need to consider before relying on them as a substitute for professional care.

LLMs Can Provide Incorrect Information

Large language models generate responses based on patterns in data. They do not “know” information in the way a trained clinician does. 

Studies evaluating AI medical responses have found that while many answers are broadly accurate, a significant percentage contain factual inaccuracies, incomplete advice, or misleading nuance.

Because responses are delivered confidently, it can be difficult for users to recognise when information is flawed. In mental health contexts, nuance matters. Oversimplified explanations of anxiety, depression, or neurodevelopmental conditions can reinforce misunderstandings rather than clarify them.

LLMs Cannot Provide a Proper Diagnosis

AI tools can describe symptoms. They can suggest possibilities. But they are not equipped to conduct structured clinical assessments, evaluate medical history in depth, or rule out overlapping conditions.

For example, difficulties with focus, emotional regulation, or fatigue may point towards several different explanations. Only a qualified professional can properly assess these in context. If someone suspects ADHD in adulthood, seeking a private adult ADHD assessment ensures a structured evaluation rather than a speculative conversation.

The bottom line: Diagnosis requires training, clinical judgement, and professional experience. 

LLMs Can Respond Inappropriately in High-Risk Situations

One of the most concerning limitations involves crisis responses. Research has shown that LLM systems can fail to reliably recognise suicidal intent or may generate responses that are insufficiently supportive. 

While developers are improving safety filters, no LLM tool can fully replace the judgement of a trained mental health professional.

There is also a risk of reinforcing stigma. If an LLM is trained on biased data, it may inadvertently perpetuate harmful narratives about certain conditions or populations.

In situations involving severe distress, self-harm, or risk to others, human intervention is essential.

LLMs Don’t Provide Treatment

Perhaps the clearest distinction is this: AI can offer information and reflection. It cannot provide treatment.

Mental health specialists offer structured therapy approaches grounded in evidence. They track progress over time. They adapt interventions based on your responses. They are accountable for safeguarding your well-being.

Treatment may include psychological therapies, medication where appropriate, lifestyle adjustments, or coordinated care with other professionals. None of this can be replicated by a chatbot, regardless of how empathetic its wording appears.

It is understandable that adults are turning to LLMs. But when it comes to mental health, professional care remains irreplaceable. Technology cannot substitute for expertise, accountability, or human judgement.

Photo by Mohamed Nohassi on Unsplash