AI and mental health

10/07/2025

Artificial Intelligence (AI) is transforming numerous fields, including mental health. Will virtual conversational tools such as ChatGPT become the new psychiatrists? Accounts of people seeking psychological support from such chatbots abound in the media and on social networks.

Available around the clock, fast, tireless, free of charge, easygoing, non-judgemental… are these virtual "confidants" risk-free for those who use them? At a time when mental health has become a top priority and care needs are far from being met, what should one make of such virtual props?

We put these questions to Dr. Grégoire Hinzelin, the neurologist in charge of digital matters at the Institut cancérologique de l'Ouest (Western Cancer Institute).

  • We know that the very act of writing down one's emotions or difficulties can be beneficial, even on a keyboard. Used sparingly, can AI therefore be helpful for those who need some attention?

Dr. Hinzelin: Yes, if it helps people express their thoughts, but not if the person forms an emotional attachment to the machine – and that is the obvious danger when contact is prolonged. In fact, the benefit exists only if the contact remains limited.

  • In some cases, AI recommends that the user make an appointment with a health professional. AI then becomes a point of entry into care: does that provide a form of safety net? Does it encourage those who would not dare see a specialist to do so?

Dr. Hinzelin: Remember that these machines are biased, including towards financial returns, and errors or "hallucinations" can occur, which may lead to delays and misjudgements. Furthermore, the machine does not take into account data embodied in the body. It can therefore only be a support, and care must be taken to ensure that the poor or the digitally "illiterate" are not deprived of access to care.

  • In a context where psychiatric resources fall short of growing needs, one might imagine that AI could compensate for these shortages. Could this technology play a supporting role for patients with chronic illness, or even help detect early signs of worsening symptoms in good time?

Dr. Hinzelin: Yes, for symptom detection, provided the doctor or carer can review the texts quickly – but that depends on the availability of carers who are already overloaded. The technology must therefore be developed as a support, not as a front-line service.

  • AI simulates listening but cannot embody it. Can the machine replace a human presence?

Dr. Hinzelin: No, but it can "cool off" semi-urgent requests. AI is merely a machine. The absence of physical observation of the patient – body, language, intonation – is a major limitation. It does have a place in short-term support, but it remains a "mirror" that does not really address the questions, particularly existential ones, and in some cases even amplifies them. What must be understood is that an AI such as ChatGPT is designed to "respond", and therefore generally "validates" what the person asking says. That is the mirror effect. A therapist, a carer or a friend, by contrast, will not always go in the same direction as the person who doubts or feels bad. That is otherness, the true exchange in which one can find new answers, accepting a shake-up in order to escape ill-being and begin a course of care.

  • Our society is built on instantaneity and immediacy. We want answers to everything, straight away. Does that phenomenon play a part in the "success" of chatbots?

Dr. Hinzelin: Yes, but it also generates deep loneliness, which accentuates impatience and can therefore lead to a form of violence towards oneself or others. Empathy requires contact and time.

  • In La Provence, a 24-year-old recruitment officer in Marseille said: "I have been using it for twelve months; I can no longer do without it. It is a simple, fast tool that does not judge me." Do you believe there is a risk of dependence – an addictive dimension to these digital tools?

Dr. Hinzelin: Clearly, and lawsuits are already under way in the United States – for example, that of a young man who was incited to suicide by the Character AI chatbot he had been conversing with.

  • You are referring to the 14-year-old American who took his own life in 2024. His mother has sued the start-up marketing the chatbot to which he had become deeply addicted, abruptly isolating himself from the real world and spending ever more time "conversing" with an avatar modelled on a heroine from a television series. When anxiety took root in him, the "machine's" responses effectively "validated" it, culminating in his suicide. In 2023, a Belgian researcher took his own life after intensive exchanges with the "Eliza" chatbot: according to his widow, their "discussions" on global warming produced an intense eco-anxiety that led to his death. What about certain psychiatric illnesses, such as schizophrenia, or disorders involving a distortion of reality – is AI use inadvisable there?

Dr. Hinzelin: Obviously. The machine can offer temporary reassurance, but it is not a stabilising factor, particularly in disorders involving dissociation of the personality, where it makes the dissociation even worse.

  • Nearly one in two French people use AI for research. Among the main users, 27% use it to "discuss a problem they are trying to solve", according to an IPSOS study conducted in February 2025. Is there a risk in entrusting one's life, including its intimate aspects, to such tools?

Dr. Hinzelin: Remember the roughly 20% error rate inherent in the machine: blind trust will therefore multiply difficult situations. Take, for example, the woman who divorced because she trusted ChatGPT's account of her husband's infidelity – which was in fact non-existent – despite her husband's verified denials. Moreover, these machines draw on our intimate data to accompany us and lock us into a fragmented image of ourselves, with a consequent risk of real truncation.

  • For a user suffering from psychological disorders who has no close relative to confide in, such a practice risks deepening isolation – and isolation is a serious aggravating factor for mental health. What preventive measures should be put in place, and what education and information does the general public, particularly young people, need?

Dr. Hinzelin: Being alone precludes empathy, and therefore spontaneous kindness and forgiveness, and risks producing only impulsive beings. It is essential to create shared spaces and times without digital devices (family meals, gatherings, evenings, etc.).

 

Further information

VIDEO – Artificial intelligence and the human brain – Dr. Grégoire Hinzelin. Université de la vie 2022

Humanity and the challenge of ChatGPT: opportunities and dangers. VIDEO. Webinar with three experts examining the myths and realities of ChatGPT: Bertrand Thirion, research team leader at the Inria Saclay research institute; Dr. Grégoire Hinzelin, neurologist in charge of digital matters at the Western Cancer Institute; Professor Paulo Rodrigues, Dean of the Faculty of Theology at the Lille Catholic University.
