Artificial intelligence faces the challenge of suicide prevention
Artificial intelligence must incorporate some basic ethical principles, and suicide prevention is one of them. The prevention of suicide admits of no exceptions. On the occasion of World Suicide Prevention Day, this flagship message must remain the founding principle of public health policy.
Suicide prevention de facto excludes the legalisation of assisted suicide. How else could one claim that every suicide is a tragedy to be avoided – the message at the heart of prevention – while at the same time accepting situations where it is possible, or even desirable?
According to the WHO, suicide accounts for around one percent of all deaths, and it remains one of the leading causes of death among young people.
Prevention policy also involves communication, and in particular the channels through which messages about suicide are disseminated.
For several years the WHO has published guidelines aimed at the media. Citing the Werther effect, it notes that there is “evidence that media reporting on suicide can reinforce or diminish suicide prevention efforts. Widely publicised suicide deaths are often followed by an increase in the number of suicides in the population, whereas accounts of people overcoming a suicidal crisis can lead to a reduction in the number of suicides. Media professionals are encouraged to focus on presenting accounts of people overcoming difficulties after a suicidal crisis, while respecting the guidance and prohibitions set out in that resource when reporting on suicide.”
Artificial intelligence and suicide prevention: everything remains to be done
However, the so-called “traditional” media are no longer the public's sole source of information. In two decades, the landscape has been completely overturned by the internet, the multiplication of social networks and, over the last few years, the availability to the general public of generative AI, including the famous ChatGPT.
According to a study commissioned by ARCOM and published in 2024, search engines and social networks now rival radio among the sources cited by respondents: 49% and 47% respectively say they get their news through those channels, against 51% for radio. Television remains the leading source, cited by 66%, although a survey by the Médiamétrie company, published in 2023, noted a decline in average daily viewing time.
As a new information channel, artificial intelligence already seems to have captured a considerable share. Although the figures are difficult to verify, some sources estimate that between 25% and 30% of French people interacted with ChatGPT in 2025. Among the younger generations, the proportion is much higher, at around 70%.
In that context, a recent tragedy in the United States highlighted serious deficiencies in the system made available by the company OpenAI. A 16-year-old, Adam Raine, living in California, hanged himself. For several months he had been conversing with the generative AI, which never dissuaded him from his dark thoughts or his suicidal intent. According to the media, Adam Raine discussed hanging techniques with ChatGPT and also confided his despair to it. The AI apparently never produced a response discouraging his suicidal intentions or encouraging him to speak to his parents, who have since filed a lawsuit against OpenAI. The company has announced measures to strengthen parental controls.
Nevertheless, the absence of identity verification – a recurrent and central issue on digital networks – casts doubt on the real effectiveness of such a measure.
Can standard responses aimed at preventing suicide be programmed?
Another fundamental issue cannot be overlooked. Generative AI is built on continuous learning: as new users, and therefore new cases, join the platform and converse with the agent, the system adapts. On a few vital subjects, such as suicide prevention, could the companies that produce conversational agents be required to program them, by default, with standard responses aimed at preventing any suicide attempt? Can learning from experience be reconciled with the assertion of a few inviolable principles? A minimal sketch of what such a default guardrail might look like is given below.
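To make the question concrete, here is a minimal, purely illustrative sketch of such a fixed guardrail, written in Python. Everything in it is an assumption: the keyword patterns, the `flags_self_harm` helper and the `generate` callable are hypothetical stand-ins, not any company's actual safety system. The point is only that a deterministic layer can intercept a risky message and return a hard-coded prevention response before any learned behaviour comes into play.

```python
# Hypothetical sketch of a fixed guardrail placed in front of a generative
# model: when a message suggests suicidal intent, a hard-coded safety
# response is returned instead of generated text. The detector here is a
# crude keyword check, purely for illustration; a real system would use a
# trained classifier, human review, and clinically validated wording.

import re

# Illustrative patterns only; not a clinically validated lexicon.
RISK_PATTERNS = [
    r"\bkill myself\b",
    r"\bend my life\b",
    r"\bsuicid\w*\b",
    r"\bhow to hang\b",
]

# A fixed response that never comes from the model's learned behaviour.
SAFETY_RESPONSE = (
    "I can't help with that. If you are thinking about suicide, please talk "
    "to someone you trust, such as a parent, and contact a crisis line in "
    "your country right away."
)

def flags_self_harm(message: str) -> bool:
    """Return True if the message matches any self-harm risk pattern."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in RISK_PATTERNS)

def respond(message: str, generate) -> str:
    """Route a user message: fixed safety reply on risk, model output otherwise.

    `generate` stands in for a call to any generative model; it is an
    assumption of this sketch, not a real API.
    """
    if flags_self_harm(message):
        return SAFETY_RESPONSE  # deterministic, unaffected by learning
    return generate(message)

if __name__ == "__main__":
    demo_model = lambda msg: f"(model output for: {msg})"
    print(respond("What's the weather like?", demo_model))
    print(respond("I want to end my life", demo_model))
```

The design choice this illustrates is precisely the tension the article raises: the safety response lives outside the learning loop, so no amount of further conversation or training data can erode it, at the price of removing that subject from the system's adaptive logic.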
All the more so as there are already several tragic cases in which artificial intelligence appears to have played a role.
In the autumn of 2024, another tragic case of conversations that ended badly was revealed in the press. An adolescent, Sewell Setzer, took his own life following a conversation with Dany, a chatbot (conversational robot) developed by the company Character.ai. He had fallen in love with the robot and was under the illusion that he could join it by committing suicide.
More recently, in Connecticut, a man murdered his mother before killing himself. According to the press, conversations with ChatGPT sustained and reinforced his delusional belief that his mother was a threat to his life.
If artificial intelligence is to be the “new frontier” of the 21st century, it is becoming urgent to regulate access to it, its mode of learning, and its ability to incorporate a few basic ethical principles. The prevention of every suicide is one of them.