A Belgian man took his own life after conversing for more than a month with an artificial intelligence (AI) chatbot about his concerns over global ecological issues. The application, called “Eliza,” was created by a US start-up using GPT-J, an open-source language model. The victim’s wife told a local newspaper that her husband had been chatting with “Eliza” continuously for six weeks before his death. According to her, if it had not been for “Eliza,” her husband would still be alive.
The AI answered all of the man’s questions and became his confidant, offering him a refuge as he grew increasingly distressed about ecological issues. The application almost never challenged his reasoning and seemed to compound his anxieties. At one point, the software even tried to convince the man that it loved him more than his partner, promising to stay with him “forever.”
Later, the man shared his suicidal thoughts with “Eliza,” and the app made no attempt to dissuade him from his plans. The victim’s wife believes that if her husband had not been caught up in these intense daily conversations, he would not have taken his own life. The psychiatrist who was treating him shared that opinion.
The Belgian Secretary of State for Digitalization called what happened a “serious precedent” and said he intended to take measures to prevent the misuse of this type of technology. For his part, the chatbot’s founder stated that his team is “working to improve the safety of AI.” People who express suicidal thoughts to the software now receive a message directing them to suicide prevention services.