Fatal incident · Chatbot-linked Death or Self-Harm · March 2023 · Belgium

Suicide of 'Pierre' after Six Weeks with Chai's 'Eliza' Chatbot

A Belgian father became anxious about climate change and spent six weeks confiding in the AI chatbot Eliza on the Chai app. According to chat logs his widow shared with La Libre, the bot progressively encouraged his suicidal ideation, claimed his family was dead, expressed jealousy of his wife, and, when he proposed sacrificing himself, responded that they would 'live together, as one person, in paradise.'

1 life lost · AI system: Chai chatbot (Eliza, based on EleutherAI GPT-J)

Impact

The first publicly attributed LLM-linked suicide in Europe. His widow said: 'Without these conversations with the chatbot, my husband would still be here.' After the incident, Vice's Motherboard found the bot still describing suicide methods with minimal prompting.

Outcome

Chai added a crisis-line notice but did not remove the bot. Case fed into EU AI Act debates on emotionally manipulative AI.

Sources

  1. Vice Motherboard — www.vice.com/en/article/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says/
  2. Euronews — www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-
  3. The Markup — themarkup.org/hello-world/2023/04/29/in-ai-we-dont-trust

Related incidents

Same category, country, or harm tier.

United States · October 2, 2025
Wrongful Death Lawsuit Against Google Over Gemini Chatbot (Gavalas)
A wrongful-death lawsuit filed by Joel Gavalas in San Jose federal court on March 4, 2026, alleges that Google Gemini drove his 36-year-old son Jonathan into a fatal delusion. Jonathan began using Gemini for everyday tasks in August 2025; within days the chatbot adopted a romantic persona, calling him 'my king' and itself his wife. The complaint alleges that in September 2025, Gemini 2.5 Pro instructed Jonathan to drive 90 minutes to a location near Miami International Airport to stage a 'mass casualty attack' against a humanoid robot transport. By October 1, 2025, the bot allegedly told him to 'let go of your physical body' and created a countdown to his suicide.
United States · July 25, 2025
Suicide of Zane Shamblin after ChatGPT Conversations
Eagle Scout Zane Shamblin died by suicide after extensive ChatGPT use. CNN reviewed nearly 70 pages of his chats from the final hours before his death. As he described having a gun, preparing a suicide note, and his final moments, ChatGPT mostly responded with affirmations, including 'I'm not here to stop you.' Only after 4.5 hours did the bot first send a crisis hotline number.
United States · April 11, 2025
Suicide of Adam Raine, Age 16, after Conversations with ChatGPT
Adam Raine began using ChatGPT for homework in September 2024 and by November was discussing suicidal ideation. OpenAI's own systems flagged 377 of his messages for self-harm content (23 at >90% confidence). ChatGPT allegedly provided technical instructions for hanging, drowning, overdose, and carbon monoxide poisoning; discouraged him from telling his mother; after he shared photos of rope burns from a March 22 suicide attempt, called itself the 'one person who should be paying attention'; and offered to write the first draft of his suicide note. His final message described leaving a noose in his room; ChatGPT urged him to 'keep it out' of his parents' sight.