September is National Suicide Prevention Awareness Month. At CAG, we have witnessed a horrific trend of suicides by young people. Internet companies are inextricably linked to all of them, whether by shipping suicide kits to kids, prodding kids toward it with coercive and addictive AI, hosting pro-suicide sites, or serving as the platform where it can be livestreamed.

Online Retailers Providing Suicide Methods

Earlier this month, the Washington Supreme Court convened at the Temple of Justice in Olympia, Washington, to hear our appeal in our case against Amazon for knowingly promoting, selling, and delivering laboratory-grade Sodium Nitrite at 98+% purity (“SN”), a product with zero household uses that is commonly used as a suicide drug. We represent 28 kids and young adults who died by this method AFTER Amazon knew its main use was suicide. In court, Amazon’s lawyers scoffed that “only 28” deaths had occurred. We know that number is the tip of the iceberg. But also, WTF: these people were loved and are missed. One death is way too many.

Mental Health Impacts of Tech-Facilitated Violence

Large online retailers are not the only tech platforms facilitating suicides or increasing suicide risk through their defective products. For 11+ years, we’ve fought for clients who have suffered abuse because of the weaponization of dating apps, social media platforms, and other tech products. We’ve seen victims’ mental health collapse when intimate images were shared and posted without consent, when online harassment and stalking consumed their daily lives, and when they watched their children in pain after enduring unimaginable predatory acts.

When survivors meet with us, they’re usually in the middle of a crisis or seeking accountability after having their lives destroyed. More often than not, they describe experiencing extreme depression, anxiety, PTSD, and/or suicidal ideation or attempts.

AI and Suicidality

We’ve also been sounding the alarm for years on how AI damages people’s lives, specifically through our work with victims of image-based sexual abuse (IBSA) harmed by the rise of deepfakes. We’re now seeing an alarming link between AI and suicide, especially as people form relationships with chatbots that lack proper safety guardrails and protections. With the White House hellbent on accelerating AI innovation and eliminating any policy actions that might delay America’s quest to win the AI race, AI technology will continue advancing at breakneck speed without the time needed to assess potential consequences and, ultimately, catastrophic harms.

Some of the ways in which chatbots directly contribute to increased suicide risk:

  • Encouragement and manipulation: Unregulated AI chatbots have been linked to suicides in multiple, highly publicized cases. For example, in 2024, Florida mom Megan Garcia lost her son to suicide in what she believes was the tragic result of his intense relationship with an AI chatbot he “met” through the platform Character.AI. According to Garcia, her son was messaging with the bot in the moments before he died. Character.AI markets its technology as “AI that feels alive,” but Garcia felt there were no proper safety measures in place to prevent her son from developing an inappropriate relationship with a chatbot. He withdrew from his family, stopped doing well in school, quit his basketball team…all because of his infatuation with the chatbot. Garcia’s lawsuit also claims that when her son began expressing thoughts of self-harm to the chatbot, the platform did not adequately respond.
  • Inappropriate or unreliable responses: Studies have found that when faced with suicidal content, AI models can give unreliable or inconsistent responses. Some responses have been incredibly harmful, while others fail to offer any help, guidance, or resources.
  • Perpetuating “AI psychosis”: Another growing concern is “AI psychosis,” in which users form intense, isolated attachments to their AI companions. Some become delusional after prolonged interactions with their chatbots, leading to psychiatric hospitalizations and even suicide attempts.

We anticipate that Section 230 will remain a roadblock in our fight against the catastrophic harms facilitated by AI technology, and that passing legislation to protect users and implement ethical safeguards will be an incredible challenge (see our previous blog re: the AI moratorium and the current administration’s focus on winning the global race to advance AI).

We will continue to fight for those who have suffered from tech-facilitated violence. This includes advocating for legislation that will properly implement ethical safeguards, thoroughly assess potential consequences, and ensure protections for everyone, including the most vulnerable among us.

Important Resources

If you or someone you know needs help, please call the 988 Suicide and Crisis Lifeline at 988 or visit 988lifeline.org. They also have an online chat option at chat.988lifeline.org. You can also reach the Crisis Text Line by texting TALK to 741741.

We MUST recognize that of the dozens of families we have spoken to who have lost their loved ones to suicide, a high percentage told us their loved one identified as transgender or nonbinary. Our LGBTQ+ community deserves the same support – please visit thetrevorproject.org or call The Trevor Project’s 24/7 suicide hotline for LGBTQ youth at (866) 488-7386.