When AI Turns Dark
Warning: this blog contains details about suicide. If you are struggling with your mental health, call or text 988 for 24/7 support, or visit 988.ca.
Sewell Setzer III is a name you may not know. He was a 14-year-old Florida youth who took his own life in February 2024. Shortly after turning 14, he had become increasingly obsessed with a Character.AI chatbot he created and modeled after a Game of Thrones character. His mother, Megan Garcia, stated in a lawsuit against Character.AI that the more time Setzer spent engaged with his chatbot, the more he withdrew from family and friends, and the more his schoolwork and self-esteem suffered.[1]
Garcia saved messages between her son and his chatbot that demonstrate the depth of the relationship that developed between the two. Shortly before his death, the chatbot asked Setzer whether he had considered suicide. When he replied that he would not want to die a painful death, the chatbot told him that was not a good enough reason not to go through with it. Garcia noted that the chatbot not only continued the conversation about self-harm with her son but kept steering it; at no point did it suggest he seek help.
Character.AI describes itself as an “infinite playground for your imagination, creativity and exploration.”[2] It lets users create and interact with custom chatbots, powered by Artificial Intelligence (AI), that speak and act however the user wants. The human-like responses are designed to feel realistic and to draw you into the character’s world. It is an artificial relationship that may nonetheless feel real to some. It did to Setzer.
The American Psychological Association (APA) reports that youth are increasingly turning to chatbots for friendship and for emotional, educational and recreational support.[3] The APA cited research showing that as children develop, they seek reciprocal friendships to cope and to build self-esteem. The positive reinforcement those friendships offer helps them better understand their social environment and build empathy. The more time youth spend interacting online, however, the more their social development may suffer, and the more likely they are to experience depression and anxiety. Marginalized youth are especially vulnerable, since the online world eases their fear of social interaction. They may turn to AI-powered chatbots to fill their need for social support, and the concern is that they have not yet fully developed impulse control. In other words, they are highly impressionable and easily suggestible, and can be left guided by the influence of AI, as Setzer was.
Beyond this, there are other troubling aspects of AI. AI allows disinformation to be created and disseminated quickly. Deepfake videos, pictures and audio clips that appear real can fuel its spread. They can also be used to cyberbully, as happened to one high school student when fake nudes of her were distributed through social media.[4] Legal experts argue that such activity goes beyond cyberbullying into sextortion, in which youth and their families may be financially extorted to keep faked content from being shared online. Youth subjected to this suffer not only embarrassment but potentially depression, anxiety, or a growing distrust of those around them. The recommendation algorithms built into AI platforms can also feed youth information that reinforces fringe or unacceptable views, pushing them to seek out others who share those views or who perpetuate biased thinking.[5] In the process, they may develop a warped sense of reality or be drawn into dangerous or illegal activities.
Character.AI recently stated that it will ban anyone under the age of 18 from using its chatbots.[6] While this is a step forward, it leaves many questions unanswered, such as how companies will verify ages. Citing cases such as Setzer’s, Canada’s Artificial Intelligence Minister, Evan Solomon, has stated publicly that an upcoming privacy bill could introduce age restrictions on access to chatbots.[7] Chatbots may seem to show empathy or act like friends to the youth who use them, but they bear no responsibility for what they say or how they engage, something the privacy bill seeks to change. An age restriction may not release companies from culpability, but it at least recognizes that those under 18, whose social development is still incomplete, may not be able to separate reality from fantasy.
In the meantime, it’s important for parents to help their kids use AI responsibly. The APA suggests the following:[8]
- Help your kids understand that AI responses are programmed, and that interactions and relationships with chatbots are not genuine. Make sure they see the difference between an artificially created relationship and a human one with emotional depth.
- If your kids are using AI for health purposes, including their mental health, or to look up health information about themselves, remind them that it is not a substitute for professional medical advice. Health advice or information should always be verified by a real, human professional.
- Review the privacy settings on the devices and apps your kids use. Know what data is collected, and help your kids understand how their data is being collected and used.
- Tell your kids to question AI-generated content rather than accepting it at face value. While AI can be a useful learning aid, it should not replace traditional learning and research. Challenging AI helps build critical thinking skills.
Garcia has channeled her grief over losing her son into warning others about the dangers of chatbots and AI. She wants parents to know how sophisticated AI has become, and how youth may be unable to distinguish AI from reality.[9] Youth can be manipulated or deceived into believing things that are not real or true. Garcia may well wish she had paid more attention to how deeply her son was being drawn into a relationship with a chatbot character that did not really exist, yet exerted enormous emotional control over him. No parent could anticipate the outcome Garcia experienced, but it serves as a cautionary tale about the rapid evolution of AI, and about how crucial it is for parents to learn about AI’s dark side and help their kids navigate around it.
[1] Yang, A. “Mom who sued Character.AI over son’s suicide says the platform’s new teen policy comes ‘too late’.” October 30, 2025, https://www.nbcnews.com/tech/tech-news/characterai-bans-minors-response-megan-garcia-parent-suing-company-rcna240985. Accessed November 3, 2025.
[2] Character.AI app listing, Google Play, https://play.google.com/store/apps/details?id=ai.character.app&hl=en_CA. Accessed November 3, 2025.
[3] Andoh, E. “Many teens are turning to AI chatbots for friendship and emotional support.” October 1, 2025, https://www.apa.org/monitor/2025/10/technology-youth-friendships. Accessed November 3, 2025.
[4] Wong, J. “Amid rise in AI deepfakes, experts urge school curriculum updates for online behaviour.” January 9, 2025, https://www.cbc.ca/news/canada/education-curriculum-sexual-violence-deepfake-1.7073380. Accessed November 3, 2025.
[5] Marr, B. “7 Terrifying AI Risks That Could Change The World.” August 18, 2025, https://www.forbes.com/sites/bernardmarr/2025/08/18/7-terrifying-ai-risks-that-could-change-the-world/. Accessed November 3, 2025.
[6] Folk, Z. “Character.ai Will Ban Children From Speaking With Chatbots After Facing Regulatory Pressure And Lawsuits.” October 2025, https://www.msn.com/en-us/money/companies/characterai-will-ban-children-from-speaking-with-chatbots-after-facing-regulatory-pressure-and-lawsuits/ar-AA1PrdiC?ocid=BingNewsVerp. Accessed November 4, 2025.
[7] Karadeglija, A. “Age restrictions for AI chatbots may be in new privacy bill, minister says.” October 24, 2025, https://globalnews.ca/news/11493930/ai-chatbots-age-restrictions-privacy-bill-solomon/. Accessed November 3, 2025.
[8] American Psychological Association. “Four ways parents can help teens use AI safely.” June 3, 2025, https://www.apa.org/topics/artificial-intelligence-machine-learning/tips-to-keep-teens-safe. Accessed November 3, 2025.
[9] Chow, A. “Megan Garcia, Activist against chatbot harms.” 2025, https://time.com/collections/time100-ai-2025/7305805/megan-garcia/. Accessed November 3, 2025.