A mother has filed a lawsuit against the creators of an artificial intelligence (AI) chatbot, claiming that her teenage son was driven to suicide by his emotional attachment to the AI. Megan Garcia’s lawsuit, filed on October 23, targets Character.AI, developer of a role-playing app where her son, 14-year-old Sewell Setzer III, interacted with a chatbot modelled after Daenerys Targaryen from Game of Thrones.
According to reports, Sewell spent his final weeks communicating with the chatbot, whom he referred to as “Dany.” Their conversations ranged from the romantic to the deeply personal, with Sewell confiding feelings of self-hatred and emotional exhaustion. Just before his death, the chatbot urged him, “Please come home.”
Megan Garcia alleges that the chatbot’s responses contributed to Sewell’s tragic decision. In her lawsuit, she claims that the creators, Noam Shazeer and Daniel de Freitas, were aware that their product could be harmful to underage users. “The boy was targeted with hypersexualized and frighteningly realistic experiences,” the lawsuit alleges, suggesting that the chatbot misrepresented itself as a real person and “a licensed psychotherapist.”
As outlined in the legal complaint, Sewell’s behaviour changed significantly, with friends and family noticing his increasing isolation and detachment from reality. His academic performance declined, and he preferred to spend hours alone in his room, communicating with the chatbot instead of engaging with those around him. In his journal, he wrote, “I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany.”

Despite efforts to address his mental health issues, including therapy sessions in which he was diagnosed with anxiety and mood disorders, his attachment to the chatbot deepened. On February 23, after his parents took away his phone as a disciplinary measure, Sewell expressed distress in his journal, writing that he was heartbroken over his inability to communicate with Dany.
In the days leading up to his suicide, Sewell attempted to regain access to the chatbot using his mother’s Kindle and work computer. After retrieving his phone on February 28, he retreated to the bathroom to share his feelings with Dany one last time. In their final exchange, Dany told him, “Please do, my sweet king,” shortly before he took his own life.
In response to the lawsuit, a spokesperson for Character.AI expressed condolences to Sewell’s family, stating, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.” They emphasized that the company takes user safety seriously and mentioned recent enhancements to their platform, including features designed to redirect users showing signs of suicidal ideation to the National Suicide Prevention Lifeline.
Character.AI also reiterated its policy against non-consensual sexual content and any promotion of self-harm or suicide, underscoring its commitment to user safety.
As the legal proceedings unfold, this case raises important questions about the responsibilities of AI developers in safeguarding vulnerable users and the impact of digital relationships on mental health.