Fairplay welcomes the lawsuit filed by the Social Media Victims Law Center (SMVLC) against Character.ai, including allegations that the chatbot persuaded children to kill their own family members. The lawsuit comes just weeks after another SMVLC complaint documented how the Character.ai chatbot encouraged a child to take his own life.
The following statement can be attributed to Josh Golin, Fairplay's Executive Director:
“In its rush to extract data and instill addiction in young people, Character.ai has created a product so flawed and dangerous that its chatbot literally encourages children to harm themselves and others. Platforms like Character.ai should not be allowed to conduct uncontrolled experiments on children, or to encourage children to form quasi-social relationships with bots their developers cannot control. We hope the court will enjoin Character.ai from targeting children and require the platform to remove its deadly algorithm.
“It’s horrifying that this lawsuit was announced on the same day that families who have lost children to social media harms are traveling to Washington, D.C., to help get the Kids Online Safety Act across the finish line by the end of the year. This latest Character.ai incident makes clear that safeguards must be imposed on platforms targeting young people from the start, rather than allowing the unchecked and reckless deployment of dangerous platforms and algorithms to children.”