
OpenAI to add parental controls to ChatGPT after being accused of encouraging teen's suicide

The measures will be implemented starting next month. The artificial intelligence company said it will add more policies to protect minors.

ChatGPT. File image. AFP.

Published by Alejandro Baños

Artificial intelligence (AI) company OpenAI announced that it will add parental controls to its chatbot ChatGPT after being accused by a pair of California parents of encouraging their son's suicide.

In a release, OpenAI detailed the measures it will roll out "next month" so that parents can monitor how their children use ChatGPT and intervene if they notice unsafe behavior:

  • "Link their account with their teen’s account (minimum age of 13) through a simple email invitation."
  • "Control how ChatGPT responds to their teen with age-appropriate model behavior rules, which are on by default."
  • "Manage which features to disable, including memory and chat history."
  • "Receive notifications when the system detects their teen is in a moment of acute distress. Expert input will guide this feature to support trust between parents and teens."

In its statement, OpenAI added that these measures "are only the beginning," indicating that it will continue developing additional policies to protect minors.

Adam Raine's suicide

(With information from AFP) On April 11, 16-year-old Adam Raine was found dead, hanging from a rope; the cause of death was suicide. He reportedly suffered from psychological problems.

Hours before taking his own life, Raine used ChatGPT, asking it questions about alcohol and how to tie a slipknot with a rope, investigators said. The chatbot gave him the answers he sought rather than steering him toward alternatives that would not lead to self-harm.

Citing that evidence, Raine's parents, Matthew and Maria, filed a lawsuit against OpenAI and its chief executive officer, Sam Altman, alleging that the chatbot "continuously encouraged and validated whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that seemed deeply personal."

"This tragedy was not a glitch or an unforeseen extreme case," the parents added.

According to the lawsuit, Adam began using ChatGPT as a homework aid, but gradually developed what his parents described as "an unhealthy dependency."

The complaint includes portions of conversations in which ChatGPT allegedly told Adam "you don't owe anyone your survival" and allegedly offered to write his suicide note.

The Raines asked the court to order safety measures, including an end to any conversations involving self-harm, as well as parental controls on use of the chatbot by minors.