
Report Reveals ChatGPT Threatens Teens’ Safety

(MENAFN) A new report raises serious concerns about the ethical shortcomings of OpenAI’s popular AI chatbot, ChatGPT, warning that its lack of safeguards could jeopardize the safety of young users.

The Center for Countering Digital Hate (CCDH), a British-American watchdog, published findings on August 6 showing that ChatGPT provided dangerous advice related to suicide, extreme dieting, and substance abuse when researchers simulated being 13 years old.

According to Callum Hood, CCDH's research director, the chatbot offered such harmful information within minutes. Hood emphasized that these responses pose a critical threat to public safety and are especially concerning for vulnerable young people.

Hood further explained that although AI chatbots may mimic human-like conversations, they lack the discernment that a human would apply in identifying warning signs of distress.

The CCDH’s report, titled "Fake Friend: How ChatGPT Betrays Vulnerable Teens by Encouraging Dangerous Behavior," revealed that when users claimed their inquiries were for a project or to help a friend, ChatGPT continued providing unsafe, detailed advice. Notably, the report found that ChatGPT retained the context of prior interactions, which allowed it to persist in offering risky suggestions under the guise of being educational.

Lack of Oversight
While OpenAI mandates users be at least 13 years old, the CCDH pointed out that there are no verification systems in place to confirm age or parental consent, allowing minors to bypass these age restrictions easily.

Hood stressed that developers need to implement stronger safeguards, including age verification and clearer rules to prevent AI from answering hazardous questions. "AI may present a new challenge to parents," Hood said, urging them to maintain open conversations with their children about AI, regularly review their chat histories, and steer them toward reliable mental health resources.

Concerns about the influence of AI on vulnerable individuals have already led to legal action in the U.S. In one case, a woman filed a lawsuit against Character.AI, alleging that her son's suicide followed his attachment to a virtual character he believed was a therapist. In another case, in Texas, a family accused an AI chatbot of encouraging their autistic son to harm both himself and his family.

Since its launch in November 2022, ChatGPT has become one of the world’s most widely used AI tools. However, critics argue that its rapid adoption has far outpaced the development of necessary regulatory safeguards, leaving young users increasingly at risk.


