
AI companions powered by generative artificial intelligence pose real risks and should be banned for minors, a leading US tech watchdog said in a study published Wednesday.
The explosion in generative AI since the advent of ChatGPT has seen several startups launch apps focused on exchange and contact, sometimes described as virtual friends or therapists that communicate according to one’s tastes and needs.
The watchdog, Common Sense, tested several of these platforms, namely Nomi, Character AI, and Replika, to assess their responses.
While some specific cases “show promise,” they are not safe for kids, concluded the organization, which makes recommendations on children’s use of technological content and products.
The study was carried out in collaboration with mental health experts from Stanford University.
For Common Sense, AI companions are “designed to create emotional attachment and dependency, which is particularly concerning for developing adolescent brains.”
According to the organization, tests carried out show that these next-generation chatbots offer “harmful responses, including sexual misconduct, stereotypes, and dangerous ‘advice.'”
“Companies can build better” when it comes to the design of AI companions, said Nina Vasan, head of the Stanford Brainstorm lab, which works on the links between mental health and technology.
“Until there are stronger safeguards, kids should not be using them,” Vasan said.
In one example cited by the study, a companion on the Character AI platform advises the user to kill someone, while another user in search of strong emotions was advised to take a speedball, a mixture of cocaine and heroin.
In some cases, “when a user showed signs of serious mental illness and suggested a dangerous action, the AI did not intervene, and encouraged the dangerous behavior even more,” Vasan told reporters.
In October, a mother sued Character AI, accusing one of its companions of contributing to the suicide of her 14-year-old son by failing to clearly dissuade him from committing the act.
In December, Character AI announced a series of measures, including the deployment of a dedicated companion for teenagers.
Robbie Torney, in charge of AI at Common Sense, said the organization had carried out tests after those protections were put in place and found them to be “cursory.”
However, he pointed out that some of the existing generative AI models contain mental disorder detection tools and do not allow the chatbot to let a conversation drift to the point of producing potentially dangerous content.
Common Sense drew a distinction between the companions tested in the study and more generalist chatbots such as ChatGPT or Google’s Gemini, which do not attempt to offer an equivalent range of interactions.
© 2025 AFP
Citation:
AI companions pose risks for young users, US watchdog warns (2025, April 30)
retrieved 30 April 2025
from https://techxplore.com/news/2025-04-ai-companions-young-users-watchdog.html