Report finds safety gaps in children's AI toys

Jan 22, 2026

10:00am UTC

As AI introduces a new era of Tamagotchis, some experts are questioning whether these models are ready to be part of playtime.

On Thursday, advocacy organization Common Sense Media released an assessment finding that AI-powered toys could pose risks to young children, including fostering emotional attachments, collecting children's data, and suffering breakdowns in their safety guardrails.

The risk assessment primarily tested three AI companion toys, Grem, Bondu and Miko 3, using test accounts set to ages between six and 13. The report states that these toys shared common problems, including:

  • Engagement-focused design that could displace time spent “engaging in more beneficial relationships and activities” and create emotional dependency that drives ongoing financial commitments through subscription models;
  • Extensive data collection on children, such as voice recordings, transcripts and behavioral data;
  • Risks of inappropriate content breaking through guardrails. Common Sense Media’s testing of these toys found that 27% of their outputs were not appropriate for kids, including content involving violence, drugs or “mature topics.”

Parents, meanwhile, are concerned about these risks. Common Sense Media polled more than 1,000 parents, finding that 80% were concerned about cybersecurity and data collection, 79% about setting usage limits, and more than half about these devices providing emotional support to children.

Michael Robb, head of research at Common Sense Media, said in a statement that these devices can “blur the line between play and real relationships” when children are at an impressionable age.

“Parents have good reason to be cautious about technologies that may replace human interaction or collect sensitive information without clear developmental benefits,” said Robb.

Our Deeper View

CES this past month proved that AI toys are a growing point of interest for product makers, with robots proliferating on the show floor, including ones that could read kids’ bedtime stories or act as personal pets. But AI chatbots alone already present risks to young people, as evidenced by the string of lawsuits against OpenAI, Character.AI and Google alleging that their language models played a part in driving users to suicide. AI is still AI. Even encased in fuzzy fabric and plastic googly eyes, it faces the same fundamental challenges, and those issues are magnified and compounded when the users are children.