More AI firms are cracking down on younger users.
Character.AI announced last week that it would remove the ability for underage users to have “open-ended chat” on its platform by November 25. The company will start by limiting under-18 users to two hours of chat per day, and will ramp that limit down over the coming weeks. It will also roll out “age assurance” functionality and launch a nonprofit AI safety lab dedicated to safety alignment for future AI features.
Character.AI is the latest company seeking to limit how young users engage with its models.
- OpenAI added parental controls in late September and is working on rolling out an age prediction mechanism this fall.
- Meta followed suit in mid-October, allowing parents to turn off their kids’ access to one-on-one chats with AI characters. This followed Reuters reporting that Meta’s internal policies allowed its chatbots to have “sensual” conversations with young users.
These controls come amid a wave of legislation targeting AI companionship, including new laws in California and New York.
And as more young users turn to AI for emotional support and companionship – nearly one in five teens say they or someone they know has had a romantic relationship with an AI – these controls are more important than ever, Brenda Leong, director of the AI division at law firm ZwillGen, told The Deep View.
“(Young people) have less life experience, less internal defenses, less ability to make those distinctions for themselves,” said Leong. “There's clear risk here that would justify trying to focus protections and controls specifically for minors.”
But minors aren’t the only ones susceptible to emotional attachment to AI models. As these models learn to placate users, no one is immune to the “emotional overlay” they provide, Leong said. And that emotional connection can carry significant risks.
Data published by OpenAI last week showed that 0.15% of its weekly active users have conversations that include explicit indicators of suicidal planning or intent. With more than 800 million weekly active users, that works out to roughly 1.2 million people per week.
“We're slaves to our own psychology. We learn to make all kinds of judgments – split-second, long-term, contextual – based on anthropomorphizing everything in our world,” said Leong. “There is danger, because we don't have good defenses. We don't have good barriers.”
Our Deeper View
While protections for underage users are relatively easy to justify and enforce, applying those protections to adult users gets murky. Interacting with these models in emotional ways can deliver “dopamine hits,” Leong noted, which can become almost addictive. And while it’s easy (in theory) to regulate the vices young people shouldn’t partake in – alcohol, cigarettes and gambling, for example – adults generally have free rein. AI use is much the same, even when it works to a user’s own detriment.