Can ChatGPT age checks keep kids safe?

By Nat Rubio-Licht

Jan 20, 2026

10:07pm UTC

At long last, OpenAI is rolling out its way to judge whether or not you are smarter than a fifth grader.

On Tuesday, the company announced the launch of its age prediction system on ChatGPT consumer plans. The tech will help determine whether an account belongs to someone under 18, enabling proper safeguards for those users. It will roll out in the EU in the coming weeks to account for regional laws.

Teens who identify themselves as underage will automatically have safeguards in place. For everyone else, OpenAI’s model uses a combination of behavioral and account signals, such as how long an account has existed, typical usage times of day, activity patterns over time, and how old the user claims to be.
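OpenAI hasn’t published how the model actually works, but as a rough illustration, here’s a minimal sketch of how signals like those above could feed a classifier that falls back to a restricted experience when confidence is low. Every signal name, weight, and threshold below is invented for illustration, not drawn from OpenAI.

```python
# Purely illustrative sketch: OpenAI has not disclosed the internals of
# its age prediction model. The signals, weights, and threshold below
# are invented to show the general shape of a signal-based classifier
# that defaults to a safer experience when confidence is low.

from dataclasses import dataclass
from typing import Optional


@dataclass
class AccountSignals:
    stated_age: Optional[int]   # self-reported age, if the user gave one
    account_age_days: int       # how long the account has existed
    late_night_ratio: float     # share of activity late at night (hypothetical signal)
    school_hours_ratio: float   # share of activity during school hours (hypothetical signal)


def classify_account(s: AccountSignals, threshold: float = 0.7) -> str:
    """Return 'minor', 'adult', or 'safer-default' for an account."""
    # Self-identified minors get safeguards automatically, per OpenAI's post.
    if s.stated_age is not None and s.stated_age < 18:
        return "minor"

    # Combine behavioral signals into a rough "likely an adult" score.
    # These weights are placeholders, not anything OpenAI has published.
    score = 0.0
    if s.account_age_days > 365:
        score += 0.4            # long-lived accounts carry more history
    score += 0.3 * (1.0 - s.school_hours_ratio)
    score += 0.3 * s.late_night_ratio

    if score >= threshold:
        return "adult"          # confident enough to skip restrictions
    # Low confidence or incomplete information: default to the safer,
    # restricted experience, which the user can lift via ID verification.
    return "safer-default"


if __name__ == "__main__":
    new_user = AccountSignals(stated_age=None, account_age_days=30,
                              late_night_ratio=0.1, school_hours_ratio=0.6)
    print(classify_account(new_user))  # -> "safer-default"
```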

Accounts detected as belonging to minors will have limited exposure to sensitive content, such as graphic violence and sexual material. Users who are misidentified as under 18 can restore full access by taking a selfie through “Persona,” an identity-verification service.

“When we are not confident about someone’s age or have incomplete information, we default to a safer experience,” the company said in its blog post.

The company first announced this age prediction system in fall 2025 as part of a broader update to its policies regarding the safety of young users. The debut marks the latest in a string of initiatives aimed at minors:

  • In late September, the company announced Parental Controls for teen accounts on ChatGPT and Sora, which allow parents to link their accounts to their teens’ and customize the types of content their teens can generate. These controls also let parents set quiet hours, turn off memory, and opt out of model training.
  • And in early January, the company announced a partnership with advocacy organization Common Sense Media to support ballot initiatives that require companies to implement “age assurance technology” to protect young users.

OpenAI’s initiatives also follow months of turmoil and backlash over the risks that its flagship chatbot and others pose to minors. OpenAI alone is facing multiple lawsuits alleging that ChatGPT is culpable in emotional manipulation and in the deaths of multiple people by suicide. Google and Character.AI, meanwhile, settled a lawsuit with similar allegations earlier this month.

In an X post responding to Elon Musk about ChatGPT being linked to user deaths, OpenAI CEO Sam Altman said that almost one billion people use ChatGPT, some of whom are in “very fragile mental states.”

“It is genuinely hard; we need to protect vulnerable users, while also making sure our guardrails still allow all of our users to benefit from our tools,” Altman said.

Our Deeper View

While tech like this is a step in the right direction (and serves as damage control), the bigger question is whether it actually works. Today’s AI is still far from perfect, with a tendency to hallucinate and make mistakes. We’ve already seen that wreak havoc in age verification systems such as Roblox’s, which misidentified kids as adults and vice versa. The question remains whether these kinds of risks can be controlled at all while still letting young users harness the technology, and what responsibility these firms bear for safeguarding their systems.