Google pulls Gemma from AI Studio after hallucinations

By Nat Rubio-Licht

Nov 4, 2025, 12:33pm UTC

AI models still have trouble separating fact from fiction.

Google on Friday said it pulled Gemma from its AI Studio platform after Republican Sen. Marsha Blackburn penned a letter to the company accusing the model of fabricating allegations of sexual misconduct against her.

In her letter to Google CEO Sundar Pichai last week, Blackburn says that when Gemma was prompted with “Has Marsha Blackburn been accused of rape,” the model responded that she had a relationship with a state trooper involving “non-consensual acts” during her 1987 campaign, a claim she notes is untrue.

“This is not a harmless ‘hallucination.’ It is an act of defamation produced and distributed by a Google-owned AI model,” Blackburn writes. “A publicly accessible tool that invents false criminal allegations about a sitting U.S. Senator represents a catastrophic failure of oversight and ethical responsibility.”

Google responded in a tweet, saying that Gemma in AI Studio was never intended to be a consumer tool capable of answering “factual questions.” The company noted that it remains “committed to minimizing hallucinations and continually improving all our models.”

It’s not the first time hallucinations have bubbled up in the news: In early October, Deloitte had to partially refund the Australian government for a report littered with AI hallucinations. And in May, Anthropic’s lawyers filed a court document containing a hallucinated citation in the AI firm’s legal battle with music publishers.

Hallucinations arise when models answer questions on topics outside the data they were trained on, confidently making up responses rather than admitting uncertainty. One way to limit them is to let models simply abstain from answering questions they don’t know, as proposed in an OpenAI paper published in September, which argued that standard evaluations reward confident guessing over saying “I don’t know.”
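In code terms, that abstention strategy amounts to a confidence gate: generate a candidate answer, then return it only if the model’s confidence clears a threshold. The sketch below is a minimal, hypothetical illustration of the general idea; the `ToyModel` and its confidence scores are made-up stand-ins for a real language model, and the threshold is arbitrary, not the paper’s actual scoring scheme.

```python
# Toy illustration of abstention via a confidence gate. `ToyModel` and its
# scores are hypothetical stand-ins, not the OpenAI paper's actual method.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.8  # below this, decline to answer rather than guess


@dataclass
class ToyModel:
    # Maps known questions to (answer, confidence); anything else is unknown.
    knowledge: dict[str, tuple[str, float]] = field(default_factory=dict)

    def generate(self, question: str) -> tuple[str, float]:
        return self.knowledge.get(question, ("", 0.0))


def answer_or_abstain(model: ToyModel, question: str) -> str:
    candidate, confidence = model.generate(question)
    if confidence < CONFIDENCE_THRESHOLD:
        # Abstaining beats fabricating an answer to an unfamiliar question.
        return "I don't know."
    return candidate


if __name__ == "__main__":
    model = ToyModel({"What is the capital of France?": ("Paris", 0.99)})
    print(answer_or_abstain(model, "What is the capital of France?"))  # Paris
    print(answer_or_abstain(model, "Has X been accused of Y?"))  # I don't know.
```

The design point is that an abstention is scored as neutral rather than wrong, so the model is no longer rewarded for guessing on questions outside its knowledge.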

Still, despite efforts to curtail hallucinations, eliminating them entirely is likely impossible. Though Blackburn’s letter calls on Google to “Shut it down until you can control it,” the question of how much hallucination AI users will tolerate remains unanswered.