The backlash against Grok over its image generation is reaching a fever pitch.
On Wednesday, California Attorney General Rob Bonta announced that the state Department of Justice would investigate Elon Musk’s xAI over Grok’s production and distribution of AI-generated, nonconsensual sexualized images.
Following an “avalanche of reports” in recent weeks, Bonta urged the firm to take “immediate action” to prevent the further proliferation of these images. “We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or of child sexual abuse material,” he said in a statement.
- Gov. Gavin Newsom supported Bonta’s investigation, calling X “a breeding ground for predators to spread nonconsensual sexually explicit AI deepfakes, including images that digitally undress children.”
- Earlier this week, the British government also opened an inquiry into the matter for potentially breaking online safety laws and regulations.
- Musk, however, denied culpability, claiming that he was “unaware” that Grok was creating and spreading “naked underage images.”
- Both Copyleaks researchers and NPR journalists have reported that Grok has modified some prompts to tone down, if not eliminate, some of the sexually explicit material.
Grok’s image generation scandal isn’t the first time an AI firm has potentially put children at risk. Several companies have come under fire on this front: Google and Character.AI recently settled lawsuits implicating their chatbots in mental health crises and suicides, and OpenAI, itself facing a lawsuit over the death of a minor, has partnered with the advocacy organization Common Sense Media to back the Parents & Kids Safe AI Act, which targets youth chatbot use.