AI add-ons steal chat data from 900K users

By Aaron Mok

Jan 7, 2026

12:30pm UTC

Looking for an AI extension for your web browser? You may want to think twice.

In late December, cybersecurity firm OX Security identified two Google Chrome extensions that secretly siphoned user conversations with popular AI chatbots to attacker-controlled servers. The extensions — “Chat GPT for Chrome with GPT‑5, Claude Sonnet & DeepSeek AI” and “AI Sidebar with Deepseek, ChatGPT, Claude and more” — add a sidebar to Chrome that lets users interact with multiple frontier models directly in their browser. The malware ran silently in the background, harvesting browser activity and chatbot conversations every 30 minutes, an attack known as data exfiltration.

Together, these extensions have been downloaded more than 900,000 times, exposing a trove of sensitive chatbot conversations, including personal information, company secrets, and customer details, to an unknown threat actor.

“Threat actors holding this information can use it for a variety of purposes like stalking, doxxing, selling information, corporate espionage, and extortion,” Moshe Siman Tov Bustan, a security researcher team lead at OX Security, told The Deep View.

Both extensions once carried a “Featured” label in the Chrome Web Store, but they are impostors that mimic the legitimate AITOPIA extension under nearly identical names.

According to OX Security’s assessment, the AITOPIA extension keeps user queries private and processes them on Amazon-hosted infrastructure as part of its normal operations. The malicious lookalikes, however, claim to collect “anonymous, non-identifiable analytics data,” but instead exfiltrate user conversations with ChatGPT and DeepSeek.

OX Security reported the extensions to Google on December 29. As of January 6, they remain available on the Chrome Web Store. Bustan urges users to uninstall them immediately.

To avoid malware, he recommends being cautious about extensions that request broad permissions and checking metadata — the developer’s email, website, and privacy policy — to spot anything that doesn’t pass a gut check.
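That permissions check can be partly automated. The sketch below, a hypothetical example rather than an OX Security tool, scans a Chrome extension's manifest.json for permissions that grant broad access; the specific "broad" list here is an illustrative choice, not an official Google taxonomy.

```python
# Illustrative sketch: flag permissions in a Chrome extension's
# manifest.json that grant broad access to browsing data. The risk
# list below is an assumption for demonstration, not Google's taxonomy.

BROAD_PERMISSIONS = {
    "<all_urls>",   # host access to every website
    "tabs",         # read URLs and titles of all open tabs
    "webRequest",   # observe network traffic
    "scripting",    # inject scripts into visited pages
    "cookies",      # read site cookies
    "history",      # read browsing history
}

def flag_broad_permissions(manifest: dict) -> list[str]:
    """Return the broad permissions a manifest requests, sorted."""
    requested = set(manifest.get("permissions", []))
    requested |= set(manifest.get("host_permissions", []))
    return sorted(requested & BROAD_PERMISSIONS)

# Hypothetical manifest resembling an AI-sidebar extension's requests.
sample = {
    "name": "AI Sidebar (example)",
    "permissions": ["storage", "tabs", "scripting"],
    "host_permissions": ["<all_urls>"],
}

print(flag_broad_permissions(sample))  # ['<all_urls>', 'scripting', 'tabs']
```

An extension that legitimately needs only a sidebar has little reason to request host access to every URL, so a non-empty result here is a cue to read the privacy policy closely before installing.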

Our Deeper View

The recent AI exfiltration attacks offer a glimpse into the cybersecurity storms ahead. As AI evolves in lockstep with cyber threats, attackers will keep finding new ways to compromise internet security. Because AI lowers the bar for developing complex malware, Bustan has no doubt that AI-enabled attacks will become “much more common.” Prompt injection, phishing scams, and other emerging exploits threaten individuals and enterprises alike. Organizations will need better techniques and stronger defenses to keep pace with the new attack vectors that AI-accelerated malware development makes possible.