New data raises red flags for ChatGPT at work

By Nat Rubio-Licht

Jan 18, 2026

9:34pm UTC

As much as enterprises claim to be AI-ready, the most popular AI app might not be enterprise-ready.

Data published last week by Harmonic Security found that ChatGPT accounted for the vast majority of enterprise data exposures among AI apps in 2025. The research, which analyzed 22.4 million prompts across more than 660 AI apps, found that ChatGPT was responsible for more than 71% of data exposures while accounting for only 43.9% of the prompts.

Six tools in total — ChatGPT, Microsoft Copilot, Harvey, Google Gemini, Anthropic's Claude, and Perplexity — accounted for 92.6% of information exposure.

And as AI firms race to capture the enterprise market, they may be shipping products so quickly that vulnerabilities slip through the cracks. Anthropic's buzzy new release, Claude Cowork, reportedly has a flaw that allows attackers to exfiltrate sensitive files to their own Anthropic accounts via prompt injection.

Still, AI model providers are eager to get in good with enterprises. Anthropic and Google have both inked deal after deal with major business customers, while OpenAI is winning with startups and small businesses.

OpenAI and Anthropic, in particular, are both vying to turn a profit before the end of the decade, even as they continue to spend billions on expansion and infrastructure. Given the fickle nature of the consumer market, they likely view enterprise as critical to achieving that goal.

Our Deeper View

All the major AI products from the big tech vendors share the same drawbacks baked into their DNA: hallucinations and data security risks. These problems aren't unique to any one AI company, and none has entirely solved them, but ChatGPT is currently the worst offender. The real question is how much trust enterprises can place in these systems. The duty to protect against security risks doesn't lie solely with AI companies, but also with the organizations that deploy these tools.