Major AI firms are sounding the alarm on secondhand models.
On Thursday, OpenAI sent a memo to US lawmakers warning that Chinese AI firm DeepSeek is using distillation techniques to “free-ride” on the capabilities of OpenAI’s models, as well as those of other frontier labs. The firm says DeepSeek is using “obfuscated methods” to undercut OpenAI’s defenses.
The OpenAI memo claims that Chinese LLM providers and university research labs are using its models in ways that would be “highly beneficial” for creating competitor models through distillation.
- OpenAI also has observed accounts associated with DeepSeek employees using methods to “circumvent” access restrictions.
- Although OpenAI has added safeguards to prevent this distillation, the company says evasion techniques have grown more sophisticated in response.
- Although distillation is a commonly used technique in AI training, OpenAI claims that doing it under the radar can produce models missing key guardrails, leading to “dangerous outputs in high-risk domains.”
“It’s important to note that there are legitimate use cases for distillation … However, we do not allow our outputs to be used to create imitation frontier AI models that replicate our capabilities,” OpenAI said in the memo.
And OpenAI isn’t alone in calling out these risks. On Thursday, Google’s Threat Intelligence Group published a report detailing a flood of “commercially motivated” actors seeking to clone its flagship model, Gemini. The company said in the report that these actors are using “distillation attacks,” in which they prompt Gemini thousands of times as a means of learning how it works to bolster their own models.
Though Google didn’t call out any specific group in its report, the company said it “observed and mitigated frequent model extraction attacks from private sector entities all over the world and researchers seeking to clone proprietary logic.”
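At its core, the “distillation attack” both companies describe has two steps: repeatedly query a proprietary “teacher” model and log its responses, then train a cheaper “student” model to imitate those responses. The toy sketch below illustrates the shape of that pipeline; all function names are hypothetical, and a real attack would fine-tune a neural network on thousands of API transcripts rather than memorize a lookup table.

```python
# Toy illustration of model distillation. Names are hypothetical:
# real attacks query an LLM API at scale and fine-tune on the logs.

def teacher(prompt: str) -> str:
    """Stand-in for a proprietary model's API (the capability being copied)."""
    return prompt.upper()

def harvest(prompts):
    """Step 1: query the teacher repeatedly, logging every prompt/response pair."""
    return {p: teacher(p) for p in prompts}

def make_student(transcripts):
    """Step 2: fit a cheap imitator to the harvested transcripts."""
    def student(prompt: str) -> str:
        # Imitation models inherit only observed behavior -- which is why
        # OpenAI warns they can lack the teacher's guardrails entirely.
        return transcripts.get(prompt, "")
    return student

student = make_student(harvest(["hello", "free ride"]))
print(student("hello"))  # mimics the teacher on seen inputs
```

The defensive measures both labs mention (rate limits, access restrictions, abuse detection) target step 1: if harvesting at scale is blocked, the student has nothing to learn from.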