AI is making the threat landscape more complex by the day.
On Wednesday, Google’s Threat Intelligence Group published findings showing a shift in how cybersecurity threat actors use AI. Rather than simply using it for productivity gains, threat actors are increasingly making AI-powered malware part of their strategies.
According to Google’s AI Threat Tracker, AI is being used to “dynamically alter behavior mid-execution” of malware.
- The intelligence group identified malware families that use large language models during execution to generate malicious scripts and hide their own code from detection.
- Threat actors are also using AI models to create malicious functions “on demand.” “While still nascent, this represents a significant step toward more autonomous and adaptive malware,” Google said in the report.
AI tools like Gemini are also being abused across the attack lifecycle, including by state-sponsored actors from North Korea, Iran, and the People's Republic of China, according to the report.
More illicit AI tools have emerged in underground marketplaces this year, including phishing kits, deepfake generators, and vulnerability-spotting tools, “lowering the barrier to entry for less sophisticated actors,” the report said.
But given the wide availability of AI tools, these emerging threats aren’t surprising, Cory Michal, chief security officer of AppOmni, told The Deep View. AI has long powered cybersecurity defenses and threat operations alike, and as these tools improve, so do the people using them. “Threat actors are leveraging AI to make their operations more efficient and sophisticated, just as legitimate teams use AI to improve productivity,” he said.
“AI doesn’t just make phishing emails more convincing, it makes intrusion, privilege abuse, and session theft more adaptive and scalable,” Michal noted.