New Malware Uses AI to Adapt During Attacks

State-backed hackers are for the first time deploying malware that uses large language models during execution, allowing it to dynamically generate malicious scripts and evade detection, according to new research.  Although cybersecurity experts have observed hackers using AI in recent years to do things like increase the number of victims they reach, researchers at Google said they recently observed malware "that employed AI capabilities mid-execution to dynamically alter the malware's behavior."  The trend should be considered a “significant step towards more autonomous and adaptive malware,” the report says.[1]

In June 2025, researchers found experimental dropper malware tracked as PROMPTFLUX that prompts an LLM to rewrite its own source code to evade detection.  PROMPTFLUX, which Google said it has taken steps to disrupt, appears to be in a testing phase and does not have the ability to compromise victim networks or devices, according to the report.

PROMPTSTEAL, used in June by Russia-linked APT28 (also known as BlueDelta, Fancy Bear and FROZENLAKE) against Ukrainian targets, uses an LLM to generate commands rather than having them hard-coded into the malware.  The incident marked Google's "first observation of malware querying a LLM deployed in live operations," the report said.

While researchers called these methods experimental, they said the techniques show how threats are changing and how threat actors can “potentially integrate AI capabilities into future intrusion activity.”  “Attackers are moving beyond ‘vibe coding’ and the baseline observed in 2024 of using AI tools for technical support,” the report says.

The marketplace for AI tools “purpose-built” to fuel criminal behavior is growing, the report added.  Low-level criminals with little technical expertise or money can now find effective tools on underground forums for increasing the complexity and reach of their attacks, according to the report.  “Many underground forum advertisements mirrored language comparable to traditional marketing of legitimate AI models, citing the need to improve the efficiency of workflows and effort while simultaneously offering guidance for prospective customers interested in their offerings,” the report says.

This article is shared with permission at no charge for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  We provide indicators of compromise information via a notification service (RedXray) or an analysis service (CTAC).  For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@redskyalliance.com.

Weekly Cyber Intelligence Briefings:

REDSHORTS - Weekly Cyber Intelligence Briefings

https://register.gotowebinar.com/register/5207428251321676122

[1] https://therecord.media/new-malware-uses-ai-to-adapt/
