SentinelLABS has been researching how large language models (LLMs) are affecting cybersecurity for both defenders and adversaries.  As part of our ongoing efforts in this area, and building on our long-standing research into and tracking of crimeware actors, we have been closely following the adoption of LLM technology among ransomware operators.  Three structural shifts appear to be unfolding in parallel.

First, the barriers to entry continue to fall for those intent on cybercrime.  LLMs allow low- to mid-skill actors to assemble functional tooling and ransomware-as-a-service (RaaS) infrastructure by decomposing malicious tasks into seemingly benign prompts that can slip past provider guardrails.[1]  Second, the ransomware ecosystem is splintering.  The era of mega-brand cartels (LockBit, Conti, REvil) has faded under sustained law enforcement pressure and sanctions.  In their place, we see a proliferation of small, short-lived crews such as Termite, Punisher, The Gentlemen, and Obscura operating under the radar, alongside a surge in mimicry and false claims, such as fake Babuk2 and confused ShinyHunters branding.  Third, the line between APT and crimeware is blurring.  State-aligned actors are moonlighting as ransomware affiliates or using extortion for operational cover, while culturally motivated groups like “The Com” are buying into affiliate ecosystems, adding noise and complicating attribution, as we saw with groups such as DragonForce, Qilin, and previously BlackCat/ALPHV.

While these three structural shifts were to a certain extent in play before the widespread availability of LLMs, researchers observe that all three are accelerating simultaneously.  To understand the mechanics, we examined how LLMs are being integrated into day-to-day ransomware operations.

Note that the threat intelligence community’s understanding of exactly how threat actors integrate LLMs into attacks is severely limited.  The primary sources of information on these attacks are the intelligence teams of LLM providers, via periodic reports, and, more rarely, intrusion victims who find artifacts of LLM use.  As a result, it is easy to overinterpret a small number of cases as evidence of a revolutionary change in adversary tradecraft.  SentinelLABS assesses that such conclusions exceed the available evidence.  Instead, while adversary use of LLMs is undoubtedly a significant trend, as detailed throughout this report, it reflects operational acceleration rather than a fundamental transformation in attacker capabilities.

How AI Is Changing Ransomware Operations Today / Direct Substitutions from Enterprise Workflows - The most immediate impact comes from ransomware operators adopting the same LLM workflows that legitimate enterprises use every day, only repurposed for crime.  In the same way that marketers use LLMs to write copy, threat actors use them to draft phishing emails and localized content, such as ransom notes written in the victim company’s language.  Enterprises use LLMs to distill large amounts of data for sales operations; threat actors use the same workflow to identify lucrative targets in dumps of leaked data or to decide how to extort a specific victim based on the value of the stolen data.

This data triage capability is particularly amplified across language barriers.  A Russian-speaking operator might not recognize that a file named “Fatura” (Turkish for “Invoice”) or “Rechnung” (German) contains financially sensitive information.  LLMs eliminate this blind spot: attackers can instruct a model to “Find all documents related to financial debt or trade secrets” in Arabic, Hindi, Spanish, or Japanese.  Research shows that LLMs significantly outperform traditional tools at identifying sensitive data in non-English languages.
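To illustrate how little effort this triage now requires, and to help defenders reproduce it against their own file shares, the following minimal sketch batches filenames to an LLM for sensitivity labeling.  The endpoint URL, model name, and prompt are illustrative assumptions rather than any specific provider’s API; any OpenAI-compatible chat endpoint, including a locally hosted one, would work similarly.

```python
# Illustrative sketch only: batch-label multilingual filenames for sensitivity,
# the same triage workflow described above, so defenders can see what an
# intruder would surface on their own file shares.  The endpoint URL and model
# name are placeholders for any OpenAI-compatible chat API (hosted or local).
import json
import os

import requests

API_URL = "https://llm.example.com/v1/chat/completions"  # hypothetical endpoint
MODEL = "example-model"                                   # hypothetical model name

SYSTEM_PROMPT = (
    "You classify file names found on a corporate file share. For each name, "
    "in any language, answer SENSITIVE if it likely holds financial, legal, or "
    "credential data, otherwise BENIGN. Reply with one JSON object mapping "
    "each name to its label."
)

def triage(filenames: list[str]) -> dict:
    """Ask the model to label a batch of filenames; expects JSON in the reply."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={
            "model": MODEL,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": "\n".join(filenames)},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["choices"][0]["message"]["content"])

if __name__ == "__main__":
    print(triage(["Fatura_2024.xlsx", "Rechnung_Q3.pdf", "team_photo.png"]))
```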

The pattern holds across other enterprise workflows as well.  In each case, the effect is the same: competent crews become faster and can operate across more tech stacks, languages, and geographies, while new entrants reach functional capability sooner. Importantly, what we are not seeing is any fundamentally new category of attack or novel capability.

Local Models to Evade Guardrails - Actors are increasingly breaking down malicious tasks into “non-malicious,” seemingly benign fragments.  Often, actors spread requests across multiple sessions or prompt various models, then stitch code together offline.  This approach dilutes potential suspicion from LLM providers by decentralizing malicious activity.

There is a clear, increasing trend of actors using open models for nefarious purposes.  Local, fine-tuned open-weight models run through tools such as Ollama offer more control, minimize provider telemetry, and carry fewer guardrails than commoditized LLM services.  Early proof-of-concept (PoC) LLM-enabled ransomware tools like PromptLock may be clunky, but the direction is clear: once optimized, local and self-hosted models will be the default for higher-end crews.

Cisco Talos and others have flagged criminals gravitating toward uncensored models, which offer fewer safeguards than frontier-lab models and typically omit security controls such as prompt classification, account telemetry, and other abuse-monitoring mechanisms, in addition to being trained on more harmful content.  As adoption of these open-source models accelerates, and as they are explicitly fine-tuned for offensive use cases, defenders will find it increasingly challenging to identify and disrupt abuse originating from models that are customized for or directly operated by adversaries.
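Because self-hosted models generate no provider telemetry, visibility has to come from the environment itself.  The hedged sketch below assumes the default Ollama HTTP API on TCP port 11434 and its /api/tags model listing; other local runtimes expose different ports and paths, so treat the values as starting points for an internal inventory sweep rather than a complete detection.

```python
# Minimal inventory sweep for hosts quietly serving local LLMs.  Assumes the
# default Ollama HTTP API (TCP 11434) and its /api/tags listing; other local
# runtimes use different ports and paths, so extend the probe list as needed.
import requests

CANDIDATE_HOSTS = ["127.0.0.1", "10.0.0.25"]  # replace with your asset inventory
OLLAMA_PORT = 11434

def list_local_models(host: str) -> list[str]:
    """Return model names served by an Ollama-style API on the host, or []."""
    try:
        resp = requests.get(f"http://{host}:{OLLAMA_PORT}/api/tags", timeout=3)
        resp.raise_for_status()
        return [m.get("name", "?") for m in resp.json().get("models", [])]
    except requests.RequestException:
        return []

if __name__ == "__main__":
    for host in CANDIDATE_HOSTS:
        models = list_local_models(host)
        if models:
            print(f"{host} is serving local models: {', '.join(models)}")
```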

Documented Use of AI in Offensive Operations / Automated Attacks via Claude Code - Some recent campaigns illustrate our observations of how LLMs are actively being used and how they may be incorporated to accelerate attacker tradecraft.

In August 2025, Anthropic’s Threat Intelligence team reported on a threat actor using Claude Code to perform a highly autonomous extortion campaign.  The actor automated not only the technical and reconnaissance aspects of the intrusion but also instructed Claude Code to evaluate what data to exfiltrate, determine the ideal monetary ransom amount, and curate the ransom note demands to maximize impact and coax victims into paying.

The actor’s prompt apparently guided Claude to accept commands in Russian and instructed the LLM to maintain communications in this language.  While Anthropic does not state the final language used for creating ransom notes, SentinelLABS assesses that the subsequent prompts likely generated ransom notes and customer communications in English, as ransomware actors typically avoid targeting organizations within the Commonwealth of Independent States (CIS).

This campaign presents an impressive degree of LLM-enabled automation that furthers actors’ offensive security, data analysis, and linguistic capabilities.  While typical, well-resourced ransomware groups could achieve each step alone, the Claude Code-enabled automation flow required far fewer human resources.

Malware Embedding Calls to LLM APIs - SentinelLABS’ research on LLM-enabled threats brought MalTerminal to light, a PoC tool that stitches together multiple capabilities, including ransomware and a reverse shell, by prompting a commercial LLM to generate the code.

Artifacts in MalTerminal strongly suggest that a security researcher or company developed the tool; even so, its capabilities represent a very early iteration of how threat actors will incorporate malicious prompting into tooling to further their attacks.

The tool bypassed safety filters to deliver a ransomware payload, demonstrating that ransomware-focused actors can overcome provider guardrails not only at earlier stages such as reconnaissance and lateral movement but also during the impact phase of a ransomware attack.
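SentinelLABS surfaced MalTerminal by hunting for artifacts of embedded LLM use.  The sketch below is a heavily simplified version of that idea: it flags files that contain hardcoded LLM API endpoints, key-like prefixes, or prompt-style instruction text.  The indicator list is illustrative only and will produce both misses and false positives.

```python
# Simplified hunting sketch for MalTerminal-style tooling: flag files that
# embed LLM API endpoints, key-like prefixes, or prompt-style instruction
# text.  The indicator strings are illustrative, not a complete detection set.
import re
import sys
from pathlib import Path

INDICATORS = [
    rb"api\.openai\.com",                  # hardcoded commercial LLM endpoint
    rb"api\.anthropic\.com",
    rb"sk-[A-Za-z0-9]{20,}",               # OpenAI-style API key prefix
    rb"you are an? (expert|helpful)",      # prompt-style instruction text
]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in INDICATORS]

def scan(path: Path) -> list[str]:
    """Return the indicator patterns that match the file's raw bytes."""
    try:
        data = path.read_bytes()
    except OSError:
        return []
    return [p.pattern.decode() for p in PATTERNS if p.search(data)]

if __name__ == "__main__":
    root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
    for item in root.rglob("*"):
        if item.is_file():
            hits = scan(item)
            if len(hits) >= 2:  # require two indicators to reduce noise
                print(f"{item}: {hits}")
```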

Abusing Victims’ Locally Hosted LLMs - In August 2025, Google Threat Intelligence researchers identified examples of QUIETVAULT, a stealer that weaponizes locally installed AI command-line tools to enhance its data exfiltration capabilities.  The JavaScript-based stealer searches for and leverages LLMs on macOS and Linux hosts by embedding a malicious prompt that instructs them to recursively search the victim’s filesystem for wallet-related files and sensitive configuration data.


QUIETVAULT leverages locally hosted LLMs to enhance credential and wallet discovery (see the SentinelLABS report for more details).

The prompt directs the local LLM to search standard user directories such as $HOME, ~/.config, and ~/.local/share, while avoiding system paths that could trigger errors or require elevated privileges.  In addition, it instructs the LLM to identify files matching patterns associated with various cryptowallets, including MetaMask, Electrum, Ledger, Trezor, Exodus, Trust Wallet, Phantom, and Solflare.

This approach demonstrates how threat actors are adapting to the proliferation of AI tools on victim workstations.  By leveraging the AI’s natural language understanding and file-system reasoning capabilities, the malware can conduct more intelligent reconnaissance than traditional pattern-matching algorithms.  Once sensitive files are discovered through AI-assisted enumeration, QUIETVAULT proceeds with traditional stealer functions.  It Base64-encodes the stolen data and attempts to exfiltrate it via newly created GitHub repositories using local credentials.
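A practical defensive response is to watch for AI command-line tools being driven with file-hunting prompts.  The sketch below assumes a Linux host and a handful of hypothetical AI CLI binary names, and it polls /proc command lines as a stand-in for proper EDR telemetry.

```python
# Detection sketch for QUIETVAULT-style abuse of local AI CLI tools: flag
# processes whose command line combines an AI CLI with file-hunting language.
# The tool names are assumptions, and polling /proc on Linux stands in for
# proper EDR telemetry (QUIETVAULT also targets macOS, which has no /proc).
import re
from pathlib import Path

AI_CLI_NAMES = {"ollama", "claude", "gemini", "llm"}  # indicative names only
FILE_HUNTING = re.compile(r"(wallet|seed phrase|private key|recursively search)", re.I)

def suspicious_processes() -> list[tuple[int, str]]:
    """Return (pid, cmdline) pairs where an AI CLI is fed a file-hunting prompt."""
    hits = []
    for proc in Path("/proc").glob("[0-9]*"):
        try:
            raw = (proc / "cmdline").read_bytes()
        except OSError:
            continue
        cmdline = raw.replace(b"\0", b" ").decode(errors="replace").strip()
        binary = cmdline.split(" ", 1)[0].rsplit("/", 1)[-1]
        if binary in AI_CLI_NAMES and FILE_HUNTING.search(cmdline):
            hits.append((int(proc.name), cmdline))
    return hits

if __name__ == "__main__":
    for pid, cmd in suspicious_processes():
        print(f"PID {pid}: {cmd}")
```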

LLM-Enabled Exploit Development - There has been significant discourse about LLM-enabled exploit development and how AI will accelerate the vulnerability-disclosure-to-exploit lifecycle.  As of this writing, credible reports of LLM-developed one-day exploits remain scarce and difficult to verify, though LLMs very likely help actors rapidly prototype pieces of exploit code and stitch them together, plausibly yielding a viable, weaponized version.  LLM-enabled exploit development can also be a double-edged sword: the December 2025 React2Shell vulnerability raised alarm when a PoC exploit circulated shortly after the vendor disclosed the flaw, yet credible researchers soon found that the exploit was not only non-viable but had been generated by an LLM.  Defenders should expect increased churn and a fatigue cycle driven by the rapid proliferation of LLM-enabled exploits, many of which are likely to be more hallucinatory than weaponized.

LLM-Assisted Social Engineering - Actor misuse of LLM provider brands to further social engineering campaigns remains a tried-and-true technique.  A campaign in December 2025 used a combination of chat-style LLM conversation-sharing features and search engine optimization (SEO) poisoning to direct users to LLM-generated tutorials that delivered the macOS Amos Stealer to the victim’s system.

The actors used prompt engineering techniques to insert attacker-controlled infrastructure into otherwise typical macOS software installation steps within these shared conversations.  Because the conversations were hosted on the LLM provider’s own website, their URLs appeared as sponsored search engine results under the legitimate LLM provider domain, for example, https://<llm_provider_name>[.]com.
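Defenders can hunt for this lure pattern in web proxy logs by correlating ad-click referrers with LLM conversation-share URLs.  In the rough filter below, the /share/ path segment and the expected log columns are assumptions; real conversation-share URL structures vary by provider.

```python
# Rough proxy-log filter for shared-LLM-conversation lures reached through
# sponsored results.  The /share/ path segment, ad-click markers, and column
# names are assumptions; conversation-share URL formats vary by provider.
import csv
import re
import sys

SHARE_URL = re.compile(r"https?://[^/\s]+/share/[A-Za-z0-9-]{8,}", re.I)  # hypothetical path
AD_CLICK = re.compile(r"(gclid=|msclkid=|/aclk\?)", re.I)                  # common ad-click markers

def flag_events(log_path: str) -> list[dict]:
    """Return proxy log rows where a sponsored click led to a conversation-share URL."""
    flagged = []
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):  # assumes 'url' and 'referrer' columns
            if SHARE_URL.search(row.get("url", "")) and AD_CLICK.search(row.get("referrer", "")):
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for event in flag_events(sys.argv[1]):
        print(event.get("src_ip", "?"), event.get("url"))
```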

The SEO-boosted results include conversations that instruct users to install the stealer under the guise of AI-powered software or routine operating system maintenance.  While Amos Stealer is not overtly linked to a ransomware group, it is well documented that infostealers play a crucial role in the initial access broker (IAB) ecosystem, which feeds operations for both small and large ransomware groups.  And while genuine incidents of macOS ransomware are virtually unknown, credentials stolen from Macs can be sold to enable ransomware or to gain access to corporate environments containing systems far more likely to be targeted with it.

Additionally, operations supporting ransomware and extortion have begun offering AI-driven communication features to facilitate attacker-to-victim communication.  In mid-2025, the Global Group RaaS started advertising its “AI-Assisted Chat”.  The feature claims to analyze data from victim companies, including revenue and historical public behavior, and then tailor communications based on that analysis.

Global Group RaaS offering AI-Assisted Chat

While the Global Group RaaS does not restrict itself to specific sectors, to date its attacks have disproportionately affected Healthcare, Construction, and Manufacturing.  Across these cases, SentinelLABS observes a consistent pattern: LLMs accelerating execution, enabling automation through prompts and vibe-coding, streamlining repetitive tasks, and translating language on the fly.

What’s Next for LLMs and Ransomware?  SentinelLABS is tracking several specific LLM-related patterns that we assess will become increasingly significant over the next 12–24 months.  Actors already chunk malicious code into benign prompts across multiple models or sessions, then assemble the outputs offline to dodge guardrails.  This workflow will become commoditized as tutorials and tooling proliferate, ultimately maturing into “prompt smuggling as a service”: automated harnesses that route requests across multiple providers when one model refuses, then stitch the outputs together for the attacker.

Early proof-of-concept LLM-enabled malware, including ransomware, will be optimized and take increasing advantage of local models, becoming stealthier, more controllable, and less visible to defenders and researchers.  Researchers expect ransomware operators to deploy templated negotiation agents: tone-controlled, multilingual, and integrated into RaaS panels.

Ransomware brand spoofing (fake Babuk2, ShinyHunters confusion) and false claims will increase and complicate attribution.  Threat actors’ ability to generate content and plausible-sounding narratives at scale via LLMs will erode defenders’ ability to contain the blast radius of attacks.

LLM use is also transforming the underlying infrastructure that drives extortive attacks.  This includes tools and platforms for applying pressure to victims, such as automated, AI-augmented calling platforms.  While peripheral to the tooling used to conduct ransomware and extortion attacks, these supporting tools accelerate threat actors’ efforts.  Similar shifts are occurring with AI-augmented spamming tools used for payload distribution, such as “SpamGPT” and “AIO Callcenter”: tools used by initial access brokers, who provide a key service in the ransomware ecosystem.

Conclusion - The widespread availability of large language models is accelerating the three structural shifts we identified: falling barriers to entry, ecosystem splintering, and the convergence of APT and crimeware operations.

These advances make competent ransomware crews faster and extend their reach across languages and geographies, while allowing novices to ramp up operational capabilities by decomposing complex tasks into manageable steps that models will readily assist with.  Malicious actors take this approach both out of technical necessity and to hide their intent.  As top-tier threat actors migrate to self-hosted, uncensored models, defenders will lose the visibility and leverage that provider guardrails currently offer.

With today’s LLMs, the risk is not superintelligent malware but industrialized extortion: smarter target selection, tailored demands, and cross-platform tradecraft that complicates response.  Defenders will need to adapt to a faster and noisier threat landscape, where operational tempo, not novel capability, defines the challenge.

 

This article is shared at no charge for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  We provide indicators-of-compromise information via a notification service (RedXray) or an analysis service (CTAC).  For questions, comments, or assistance, please get in touch with the office directly at 1-844-492-7225 or feedback@redskyalliance.com    

 Weekly Cyber Intelligence Briefings:
REDSHORTS - Weekly Cyber Intelligence Briefings
https://register.gotowebinar.com/register/5207428251321676122

 

[1] https://www.sentinelone.com/labs/llms-ransomware-an-operational-accelerator-not-a-revolution/
