Weaponizing AI in Cyber-Attacks

It is no longer theoretical: the world's major powers are working with large language models (LLMs) to enhance offensive cyber operations.  Advanced persistent threats (APTs) aligned with China, Iran, North Korea, and Russia are using LLMs to enhance their operations.  New blog posts from OpenAI and Microsoft reveal that five prominent threat actors have used OpenAI software for research, fraud, and other malicious purposes.  After identifying them, OpenAI shuttered all their accounts.  Though the prospect of AI-enhanced nation-state cyber operations might at first seem daunting, there is good news: none of the LLM abuses observed so far has been particularly devastating.  "Current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool," Microsoft noted in its report.  "Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors' usage of AI."[1]

The nation-state APTs using OpenAI today are among the world's most notorious.  Consider the group Microsoft tracks as Forest Blizzard, better known as Fancy Bear.  The Democratic National Committee (DNC)-hacking, Ukraine-terrorizing military unit, affiliated with the Main Directorate of the General Staff of the Armed Forces of the Russian Federation (GRU), has been using LLMs for basic scripting tasks (file manipulation, data selection, multiprocessing, and so on) as well as for intelligence gathering: researching satellite communication protocols and radar imaging technologies, likely as they pertain to the ongoing war in Ukraine.
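Neither Microsoft nor OpenAI published the actors' actual scripts, but the tasks described (file manipulation, data selection, multiprocessing) are mundane.  As a purely hypothetical illustration using only the Python standard library (the keyword pattern and function names here are invented for this article, not taken from the reports), this is the kind of glue code an LLM can draft in seconds:

```python
import multiprocessing
import re
from pathlib import Path

# Hypothetical pattern of interest; not taken from the Microsoft/OpenAI reports.
KEYWORDS = re.compile(r"satcom|radar", re.IGNORECASE)

def select_lines(path):
    """File manipulation + data selection: keep only matching lines of one file."""
    text = Path(path).read_text(errors="ignore")
    return [line for line in text.splitlines() if KEYWORDS.search(line)]

def scan(paths):
    """Multiprocessing: fan the per-file work out across a pool of workers."""
    with multiprocessing.Pool() as pool:
        results = pool.map(select_lines, paths)
    return dict(zip(paths, results))
```

The point is not that a snippet like this is dangerous (it is not); it is that LLMs remove the small frictions of writing such boilerplate, which is exactly the "productivity tool" usage Microsoft describes.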

See:  https://redskyalliance.org/xindustry/fancy-bear-imposters-us-election

Two Chinese state actors have been ChatGPT-ing lately: Charcoal Typhoon (aka Aquatic Panda, ControlX, RedHotel, BRONZE UNIVERSITY) and Salmon Typhoon (aka APT4, Maverick Panda).  The former has been making good use of AI both pre-compromise (gathering information about specific technologies, platforms, and vulnerabilities; generating and refining scripts; and generating social engineering texts in translated languages) and post-compromise (performing advanced commands, achieving deeper system access, and gaining control of systems).

Salmon Typhoon has primarily focused on LLMs as an intelligence tool, sourcing publicly available information about high-profile individuals, intelligence agencies, internal and international politics, and more.  It has also attempted, largely unsuccessfully, to abuse OpenAI to help develop malicious code and research stealth tactics.

Iran's Crimson Sandstorm (Tortoiseshell, Imperial Kitten, Yellow Liderc) is using OpenAI to develop phishing emails, pretending to be from an international development agency, for example, or a feminist group, as well as code snippets to aid its operations: web scraping, executing tasks when users sign in to an app, and so on.
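The reports do not reproduce the group's actual snippets, so as a hedged sketch of what "code snippets for web scraping" typically means in practice, here is a minimal, standard-library-only example (the function names and the regex-based title extraction are this article's assumptions, not Crimson Sandstorm's code):

```python
import re
import urllib.request

def fetch(url):
    """Network step of a minimal scraper: download a page's raw HTML."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="ignore")

def extract_title(html):
    """Parsing step: pull the <title> text out of raw HTML with a simple regex."""
    match = re.search(r"<title[^>]*>(.*?)</title>", html,
                      re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else None
```

Scripts of this sort are trivially easy for an LLM to generate on request, which again fits the "productivity tool" pattern rather than a novel capability.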

See:  https://redskyalliance.org/xindustry/more-bad-kittens

Finally, there is Kim Jong-Un's Emerald Sleet (Kimsuky, Velvet Chollima), which, like the other APTs, turns to OpenAI for basic scripting tasks and phishing content generation, and for researching publicly available information on vulnerabilities, expert think tanks, and government organizations concerned with defense issues and North Korea's nuclear weapons program.

See:  https://redskyalliance.org/xindustry/kimsuky-again

If these many malicious uses of AI seem helpful but not science fiction-level scary, there is a reason why.  "Threat actors that are effective enough to be tracked by Microsoft are likely already proficient at writing software," explains Joseph Thacker, principal AI engineer and security researcher at AppOmni.  "Generative AI is amazing, but it's mostly helping humans be more efficient rather than making breakthroughs.  I believe those threat actors are using LLMs to write code (like malware) faster, but it's not noticeably impactful because they already have malware.  They may be able to be more efficient, but at the end of the day, they aren't doing anything new yet."

Though cautious not to overstate its impact, Thacker warns that AI still offers advantages for attackers.  "Bad actors will likely be able to deploy malware at a larger scale or on systems they previously didn't have support for.  LLMs are pretty good at translating code from one language or architecture to another.  So, I can see them converting their malicious code into new languages they previously weren't proficient in," he says.  Further, "if a threat actor found a novel use case, it could still be in stealth and not detected by these companies yet, so it's not impossible.  I have seen fully autonomous AI agents that can 'hack' and find real vulnerabilities, so if any bad actors have developed something similar, that would be dangerous."  For those reasons, he adds, companies should remain vigilant: "Keep doing the basics right."

 

This article is presented at no charge for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  For questions, comments, a demo, or assistance, please contact the office directly at 1-844-492-7225 or feedback@redskyalliance.com

 

Reporting: https://www.redskyalliance.org/

Website: https://www.redskyalliance.com/

LinkedIn: https://www.linkedin.com/company/64265941

Weekly Cyber Intelligence Briefings:

REDSHORTS - Weekly Cyber Intelligence Briefings

https://attendee.gotowebinar.com/register/5993554863383553632

 

[1] https://www.darkreading.com/threat-intelligence/microsoft-openai-nation-states-are-weaponizing-ai-in-cyberattacks
