Hackers, criminals, and spies are rapidly adopting Artificial Intelligence (AI), and considerable evidence is emerging of a substantial acceleration in AI-enabled crime. This includes evidence of the use of AI tools for financial crime, phishing, distributed denial of service (DDoS), child sexual abuse material (CSAM), and romance scams. In all these areas, criminal use of AI is already augmenting revenue generation and exacerbating financial and personal harms. Scammers and social engineers, the members of hacking operations who impersonate others or write convincing phishing emails, have been using Large Language Models (LLMs) to appear more convincing.[1]
Russian hackers are adding a new angle to the massive volumes of phishing emails sent to Ukrainians. In one such exploit, the hackers include an attachment containing an AI program; if installed, it automatically searches the victim's computer for sensitive files and exfiltrates them to the sender. This campaign, first uncovered by the Ukrainian government, is the first known instance of Russian intelligence being caught creating malicious code with LLMs, the technology behind the AI chatbots that have become ubiquitous in the corporate world.
LLMs, such as ChatGPT, are still prone to errors, but the models have become proficient at processing language instructions, translating plain language into computer code, and identifying and summarizing documents. The technology has not yet revolutionized hacking by turning complete novices into experts, nor has it allowed would-be cyber terrorists to shut down the electric grid. But it is making skilled hackers better and faster. Cybersecurity firms and researchers are also utilizing AI, contributing to an escalating cat-and-mouse game between offensive hackers, who identify and exploit software flaws, and the defenders who attempt to fix them first.
The shift is only now starting to catch up with the hype that has permeated the cybersecurity and AI industries for years, especially since ChatGPT was introduced to the public in 2022. AI tools have not always proven effective, and some cybersecurity researchers have complained about would-be hackers submitting fake vulnerability findings generated by AI.
Hackers and cybersecurity professionals have not determined whether AI will ultimately help attackers or defenders more, but for now the defense appears to be winning. That trend may not hold as the technology continues to evolve. One reason is that, to date, there is no free-to-use automated hacking tool or penetration tester that incorporates AI. Non-AI versions of such tools are already widely available online, nominally as programs for testing defenses for flaws, though in practice they are also used by criminal hackers.
This is a key motivating factor behind the recommendation to establish a new AI Crime Taskforce within the British National Crime Agency's Cyber Crime Unit to coordinate the national response to AI-enabled crime. Collating data from across law enforcement to monitor and log criminal groups' use of AI, and mapping bottlenecks in the criminal adoption of AI tools in order to raise barriers to that adoption, would be crucial to developing law enforcement's capability to respond to an evolving AI threat landscape.
This article is shared with permission at no charge for educational and informational purposes only.
Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. We provide indicators of compromise information via a notification service (RedXray) or an analysis service (CTAC). For questions, comments, or assistance, please contact the office directly at 1-844-492-7225 or feedback@redskyalliance.com.
- Reporting: https://www.redskyalliance.org/
- Website: https://www.redskyalliance.com/
- LinkedIn: https://www.linkedin.com/company/64265941
REDSHORTS - Weekly Cyber Intelligence Briefings:
https://register.gotowebinar.com/register/5207428251321676122
[1] https://www.cybersecurityintelligence.com/blog/ai-is-reshaping-online-crime-8658.html