The ChatGPT AI chatbot has created plenty of excitement in the short time it has been available, and now it appears cyber threat actors are using it to help develop malicious code. ChatGPT is an AI-driven natural language processing tool that interacts with users in a human-like, conversational way. It has other uses as well: it can help with writing tasks such as composing emails, essays and Python code. ChatGPT did not write this article.
The chatbot tool was released by artificial intelligence research laboratory OpenAI https://openai.com in November 2022 and it has generated widespread interest and discussion over how AI is developing and how it could be used going forward. But like any other tool, in the wrong hands it could be used for nefarious purposes; and cybersecurity researchers say the users of underground hacking communities are already experimenting with how ChatGPT might be used to help facilitate cyberattacks and support malicious operations.
Threat actors with little or even no technical knowledge could use it to create malicious tools. It could also make the day-to-day operations of sophisticated cybercriminals easier and more efficient, for example by generating individual components of an infection chain. OpenAI's terms of service specifically ban the generation of malware, which it defines as "content that attempts to generate ransomware, keyloggers, viruses, or other software intended to impose some level of harm". It also bans attempts to create spam, as well as use cases aimed at cybercrime.
A recent analysis of activity in several major underground hacking forums suggests that cybercriminals are already using ChatGPT to develop malicious tools and, in some cases, that it is already allowing low-level cybercriminals with no development or coding skills to create malware. In one forum thread which appeared at the end of December 2022, the poster described how they were using ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.
By doing this, they were able to create a Python-based information-stealer malware which searches for common files, including Microsoft Office documents, PDFs and images, copies them, then uploads them to a file transfer protocol (FTP) server. The same user also demonstrated how they had used ChatGPT to create Java-based malware which, using PowerShell, could be harnessed to covertly download and run additional malware on infected systems.
Researchers note that the forum user making these threads appears to be "tech-oriented" and shared the posts to show less technically capable cybercriminals how to utilize AI tools for malicious purposes, complete with real examples of how it can be done.
One user posted a Python script which they said was the first script they had ever created. After discussion with another forum member, they said that ChatGPT had helped them create it. Analysis of the script suggests it is designed to encrypt and decrypt files, something that, with some additional development, could be turned into ransomware, raising the prospect of low-level cybercriminals developing and distributing their own extortion campaigns.
It is not only malware development that cybercriminals are experimenting with ChatGPT for; on New Year's Eve 2022, one underground forum member posted a thread demonstrating how they had used the tool to create scripts that could operate an automated dark web marketplace for buying and selling stolen account details, credit card information, malware and more. The actor even showed off a piece of generated code that used a third-party API to get up-to-date prices for the Monero, Bitcoin and Ethereum cryptocurrencies as part of a payment system for a dark web marketplace.
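The forum actor's code itself is not public, but the price-lookup component is straightforward to sketch. The following is a hypothetical, benign illustration (not the actor's code) of polling a public price API for those three currencies, written in standard-library Python; CoinGecko's simple-price endpoint is used here purely as an assumed stand-in for whichever third-party API the actor actually called.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Public, unauthenticated price endpoint used for illustration only.
COINGECKO_URL = "https://api.coingecko.com/api/v3/simple/price"

def build_price_url(coins=("monero", "bitcoin", "ethereum"), vs="usd"):
    # Build the query URL, e.g. .../simple/price?ids=monero%2Cbitcoin%2Cethereum&vs_currencies=usd
    return COINGECKO_URL + "?" + urlencode({"ids": ",".join(coins), "vs_currencies": vs})

def parse_prices(payload, vs="usd"):
    # payload is the decoded JSON response, shaped like {"bitcoin": {"usd": 43000.0}, ...};
    # flatten it to {"bitcoin": 43000.0, ...}
    return {coin: quote[vs] for coin, quote in payload.items()}

def fetch_prices(coins=("monero", "bitcoin", "ethereum"), vs="usd"):
    # Single network round-trip; returns current prices keyed by coin id.
    with urlopen(build_price_url(coins, vs), timeout=10) as resp:
        return parse_prices(json.load(resp), vs)
```

The point of the example is how little code the task requires: a few lines gluing a free public API to a payment workflow, which is exactly the kind of boilerplate a chatbot can generate on request.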
It is difficult to tell whether malicious cyber activity generated with the aid of ChatGPT is actively functioning in the wild, because cyber threat actors can be liars and like to boast about their "successes." From a technical standpoint, it is extremely difficult to know whether a specific piece of malware was written using ChatGPT or not. As interest in ChatGPT and other AI tools grows, they will attract the attention of cybercriminals and fraudsters looking to exploit the technology to conduct malicious campaigns at low cost and with the least effort necessary.
Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@wapacklabs.com
Reporting: https://www.redskyalliance.org/
Website: https://www.wapacklabs.com/
LinkedIn: https://www.linkedin.com/company/64265941
Weekly Cyber Intelligence Briefings:
REDSHORTS - Weekly Cyber Intelligence Briefings
https://attendee.gotowebinar.com/register/3702558539639477516