Chatting Crimes

At the end of November 2022, OpenAI released ChatGPT, the new interface for its Large Language Model (LLM), which instantly created a flurry of interest in AI and its possible uses.  However, ChatGPT has also added some spice to the modern cyber threat landscape, as it quickly became apparent that its code generation can help less-skilled threat actors effortlessly launch cyber-attacks.

Check Point Research (CPR) previously reported and described how ChatGPT successfully conducted a full infection flow, from creating a convincing spear-phishing email to running a reverse shell capable of accepting commands in English.  The question at hand is whether this is just a hypothetical threat or whether threat actors are already using OpenAI technologies for malicious purposes.[1]

CPR’s analysis of several major underground hacking communities shows that there are already first instances of cyber criminals using OpenAI to develop malicious tools.  As suspected, some of these cases clearly showed that cyber criminals with no development skills at all were using OpenAI.  Although the tools presented are pretty basic, it is only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad purposes.

Figure 1. Cybercriminal showing how he created an infostealer using ChatGPT

Case 1 – Creating an Infostealer: On 29 December 2022, a thread named “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum. The publisher of the thread disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.  As an example, he shared the code of a Python-based stealer that searches for common file types, copies them to a random folder inside the Temp folder, zips them, and uploads them to a hardcoded FTP server.

Analysis of the script confirmed the cybercriminal’s claims.  It is a basic stealer that searches for 12 common file types (such as MS Office documents, PDFs, and images) across the system.  If any files of interest are found, the malware copies them to a temporary directory, zips them, and sends the archive over the web.  It is worth noting that the actor did not bother to encrypt the files or send them securely, so the files might also end up in the hands of third parties.
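The file-discovery-and-archive stage of such a stealer follows a generic pattern familiar from ordinary backup scripts.  A minimal, benign sketch of that stage is shown below; the extension list is a shortened, hypothetical stand-in for the 12 types the actor targeted, and the FTP exfiltration step is deliberately omitted:

```python
import tempfile
import zipfile
from pathlib import Path

# Illustrative subset of "files of interest" -- the actual stealer
# targeted 12 common file types.
TARGET_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".jpg", ".png", ".txt"}

def collect_and_archive(root: str) -> Path:
    """Recursively find files with target extensions under `root` and
    pack them into a ZIP archive in a fresh temporary directory."""
    archive_path = Path(tempfile.mkdtemp()) / "collected.zip"
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in Path(root).rglob("*"):
            if path.is_file() and path.suffix.lower() in TARGET_EXTENSIONS:
                # Store entries under their path relative to the search root
                zf.write(path, arcname=path.relative_to(root))
    return archive_path
```

Everything here is standard-library functionality; what makes the actor’s version malicious is the context in which it runs and the hardcoded upload that follows.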

Figure 2. Proof that the actor created a Java program that downloads PuTTY and runs it using PowerShell

The second sample this actor created using ChatGPT is a simple Java fragment.  It downloads PuTTY, a very common SSH and Telnet client, and runs it covertly on the system using PowerShell.  This script can, of course, be modified to download and run any program, including common malware families.

This threat actor’s prior forum participation includes sharing several scripts, such as automation of the post-exploitation phase, and a C++ program that attempts to phish for user credentials.  In addition, he actively shares cracked versions of SpyNote, an Android RAT malware.  So this individual seems to be a tech-oriented threat actor, and the purpose of his posts is to show less technically capable cyber criminals how to utilize ChatGPT for malicious purposes, with real examples they can immediately use.

Case 2 – Creating an Encryption Tool:  On 21 December 2022, a threat actor called USDoD posted a Python script, which he emphasized was the first script he ever created.

Figure 3.  Cybercriminal called USDoD posts a multi-layer encryption tool

When another cybercriminal commented that the style of the code resembles OpenAI code, USDoD confirmed that OpenAI gave him a “nice [helping] hand to finish the script with a nice scope.”

Figure 4.  Confirmation that the multi-layer encryption tool was created using OpenAI

 

Our analysis verified that the script performs cryptographic operations.  More specifically, it is a mixture of different signing, encryption, and decryption functions.  The script seems benign on its own, but it implements a variety of capabilities:

  • The first part of the script generates a cryptographic key (specifically, an elliptic-curve key on the curve ed25519) that is used to sign files.
  • The second part of the script includes functions that use a hard-coded password to encrypt files in the system using the Blowfish and Twofish algorithms concurrently in a hybrid mode. These functions allow the user to encrypt all files in a specific directory or a list of files.
  • The script also uses RSA keys and certificates stored in PEM format, performs MAC signing, and uses the blake2 hash function to compare hashes.

All the decryption counterparts of the encryption functions are implemented in the script as well.  The script includes two main functions: one encrypts a single file and appends a message authentication code (MAC) to the end of the file; the other encrypts a hardcoded path and decrypts a list of files that it receives as an argument.
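The append-a-MAC-then-verify pattern the script relies on can be illustrated entirely with Python’s standard library, using HMAC over the blake2b hash mentioned above.  The function names here are illustrative, not taken from the actor’s script:

```python
import hmac
import hashlib

def mac_and_append(data: bytes, key: bytes) -> bytes:
    """Return the data with a keyed HMAC-BLAKE2b tag appended to the end."""
    tag = hmac.new(key, data, hashlib.blake2b).digest()  # 64-byte tag
    return data + tag

def verify_and_strip(blob: bytes, key: bytes) -> bytes:
    """Check the trailing MAC; return the original data or raise ValueError."""
    data, tag = blob[:-64], blob[-64:]
    expected = hmac.new(key, data, hashlib.blake2b).digest()
    # Constant-time comparison avoids leaking information via timing
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC mismatch: data was modified or the key is wrong")
    return data
```

BLAKE2 also offers a built-in keyed mode (`hashlib.blake2b(key=...)`) that would serve the same purpose; HMAC is used here because it is the more widely recognized construction.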

All of the code described above can, of course, be used in a benign fashion.  However, this script can easily be modified to encrypt someone’s machine completely without any user interaction.  For example, fixing the script’s syntax problems could potentially turn the code into ransomware.

While it seems that USDoD is not a developer and has limited technical skills, he is a very active and reputable member of the underground community.  USDoD is engaged in a variety of illicit activities that include selling access to compromised companies and stolen databases.  A notable stolen database USDoD shared recently was allegedly the leaked InfraGard database.

Figure 5.  USDoD’s previous illicit activity that involved publication of the InfraGard database

Figure 6.  Threat actor using ChatGPT to create Dark Web market scripts

Case 3 – Facilitating ChatGPT for Fraud Activity:  Another example of the use of ChatGPT for fraudulent activity was posted on New Year’s Eve of 2022, and it demonstrated a different type of cybercriminal activity.  While our first two examples focused on malware-oriented use of ChatGPT, this example is a discussion titled “Abusing ChatGPT to create Dark Web Marketplaces scripts.”  In this thread, the cybercriminal shows how easy it is to create a Dark Web marketplace using ChatGPT.  The marketplace’s main role in the underground illicit economy is to provide a platform for the automated trade of illegal or stolen goods, such as stolen accounts or payment cards, malware, or even drugs and ammunition, with all payments in cryptocurrencies.

To illustrate how to use ChatGPT for these purposes, the cybercriminal published a piece of code that uses a third-party API to get up-to-date cryptocurrency (Monero, Bitcoin, and Ethereum) prices as part of the Dark Web market payment system.
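The core of such a script is simply fetching JSON from a public price endpoint and extracting the quotes.  The sketch below shows only the parsing half; the exact API the actor used is unknown, so the response shape (coin name mapped to per-currency quotes) is an assumption modeled on common public price endpoints, and the HTTP request itself is omitted to keep the example offline:

```python
import json

# Sample payload shaped like a typical "simple price" endpoint response.
# The figures and the structure are illustrative assumptions.
SAMPLE_RESPONSE = """{
    "bitcoin":  {"usd": 42000.0},
    "ethereum": {"usd": 2500.0},
    "monero":   {"usd": 160.0}
}"""

def extract_prices(raw_json: str, vs_currency: str = "usd") -> dict:
    """Map coin name -> price in the requested currency, skipping any
    coin that does not quote that currency."""
    payload = json.loads(raw_json)
    return {
        coin: quotes[vs_currency]
        for coin, quotes in payload.items()
        if vs_currency in quotes
    }
```

In the actor’s version, `raw_json` would come from an HTTP GET against the third-party API, with the resulting prices fed into the marketplace’s checkout logic.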

At the beginning of 2023, several threat actors opened discussions in additional underground forums focused on how to use ChatGPT for fraudulent schemes.  Most of these focused on generating random art with another OpenAI technology (DALL·E 2) and selling it online using legitimate platforms like Etsy.  In another example, a threat actor explains how to generate an e-book or short chapter on a specific topic (using ChatGPT) and sell the content online.

Figure 7.  Multiple threads in the underground forums on how to use ChatGPT for fraud activity

Summary:  It is still too early to decide whether or not ChatGPT capabilities will become the new favorite tool for participants in the Dark Web.  However, the cyber criminal community has already shown significant interest and is jumping into this latest trend of generating malicious code.

Finally, there is no better way to learn about ChatGPT abuse than by asking ChatGPT itself. So we asked the chatbot about the abuse options and received a pretty interesting answer:

Figure 8.  ChatGPT response about how threat actors abuse OpenAI

Interesting, huh?  Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@wapacklabs.com      

 Weekly Cyber Intelligence Briefings:

  • Reporting: https://www.redskyalliance.org/
  • Website: https://www.wapacklabs.com/
  • LinkedIn: https://www.linkedin.com/company/64265941


REDSHORTS - Weekly Cyber Intelligence Briefings

https://attendee.gotowebinar.com/register/5504229295967742989  

[1] https://research.checkpoint.com/2023/opwnai-cybercriminals-starting-to-use-chatgpt/
