Exploited ChatGPT Bug

A server-side request forgery vulnerability in OpenAI's chatbot infrastructure can allow attackers to direct users to malicious URLs, leading to a range of threat activity.  Attackers are actively exploiting a flaw in ChatGPT that allows them to redirect users to malicious URLs from within the artificial intelligence (AI) chatbot application, with more than 10,000 exploit attempts recorded in a single week from one malicious IP address.

Researchers from Veriti discovered the vulnerability in OpenAI's ChatGPT infrastructure, which is tracked as CVE-2024-27564 (CVSS 6.5).  The flaw has not been widely reported, perhaps because it was deemed only of medium severity. That medium-risk assessment may be misleading, however, as the flaw is clearly on attackers' radar screens.  Of the organizations that Veriti analyzed, 35% were at risk because of misconfigurations in intrusion prevention systems (IPS), Web application firewalls (WAFs), and firewall settings, Veriti reported in a blog post.

The cyber-attacks are focused mainly in the US, where financial institutions are prime targets, the researchers added.  "This vulnerability has already become a real-world attack vector, proving that severity scores don't dictate actual risk," according to the post by Veriti Research.  "No vulnerability is too small to matter; attackers will exploit any weakness they can find."

CVE-2024-27564 is a server-side request forgery (SSRF) vulnerability found in pictureproxy.php of ChatGPT commit f9f4bbc, according to its listing on the National Vulnerability Database (NVD) of the National Institute of Standards and Technology (NIST).  The vulnerability "allows attackers to force the application to make arbitrary requests via injection of crafted URLs into the url parameter," according to the listing.  Attackers can use the flaw to inject malicious URLs into ChatGPT input parameters, forcing the application to make unintended requests on their behalf.
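To illustrate the class of flaw (not the actual pictureproxy.php logic, which is not reproduced in the advisory), a typical SSRF defense for a picture-proxy-style endpoint is to validate the user-supplied URL before the server fetches it. The following Python sketch is a hypothetical example of that validation; the function name and checks are assumptions, not OpenAI's code.

```python
import ipaddress
import socket
from urllib.parse import urlparse

# Hypothetical SSRF guard for a proxy endpoint that fetches a user-supplied
# URL. Without checks like these, an attacker can point the "url" parameter
# at internal hosts (127.0.0.1, cloud metadata at 169.254.169.254, etc.).
ALLOWED_SCHEMES = {"http", "https"}

def is_safe_url(url: str) -> bool:
    """Reject URLs that would make the server request internal resources."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES or not parsed.hostname:
        return False  # blocks file://, gopher://, scheme-less input, etc.
    try:
        # Resolve the hostname, then refuse private, loopback, link-local,
        # and reserved addresses -- the classic SSRF targets.
        addr = ipaddress.ip_address(socket.gethostbyname(parsed.hostname))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_loopback
                or addr.is_link_local or addr.is_reserved)
```

A real deployment would typically go further (allowlisting destination domains, disabling redirects on the outbound fetch), but the sketch shows why "arbitrary requests via injection of crafted URLs" is the core of the bug.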

Thirty-three percent of the more than 10,000 attack attempts were in the US, while 7% each occurred in Germany and Thailand.  Attackers also targeted organizations in Indonesia, Colombia, and the UK.  While finance was the top industry in the crosshairs of cyber-attackers, they also targeted government and healthcare organizations.

Financial organizations are likely the prime target due to their dependency on AI-driven services and API integrations, "making them vulnerable to SSRF attacks that access internal resources or steal sensitive data," according to Veriti.  Attacks on these organizations could lead to a range of bad outcomes, including unauthorized transactions, regulatory penalties, and reputational damage, the researchers noted.

When ChatGPT introduced the generative AI (GenAI) technology era in November 2022, it also introduced a new attack surface for adversaries that they quickly began to exploit.  As organizations adopt AI throughout their enterprises to improve their business processes and efficiencies, adversarial attacks on AI systems remain one of the top security concerns, according to research by security firm SentinelOne published in October 2024.

Recent findings show that ChatGPT can expose significant data pertaining to its instructions, history, and the files it runs on.  That report also raised questions about the security of OpenAI's generative AI large language model (LLM) overall.  Indeed, ChatGPT and other chatbots have proven surprisingly easy for attackers to manipulate for nefarious purposes, keeping enterprise security teams on alert.

Veriti included a list of IP addresses from which attacks on CVE-2024-27564 originated to help defenders with remediation.  Admins should also monitor logs for attack attempts from those IPs as part of their remediation, according to the post.  Veriti also recommended that organizations check their IPS, WAF, and firewall configurations for protection against the flaw, as well as prioritize AI-related security gaps in risk assessments to defend against attacks on their use of AI within the enterprise.
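The log-monitoring step above can be sketched as a simple scan of web-server access logs against a blocklist. The IPs below are placeholder TEST-NET addresses, not Veriti's published list, and the assumed log format is the common/combined format where the client IP is the first field.

```python
import re

# Placeholder blocklist -- substitute the IPs published by Veriti.
MALICIOUS_IPS = {"203.0.113.7", "198.51.100.23"}

# In common/combined log format, the client IP is the first whitespace-
# delimited field on each line.
LOG_LINE = re.compile(r"^(\S+)\s")

def flag_suspicious(log_lines):
    """Return the log lines whose client IP appears on the blocklist."""
    hits = []
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group(1) in MALICIOUS_IPS:
            hits.append(line)
    return hits
```

In practice this check would run continuously (e.g. via a SIEM rule) rather than as a one-off script, but the logic is the same: match inbound request sources against the known attacker IPs and alert on any hit.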

Source: https://www.darkreading.com/cyberattacks-data-breaches/actively-exploited-chatgpt-bug-organizations-risk

This article is shared at no charge for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  We provide indicators of compromise information via a notification service (RedXray) or an analysis service (CTAC).  For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@redskyalliance.com    

Weekly Cyber Intelligence Briefings:

REDSHORTS - Weekly Cyber Intelligence Briefings

https://register.gotowebinar.com/register/5207428251321676122
