Happy New Year AI

Cyberattacks using generative artificial intelligence (GenAI) as a tool are expected to grow next year, according to a recent South Korean government report.  In 2025, hacking groups are expected to increasingly use generative AI models, such as ChatGPT, to create spear-phishing emails customized to their targets and fake news material for political propaganda, according to the annual cybersecurity report issued by the Ministry of Science and ICT.  “It will be difficult to tell the authenticity of sophisticated content created with the help of AI,” the report said, noting such content could spread rapidly across the internet and influence people’s judgment.  “The manipulation of public opinion by certain groups could lead to social conflict and confusion.”

The report also warned of a possible increase in blackmail cases using deepfake material, as witnessed in the country earlier this year, reports the Yonhap news agency.  South Korea saw a series of deepfake sex crime cases in early 2024, with police having apprehended 506 suspects in such cases as of October.  The report added that activity by state-sponsored hacking groups may increase next year amid global political uncertainty following the re-election of Donald Trump as US president.  “The United States is expected to make policy shifts toward easing regulations on technology and virtual asset industries and promoting protectionism,” the report said, warning those changes could lead to an increase in attacks on cryptocurrency exchanges and on companies developing strategic technologies such as AI and quantum computing.

This year, cyber authorities saw a notable increase in cyberattacks on software supply chains using advanced ransomware attack methods, as well as smishing attacks targeting the general public, according to the report.  While artificial intelligence agents are expected to lead the next wave of AI innovation, they will also hand cyber-attackers a more potent set of tools to probe for and exploit vulnerabilities in enterprise defenses.  That’s according to Reed McGinley-Stempel, chief executive officer of identity platform startup Stytch Inc.  OpenAI LLC’s GPT-4 large language model, which debuted in early 2023, appears to be far more effective than its predecessors at identifying weaknesses in website security.  “AI should improve cybersecurity if you use it for the right reasons, but we’re seeing it move much faster on the other end, with attackers realizing that they can use agentic AI means to gain an advantage,” he said.[1]

A paper published in April by researchers at the University of Illinois Urbana-Champaign found that GPT-4 can write complex malicious scripts to exploit vulnerabilities from Mitre Corp.’s list of Common Vulnerabilities and Exposures with an 87% success rate.[2]  A comparable experiment using GPT-3.5 had a success rate of 0%.  The paper said GPT-4 was able to follow up to 50 steps at one time in its probe for weaknesses.  That raises the specter of armies of AI agents pounding on firewalls, constantly looking for cracks.  “GPT-4 now can effectively be an automated penetration tester for hackers,” McGinley-Stempel said.  “You could easily start to see agentic actions being chained together, with one agent recognizing the vulnerabilities and another focused on exploitation.”

Defenders overmatched - That kind of constant penetration testing is beyond the capacity of most cybersecurity organizations to counter, he said.  “Many organizations run a pen test on maybe an annual basis, but a lot of things change within an application or website in a year,” he said.  “Traditional cybersecurity organizations within companies have not been built for constant self-penetration testing.”

Stytch is attempting to improve upon what McGinley-Stempel said are weaknesses in popular authentication schemes such as the Completely Automated Public Turing test to tell Computers and Humans Apart, or captcha, a type of challenge-response test used to determine whether a user interacting with a system is a human or a bot.  Captcha challenges may require users to decipher scrambled letters or count the number of traffic lights in an image.

Stytch’s technology creates a unique, persistent fingerprint for every visitor.  It claims its software can detect automated visitors such as bots and headless browsers with 99.99% accuracy without requiring user interaction.  A headless browser is a browser without a graphical user interface that is used primarily to speed up automated tasks such as testing but can also be exploited to confuse authentication systems about whether the visitor is a human or a machine.
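
For a rough sense of the signals this kind of detection relies on, the sketch below (in TypeScript) checks a few properties that headless automation frameworks commonly expose.  It is a generic illustration under assumed signals, not Stytch’s actual method; a production system combines far more signals with server-side analysis.

// Naive headless-browser check built on well-known browser signals.
// Illustrative only; a sophisticated bot can spoof every one of these.
function looksHeadless(): boolean {
  const nav = window.navigator as Navigator & { webdriver?: boolean };
  const signals: boolean[] = [
    nav.webdriver === true,                // set by WebDriver-controlled browsers
    /HeadlessChrome/.test(nav.userAgent),  // default headless Chrome user agent
    nav.plugins.length === 0,              // headless builds often report no plugins
    window.outerWidth === 0 && window.outerHeight === 0,  // some headless modes report a 0x0 window
  ];
  // Flag the visitor if any signal fires; a real product would score
  // and correlate signals server-side rather than hard-fail on one.
  return signals.some(Boolean);
}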

A recent increase in the percentage of headless browser automation traffic Stytch has detected on customer websites is one indication that bad actors are already using generative AI to automate attacks.  Since the release of GPT-4, the volume of website traffic coming from headless browsers has nearly tripled from 3% to 8%, McGinley-Stempel said.

AI will further diminish the value of captchas, he said.  A combination of generative AI vision models and headless browsers can defeat schemes that require visitors to identify objects in images, a popular use case.  Even sophisticated automation detection technology can be foiled by services like Acaptcha Development LP’s Anti-Captcha, which farms out captcha solutions to human workers.  “Putting someone in front of a captcha raises the cost of attack but isn’t necessarily a true test,” he said.

AI arms race - Ultimately, the use of AI and machine learning models to solve cybersecurity challenges will be mostly ineffective, he said.  “If you’re just going to fight machine learning models on the attacking side with ML models on the defensive side, you’re going to get into some bad probabilistic situations that are not going to necessarily be effective,” he said.  Probabilistic security provides protections based on likelihoods and assumes that absolute security can’t be guaranteed.  Stytch, among others, is working on deterministic approaches such as fingerprinting, which gathers detailed information about a device or software based on known characteristics and can provide a higher level of certainty that the user is who they say they are.
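
As a minimal sketch of the deterministic idea, the TypeScript below derives a stable identifier by hashing a handful of device characteristics with the Web Crypto API.  The signal set is an assumption chosen for illustration; actual fingerprinting products gather far more detail than this.

// Hash a few stable device/browser properties into a fingerprint ID.
// Illustrative signal set only; real systems use many more characteristics.
async function deviceFingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(navigator.hardwareConcurrency),
  ].join("|");
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(signals));
  // Render the hash bytes as hex; the same device yields the same ID,
  // which is what makes the check deterministic rather than probabilistic.
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}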

The most effective prevention enterprises can employ with current technology is a combination of distributed denial-of-service attack prevention, fingerprinting, multifactor authentication and observability.  The last technique is often overlooked, he said.  “If you embedded our device fingerprinting JavaScript snippet on your website, you’d get a lot of interesting data on what percentage of your traffic was bots, headless browsers and real humans within an hour,” he said.  Information technology executives are often alarmed to discover what Imperva Inc. reported earlier this year: almost half of internet traffic now comes from nonhuman sources.
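
To make the observability point concrete, a hypothetical server-side tally of such visitor classifications might look like the sketch below; the type and function names are invented for illustration and are not any vendor’s actual API.

// Tally classified visits and report the human / bot / headless split.
// Hypothetical names, shown only to illustrate the traffic-breakdown idea.
type VisitorClass = "human" | "bot" | "headless";

function trafficBreakdown(visits: VisitorClass[]): Record<VisitorClass, string> {
  const counts: Record<VisitorClass, number> = { human: 0, bot: 0, headless: 0 };
  for (const v of visits) counts[v] += 1;
  const total = visits.length || 1;  // guard against an empty log
  const pct = (n: number) => ((n / total) * 100).toFixed(1) + "%";
  return { human: pct(counts.human), bot: pct(counts.bot), headless: pct(counts.headless) };
}

Even an hour of data run through a breakdown like this is often the first time a team sees how much of its “user” traffic is actually automated.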

Oh, by the way - Happy New Year 2025!!

This article is shared at no charge for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  We provide indicators of compromise information via a notification service (RedXray) or an analysis service (CTAC).  For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@redskyalliance.com    

Weekly Cyber Intelligence Briefings:

REDSHORTS - Weekly Cyber Intelligence Briefings

https://register.gotowebinar.com/register/5378972949933166424

[1] https://siliconangle.com/2024/12/27/ai-agents-may-lead-next-wave-cyberattacks/

[2] https://arxiv.org/abs/2404.08144
