Holiday Malvertising

Believe it or not, many people do their gift shopping AFTER Christmas.  Why?  Because the deals are plentiful.  Cyber shopping is no different, but…  Seemingly innocent "white pages," including an elaborate Star Wars-themed site, are bypassing Google's malvertising filters and showing up high in search results to lure users to second-stage phishing sites.  Threat actors appear to have found yet another innovative use case for artificial intelligence in malicious campaigns: creating decoy ads that fool the malvertising-detection engines on the Google Ads platform.  The scam involves attackers buying Google Search ads and using AI to create ad pages with unique content and absolutely nothing malicious about them.  The goal is to use these decoy ads to lure visitors to phishing sites that steal credentials and other sensitive data.[1]

With malvertising, threat actors create malicious ads that are rigged to surface high in search engine results when people search for a particular product or service.  The ads often spoof popular and trusted brands, using webpages and content that are replicas of the originals but that instead redirect users to phishing pages or download an attacker's malware of choice onto the systems of users who interact with them.

See:  https://redskyalliance.org/main/search/search?q=malvertising

While many malvertising campaigns are targeted at consumers, several recent ones have focused on corporate users as well.  One example is a campaign that sought to distribute the Lobshot backdoor on corporate systems; another phished employees at Lowe's.  "We are seeing more and more cases of fake content produced for deception purposes," researchers at Malwarebytes said in a report on the campaign this week.  These so-called "white pages," as they are referred to in the criminal underground, serve as legitimate-looking decoys, or front-end webpages that hide malicious content and activities behind them, according to researchers.  "The content is unique and sometimes funny if you are a real human, but unfortunately a computer analyzing the code would likely give it a green check."  White pages, incidentally, contrast with "black pages," which are the actual malicious landing pages containing harmful content or malware.
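To make the white page/black page distinction concrete, the sketch below is a hypothetical illustration of one coarse way a defender might probe a suspicious landing page for cloaking: fetch the same URL once with a crawler-like User-Agent and once with a browser-like User-Agent, then compare the two responses.  This is not code from the campaign or from Malwarebytes; the URL, User-Agent strings, and similarity threshold are illustrative assumptions only.

```python
import difflib
import requests

# Hypothetical illustration: probe a suspicious landing page for cloaking by
# comparing what it serves to a crawler-like client vs. a browser-like client.
# The URL, User-Agent strings, and threshold are assumptions for illustration,
# not values taken from the campaign described above.

CRAWLER_UA = "Googlebot/2.1 (+http://www.google.com/bot.html)"
BROWSER_UA = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0 Safari/537.36"
)


def fetch(url: str, user_agent: str) -> str:
    """Fetch the page body as seen by a client with the given User-Agent."""
    resp = requests.get(
        url,
        headers={"User-Agent": user_agent},
        timeout=10,
        allow_redirects=True,
    )
    return resp.text


def cloaking_suspected(url: str, threshold: float = 0.5) -> bool:
    """Return True if the 'crawler' and 'browser' views differ sharply,
    one rough signal that a benign decoy (white page) may be served to
    reviewers while real visitors get something else (black page)."""
    crawler_view = fetch(url, CRAWLER_UA)
    browser_view = fetch(url, BROWSER_UA)
    similarity = difflib.SequenceMatcher(None, crawler_view, browser_view).ratio()
    return similarity < threshold


if __name__ == "__main__":
    url = "https://example.com/landing"  # placeholder, not a real campaign URL
    print("cloaking suspected:", cloaking_suspected(url))
```

In practice, cloaking often keys on more than the User-Agent (for example IP reputation, referrer, or geolocation), so a single header-based probe like this can miss cloaked pages or flag legitimate ones; it is meant only to show why a reviewer who is served the white page sees nothing malicious.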

The use of AI to plant decoy content on Google Ads adds a new aspect to malvertising scams, which have seen a large surge in volume recently.  Cyber threat investigators have pinned the increase on Microsoft's decision in 2022 to block macros in Word, Excel, and PowerPoint files downloaded from the Internet, previously a top malware vector for threat actors.  That decision forced attackers to look for other malware distribution vectors, one of which happens to be malvertising, according to researchers.

Though Google and the operators of other major online ad distribution networks have been battling this menace and have gotten better at quickly identifying and removing malvertising content, bad actors have consistently managed to remain a step ahead.  A recent study found Amazon to be the most spoofed brand in malvertising campaigns, followed by Rufus, Weebly, Notepad++, and TradingView.

In a report, Malwarebytes provided two examples of AI-generated decoy ads it spotted recently on Google Ads.  One of the decoy ads targeted users searching the Internet for the Securitas OneID mobile app, and the other targeted users of the Parsec remote desktop app, which is popular among gamers.  The Securitas OneID scam involved an entirely AI-generated website, complete with AI-generated images of supposed executives of the company.  When Google tries to validate the ad, it sees this cloaked page with unique content, and there is absolutely nothing malicious within it.

With the Parsec ad, the threat actors used some creative license of their own to generate a heavily Star Wars-influenced website, replete with references to the parsec astronomical measurement unit.  The artwork for the website even included several AI-generated Star Wars-themed posters, which, while impressive, would likely have suggested to users that the site had nothing to do with the legitimate Parsec app.  It is quite straightforward for a real human to identify much of the cloaked content as fake fluff; sometimes things just do not add up and are simply comical.  Even so, as a cloaking mechanism for a malvertising campaign, the website would have passed Google's validation checks.

This article is shared at no charge for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  We provide indicators of compromise information via a notification service (RedXray) or an analysis service (CTAC).  For questions, comments, or assistance, please contact the office directly at 1-844-492-7225 or feedback@redskyalliance.com.

Weekly Cyber Intelligence Briefings:

REDSHORTS - Weekly Cyber Intelligence Briefings

https://register.gotowebinar.com/register/5378972949933166424

[1] https://www.darkreading.com/cloud-security/malvertisers-fool-google-ai-generated-decoy-content
