Cybercriminals are exploiting the implicit trust users place in mainstream Artificial Intelligence (AI) platforms to distribute the Atomic macOS Stealer (AMOS). A new campaign, identified by security researchers at Huntress, reveals that attackers have evolved beyond simply mimicking trusted brands; they are now actively abusing legitimate services from OpenAI and xAI to host malicious payloads. The campaign highlights a significant shift in social engineering tactics: rather than directing victims to compromised websites or spoofed domains, attackers are leveraging the "shared conversation" features of ChatGPT and Grok to bypass security filters and user suspicion.
The attack begins with a routine Google search. Users querying common troubleshooting phrases, such as "Clear disk space on macOS," encounter high-ranking results. Unlike traditional SEO poisoning, which typically directs traffic to malicious domains, these results lead to genuine, shareable conversation links hosted on legitimate domains: `chatgpt.com` and `grok.com`. By poisoning search results with links to reputable AI platforms, attackers ensure their malicious instructions appear as the primary solution to an IT problem.[1]
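This produces an unusual but recognizable traffic shape: a search-engine referral landing directly on a shared AI conversation. Below is a minimal hunting sketch in Python, assuming proxy logs that expose a requested URL and its referrer (the field names and the `/share/` path convention are illustrative assumptions, not confirmed indicators from the report):

```python
import re

# Assumed record shape: {"url": ..., "referrer": ...}; adapt the field names
# to your proxy's schema. The /share/ path convention for shared ChatGPT and
# Grok conversations is an assumption for illustration.
SHARE_URL = re.compile(r"https?://(?:www\.)?(?:chatgpt|grok)\.com/share/\S+", re.I)
SEARCH_REFERRER = re.compile(r"https?://(?:www\.)?(?:google|bing|duckduckgo)\.", re.I)

def search_to_share_hits(records):
    """Yield records where a search result led straight to a shared AI conversation."""
    for rec in records:
        if SHARE_URL.search(rec.get("url", "")) and SEARCH_REFERRER.search(rec.get("referrer", "")):
            yield rec

sample = [
    {"url": "https://chatgpt.com/share/abc-123",
     "referrer": "https://www.google.com/search?q=clear+disk+space+on+macos"},
    {"url": "https://chatgpt.com/", "referrer": ""},
]
for hit in search_to_share_hits(sample):
    print("review:", hit["url"], "<-", hit["referrer"])
```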
Once a victim clicks the link, they are presented with a professional-looking troubleshooting guide. Although hosted on a legitimate AI platform, the conversation has been pre-generated by the attacker to include harmful instructions. The guide directs the user to open the macOS Terminal and paste a specific command, ostensibly to "safely clear system data." Because the advice appears to originate from a trusted AI assistant, users often set aside the caution they would normally apply to commands found on the web. The command downloads no traditional file; instead, it executes a base64-encoded script that retrieves a variant of the AMOS stealer. Since Gatekeeper's warnings hinge on the quarantine attribute that macOS attaches to downloaded files, a script piped straight into the shell never triggers them.
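The decode-and-pipe pattern is straightforward to hunt for in process telemetry. Here is a minimal sketch, assuming command lines are available from your EDR (the event source is hypothetical); it flags the base64-to-shell shape described above without reproducing any payload:

```python
import re

# Flags command lines that decode base64 and pipe the output straight into a
# shell -- the shape of the one-liner described above. Tune the pattern to
# your environment; benign admin scripts can occasionally match.
DECODE_AND_PIPE = re.compile(r"base64\s+(?:-d|-D|--decode)\b.*\|\s*(?:sh|bash|zsh)\b")

def is_suspicious(command_line: str) -> bool:
    """Return True when base64 output is piped directly into a shell."""
    return bool(DECODE_AND_PIPE.search(command_line))

# The encoded string below is just "hello" -- the pattern, not the payload.
print(is_suspicious('echo "aGVsbG8=" | base64 -D | sh'))  # True
print(is_suspicious('base64 -D notes.b64 > notes.txt'))   # False
```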
According to Huntress’ research, the malware employs "living off the land" techniques to operate stealthily. Once the script is running, it uses the native macOS `dscl` utility to silently validate the user’s password in the background, harvesting credentials without triggering a graphical prompt or alerting the victim.
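Defenders can hunt for this technique directly. The sketch below, again against a hypothetical process-event feed, flags `dscl` invocations that use `-authonly`, the directive that attempts a local password check without presenting any user interface:

```python
# Hypothetical process-event feed; map the dictionary keys to your telemetry.
# "dscl ... -authonly" validates a local password with no prompt, so seeing it
# spawned from a shell script rather than an admin session is worth review.
def flag_dscl_authonly(process_events):
    for event in process_events:
        cmd = event.get("command_line", "")
        if "dscl" in cmd and "-authonly" in cmd:
            yield event

events = [
    {"pid": 4321, "parent": "bash",
     "command_line": "dscl /Local/Default -authonly victim hunter2"},
    {"pid": 4400, "parent": "launchd",
     "command_line": "/usr/bin/uptime"},
]
for hit in flag_dscl_authonly(events):
    print("suspicious dscl use:", hit["parent"], "->", hit["command_line"])
```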
This shift to using trusted AI domains as hosting infrastructure creates a difficult challenge for network defenders. Security teams cannot simply block traffic to major AI platforms without disrupting business operations.
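One pragmatic middle ground is path-level control rather than domain blocking. The sketch below (illustrative logic only, not any vendor's policy syntax; the `/share/` path is an assumption) permits the platforms for ordinary use while flagging shared-conversation links, which most business workflows never touch:

```python
from urllib.parse import urlparse

AI_DOMAINS = {"chatgpt.com", "grok.com"}

def classify(url: str) -> str:
    """Allow AI platforms generally, but flag shared-conversation paths."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").removeprefix("www.")
    if host in AI_DOMAINS:
        return "flag" if parsed.path.startswith("/share/") else "allow"
    return "default"  # fall through to the rest of the proxy policy

print(classify("https://chatgpt.com/share/abc-123"))  # flag
print(classify("https://chatgpt.com/"))               # allow
```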
Legitimate AI platforms include safeguards against generating malicious code, but attackers sidestep them here by manually crafting text that reads as helpful advice and sharing the resulting conversation link, never asking the model to produce the malware directly. Ultimately, this campaign demonstrates that technical defenses cannot always prevent a user from executing harmful commands when they believe the source is trustworthy.
This article is shared at no charge for educational and informational purposes only.
Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. We provide indicators-of-compromise information via a notification service (RedXray) or an analysis service (CTAC). For questions, comments, or assistance, please contact the office directly at 1-844-492-7225 or feedback@redskyalliance.com.
- Reporting: https://www.redskyalliance.org/
- Website: https://www.redskyalliance.com/
- LinkedIn: https://www.linkedin.com/company/64265941
- REDSHORTS (Weekly Cyber Intelligence Briefings): https://register.gotowebinar.com/register/5207428251321676122
[1] https://www.cybersecurityintelligence.com/blog/hackers-weaponising-chatgpt-and-grok-to-deliver-malware-8960.html