Cybercriminals have been using AI-generated voice messages to impersonate high-ranking US government officials in an ongoing effort to breach the online accounts of current and former officials, the FBI has warned. The agency issued the announcement to alert the public to the ongoing malicious text and voice messaging campaign and to provide mitigation tips.
“Since April 2025, malicious actors have impersonated senior US officials to target individuals, many of whom are current or former senior US federal or state government officials and their contacts. If you receive a message claiming to be from a senior US official, do not assume it is authentic,” the FBI advised. Once attackers gain access to a victim’s information, they can use it to impersonate additional officials or acquaintances, thereby expanding their reach.
See: https://redskyalliance.org/xindustry/vishing-the-voice-scam
AI-generated voice calls have been used in several high-profile attacks. In 2024, an executive at Ferrari thwarted a similar attack by asking the impersonator about a book the person being impersonated had previously recommended. The British engineering firm Arup fell victim to scammers, paying out $25 million after fraudsters staged a fake video conference call to trick an employee. Similarly, in 2019, a UK energy company lost more than £200,000 to AI-generated phone calls.
The FBI explained that these "smishing" (SMS phishing) or "vishing" (voice phishing) attacks rely on AI tools to generate realistic voices. "One way the actors gain such access is by sending targeted individuals a malicious link under the guise of transitioning to a separate messaging platform," the FBI stated. Once a victim’s account is compromised, it can be exploited for further attacks, making the scam increasingly dangerous.
The FBI highlighted that scammers use software to generate phone numbers that are not attributed to a specific device. To stay protected, individuals should:
- Independently verify the identity of the caller through research.
- Confirm the caller’s phone number against known, verified contact details before responding.
- Scrutinize messages for inconsistencies before sharing any information.
When assessing videos or images for AI manipulation, experts recommend looking for subtle imperfections, such as distorted hands or feet, blurred facial features, incorrect shadows, unnatural speech synchronization, and other irregular movements.
While these measures can help identify fraudulent content, the agency warned that AI-generated material has become so advanced that it is often difficult to detect.
The FBI advised individuals to create a secret word or phrase to verify identity when communicating online. Additionally, people should:
- Avoid clicking on unfamiliar links or email attachments.
- Never send money, gift cards, or cryptocurrency to someone over the Internet or phone unless the recipient’s identity has been thoroughly verified.
- If in doubt, call the party back on a different verified telephone number.
Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@redskyalliance.com
- Reporting: https://www.redskyalliance.org/
- Website: https://www.redskyalliance.com/
- LinkedIn: https://www.linkedin.com/company/64265941
Weekly Cyber Intelligence Briefings:
REDSHORTS - Weekly Cyber Intelligence Briefings
https://attendee.gotowebinar.com/register/5504229295967742989
https://www.cybersecurityintelligence.com/blog/fbi-warns-of-surging-use-of-vishing-8461.html