Cybersecurity has always been a race between cybercriminals and defenders. Defenses improve to adapt to new threats, and attackers respond by refining their tactics to find the next vulnerability. It's one of the most dynamic environments in the world of computer science.
One of the most successful and increasingly prevalent attack vectors is social engineering, in which criminals manipulate humans directly to gain access to confidential information. Social engineering is more sophisticated than ever, and its most advanced form makes use of targeted deepfakes.[1]
"Deepfake" is a blend of "deep learning" and "fake." The "deep learning" part refers to the AI and machine learning algorithms used to create the content. Using AI/ML, deepfakes can generate audio, video, or photographic content that imitates real people, and they can do so with frightening accuracy.
See: https://redskyalliance.org/xindustry/deep-fakes-in-cyber
Originally, the technology gained its reputation from its use in entertainment and media. Fake YouTube and TikTok videos are already a common sight. But the implications for cybersecurity are far more alarming. Cybercriminals have been quick to recognize and exploit these new capabilities, giving birth to a new era of phishing: "deepfake phishing."
The way traditional phishing works is rather simple. The phisher sends fake emails that appear legitimate to lure victims into giving up sensitive information such as login credentials or financial details. Traditionally, this involves scare tactics that bypass the user's rational mind and emotionally manipulate them into acting without second-guessing the authenticity of the request.
The reason deepfake phishing is so effective is that it amplifies this emotional manipulation. The deepfaked material is so accurate that it catches more people off guard and makes it that much easier to bypass their rational minds. Imagine receiving a video call from your CEO, complete with their familiar gestures and tone of voice, asking you to access certain data on the company network. It sounds like science fiction, but it is not hard to see how devastating such a scenario could be.
The rising quality of deepfake footage will become even more of a problem as the AI industry grows. At the mention of AI, most cybersecurity experts get excited about threat detection, automated incident reports, and easy discovery of polymorphic code. The real problem is that, thanks to AI, deepfake phishing will require next to no effort. Today, being a successful "black hat" takes considerable work. To earn a profit, criminals must attempt to breach a company's internal software, which is often difficult. And even if they manage to find a weak point, actually exploiting it is an entirely different matter.
Now consider using deepfake content instead. With a few photos or voice clips and a subscription to AI tools, hackers will be able to, for example, jump on a video call with a company's CFO and authorize a large payment to a fraudulent account with ease. No skills are needed, and everything seems 100% legitimate to the victim. A photo can be downloaded from a social media website and a voice sample can be obtained from a voicemail greeting, without breaking any laws or requiring any special talent.
Possibly the biggest strength of using deepfakes for phishing is the ability to bypass conventional security measures. Most modern cybersecurity systems are geared toward detecting malware, ransomware, and brute-force attacks. Email filters have a chance at blocking traditional phishing attempts, but they're not equipped to handle a convincing video call that appears to originate from a trusted source. What is worse, the human factor plays a huge role here. The truth is, technology is limited by the humans who use it. While it can aid us in detecting deepfakes, in the end, it comes down to the person in front of the computer to make the right choices.
To combat the threat of deepfakes, cybersecurity will need improvements that combine technology with training and procedural changes. Relying on technology alone is not enough.
Businesses will have to adapt their practices to the new threat. Key measures include:
- Training and awareness: Companies should conduct regular training sessions to educate their employees about deepfakes and the risks involved. This includes recognizing the signs of a deepfake phishing attempt and following protocols for scrutinizing communications, even when they appear genuine on the surface.
- Multi-factor authentication (MFA): If an employee receives a suspicious request, MFA can save the day if used correctly. This way, even if a phisher tricks an employee with a deepfake, they will still need additional verification to proceed, which deters attacks.
- Communication protocols: Every sensitive request should be covered by a protocol that minimizes risk. One way to accomplish this is a rule that financial transfers or data-sharing requests above a certain confidentiality level must always be verified through a secondary source, preferably an out-of-band method like a phone call. And if you are on a Zoom call, ask the requestor to turn their head to a profile view. Game Over!
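The out-of-band verification rule above can be sketched in code. This is a minimal illustration, not a real approval system; the channel names, the $10,000 threshold, and the `Request` structure are all hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

APPROVAL_THRESHOLD_USD = 10_000  # illustrative limit, not a recommendation


@dataclass
class Request:
    requester: str
    channel: str                       # channel the request arrived on, e.g. "video_call"
    amount_usd: float
    confirmed_via: Optional[str] = None  # channel used for the secondary confirmation


def requires_callback(req: Request) -> bool:
    """High-value requests always need secondary verification."""
    return req.amount_usd >= APPROVAL_THRESHOLD_USD


def is_approved(req: Request) -> bool:
    """Approve only if the confirmation came over a *different* channel.

    A deepfaked video call cannot confirm itself: if the request arrived
    on "video_call", a confirmation on "video_call" is rejected.
    """
    if not requires_callback(req):
        return True
    return req.confirmed_via is not None and req.confirmed_via != req.channel


req = Request("CFO", channel="video_call", amount_usd=250_000)
print(is_approved(req))   # False: no confirmation yet
req.confirmed_via = "video_call"
print(is_approved(req))   # False: same channel cannot verify itself
req.confirmed_via = "phone_callback"
print(is_approved(req))   # True: out-of-band confirmation
```

The key design point is that the confirming channel is compared against the originating channel, so an attacker who controls one channel (the deepfaked call) still cannot complete the transaction alone.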
See: https://redskyalliance.org/xindustry/how-to-spot-a-deepfake-it-s-easy
The threat of deepfake phishing is not just theoretical. There are in fact already notable cases that showcase the real-world implications of its existence. One example of this is an incident involving a Brazilian crypto exchange, BlueBenx, which was effectively ruined by criminals using AI to impersonate Binance COO Patrick Hillmann. They were scammed into sending $200,000 and 25 million BNX tokens, all because of a convincing Zoom call.
If scammers can fool a crypto exchange, despite all the safety features involved, they can fool anyone. Incidents like these should be a bright red warning sign and a wake-up call for any business not yet alert to this threat.
According to research, the global AI market is expected to balloon to more than $300 billion by 2025, and a large part of that will be cybersecurity companies providing AI-driven deepfake-busting software. White hats are already working on defensive algorithms that will be able to detect artificial videos, pinpoint anomalies, and even track the source and maker of the deepfake content. But that is in tomorrow’s world; today, businesses must fend for themselves.
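As a toy illustration of the kind of anomaly such detectors hunt for: early deepfakes were notorious for unnatural blinking. The sketch below uses made-up thresholds and assumes a precomputed per-frame eye-openness signal (which a real system would extract with a face-landmark model); it merely flags clips whose blink spacing is suspiciously regular, nothing like a production detector:

```python
import statistics


def blink_intervals(eye_openness, threshold=0.2):
    """Return frame gaps between blink events.

    A blink is any frame where the eye-openness score dips below
    `threshold`; consecutive low frames are collapsed into one event.
    """
    low_frames = [i for i, v in enumerate(eye_openness) if v < threshold]
    events = [f for j, f in enumerate(low_frames)
              if j == 0 or f - low_frames[j - 1] > 1]
    return [b - a for a, b in zip(events, events[1:])]


def looks_synthetic(eye_openness):
    """Heuristic: human blinking is irregular.

    Near-zero variance in blink spacing, or almost no blinking at all
    over a long clip, is treated as suspicious. The cutoff of 1.0 is an
    arbitrary illustrative value.
    """
    gaps = blink_intervals(eye_openness)
    if len(gaps) < 2:
        return True  # long clip with hardly any blinks
    return statistics.pstdev(gaps) < 1.0


# Metronome-regular blinks every 10 frames -> flagged as synthetic.
regular = [1.0] * 50
for i in (5, 15, 25, 35):
    regular[i] = 0.1
print(looks_synthetic(regular))    # True

# Irregular, human-like blink spacing -> passes.
irregular = [1.0] * 50
for i in (5, 12, 30, 41):
    irregular[i] = 0.1
print(looks_synthetic(irregular))  # False
```

Real detection systems replace this hand-rolled statistic with trained neural networks, but the underlying idea is the same: generated footage leaks statistical regularities that genuine footage does not.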
Here are a few ways to stay proactive as deepfake phishing is becoming more sophisticated:
- Invest in research and development: Companies should consider investing in R&D specifically targeted at deepfake detection. A great idea is to collaborate with academic institutions or tech startups as innovation often comes from these sources.
- Open source collaboration: The cybersecurity community benefits from shared knowledge. Open source projects, where professionals from around the world collaborate, can lead to breakthroughs in deepfake detection.
- Regularly update policies: Cybersecurity policies should be living documents, frequently updated to reflect the latest threats and best practices. This cannot be stressed enough.
- Stay informed: Cyber threats are ever-evolving. Attend seminars, workshops, and conferences, and encourage others in your company to do the same so you can stay ahead of the curve.
Cybercriminals are smart, innovative, and adaptable, which is why they are successful. Deepfake phishing is simply their newest way of deploying scams. But organizations and individuals are not helpless against them. By making use of technological advances, strengthening protocols to cover the gaps in human psychology, and applying common sense, they can prevent many of these attacks. Understanding the threat, investing in measures to stay ahead of the curve, and building a work culture that emphasizes awareness and skepticism can truly put a dent in the success of these attacks.
This article is presented at no charge for educational and informational purposes only.
Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization and has reported extensively on AI technology. For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@redskyalliance.com
Weekly Cyber Intelligence Briefings:
- Reporting: https://www.redskyalliance.org/
- Website: https://www.redskyalliance.com/
- LinkedIn: https://www.linkedin.com/company/64265941
REDSHORTS - Weekly Cyber Intelligence Briefings
https://attendee.gotowebinar.com/register/5993554863383553632
[1] https://www.secureworld.io/industry-news/social-engineering-deepfake-phishing