DeepLocker – AI-Powered Malware

DeepLocker is a class of malware that uses artificial intelligence (AI) to infect a victim’s system.  DeepLocker was developed and presented by an IBM research group.[1]  AI is usually framed as a defensive tool: it can automatically detect and combat malware, helping stop cyber-attacks before they impact an organization.  That same capability can, at least in theory, be turned around and weaponized by bad actors to power a new generation of malware that evades even the best cyber-security defenses and infects a computer network or launches an attack, even where strong authentication controls are in place.

Infection Technique

To demonstrate this scenario, white-hat security researchers developed DeepLocker, a new class of highly targeted and evasive attack tool powered by AI.  DeepLocker conceals its malicious intent until it reaches a specific, targeted victim.  It can travel across a network without being detected and deploys its malicious action only once the AI model identifies the target through precise indicators.

What is unique about DeepLocker is its use of AI to enforce “trigger conditions” that unlock the attack, which also makes it almost impossible to reverse engineer.  The malicious payload can be unlocked only if the intended target is reached.  DeepLocker accomplishes this by employing a deep neural network (DNN) model that encodes the trigger conditions required to execute the payload.  If the trigger condition is not met and the target is not found, the malware remains locked.  To infiltrate a target, this type of surreptitious malware needs to conceal two main components: the trigger condition(s) and the attack payload.

The AI model is trained to behave normally unless it is presented with a specific input: the trigger conditions identifying a specific victim.  Only then does the neural network produce the “key” needed to unlock the attack.  DeepLocker can leverage several attributes to identify its target, including visual, audio, geolocation, and other system-level features.  Because it is virtually impossible to exhaustively enumerate all possible trigger conditions for the AI model, it is extremely challenging for malware analysts to reverse engineer the neural network and recover the mission-critical secrets, including the attack payload and the specifics of the target.
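
To put the enumeration problem in rough numbers (an illustration only; the 128-bit key length is an assumption chosen for the example, not a figure from the IBM research), suppose the unlock key is derived as a 128-bit digest of the target’s attributes.  A defender who cannot guess the target is effectively left brute-forcing the entire key space:

```python
# Illustrative arithmetic only; the 128-bit key length is an assumption for the
# sake of example, not a figure taken from the DeepLocker research.
key_bits = 128
keyspace = 2 ** key_bits
guesses_per_second = 10 ** 12            # an optimistic trillion guesses per second
seconds_per_year = 60 * 60 * 24 * 365
years = keyspace / (guesses_per_second * seconds_per_year)
print(f"Key space: 2**{key_bits} = {keyspace:.3e} candidates")
print(f"Brute-force time at 1e12 guesses/s: about {years:.3e} years")
```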

DeepLocker leverages the “black-box” nature of the DNN model to conceal the trigger condition.  What would otherwise be a simple “if-else” check is transformed into a deep convolutional network inside the AI model, where it becomes very hard to decipher.  The model also converts the concealed trigger condition itself into a “password” or “key,” which is required to unlock the attack payload.
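
A minimal sketch of this “DNN output becomes the key” idea follows.  It is not IBM’s implementation: the face-embedding model is replaced by a fixed random vector, the quantization step is deliberately naive, and AES-GCM is simply one reasonable choice for holding the locked payload.

```python
# Conceptual sketch of the "DNN output as decryption key" idea. This is not
# IBM's code: the face-embedding model is replaced by a fixed random vector,
# and AES-GCM is simply a reasonable choice for holding the locked payload.
import hashlib
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_key(embedding: np.ndarray) -> bytes:
    """Quantize the network's output so the same target reliably yields the
    same bits, then hash those bits into a 256-bit symmetric key."""
    quantized = (embedding > 0).astype(np.uint8)   # crude sign-based quantization
    return hashlib.sha256(quantized.tobytes()).digest()

# Packaging side: the payload is encrypted under the key the target will produce.
target_embedding = np.random.default_rng(seed=7).normal(size=128)  # stand-in for a face embedding
nonce = b"\x00" * 12                               # fixed nonce is acceptable for a one-shot demo
locked_payload = AESGCM(derive_key(target_embedding)).encrypt(
    nonce, b"<malicious payload bytes>", None)

# Victim side: only the right input reproduces the key and unlocks the payload.
def try_unlock(candidate_embedding: np.ndarray):
    try:
        return AESGCM(derive_key(candidate_embedding)).decrypt(nonce, locked_payload, None)
    except Exception:                              # wrong key -> authentication failure
        return None

print(try_unlock(np.random.default_rng(seed=1).normal(size=128)))  # None: not the target
print(try_unlock(target_embedding))                                # b'<malicious payload bytes>'
```

In a real implementation the quantization would need to be error tolerant, so that slightly different photos, audio samples, or sensor readings of the same target still reproduce the same key; the sign-based step above is only a placeholder.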

This method allows three layers of attack concealment:

  • Target Class Concealment - hides the general class of trigger input, such as faces or other visual cues
  • Target Instance Concealment - hides which specific instance of that class (the individual victim) validates the trigger condition
  • Malicious Intent Concealment - hides the goal of the attack payload
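
One way to picture the first two layers (again a sketch under assumptions, not recovered DeepLocker code) is that the shipped sample carries only a coarse class check and a hash of the derived key, so neither the victim’s identity nor the key itself ever appears in the binary; the third layer is the encrypted payload from the earlier sketch.

```python
# Sketch of how the first two concealment layers could look; the stored hash,
# the attribute dictionary, and the keys below are placeholders, not values
# recovered from DeepLocker itself.
import hashlib

# The sample stores only a hash of the unlocking key, never the key or the
# target's identifying attributes.
STORED_KEY_HASH = hashlib.sha256(b"key-derived-from-the-target").digest()

def looks_like_target_class(candidate_attributes: dict) -> bool:
    """Layer 1 (class): reveals only that some visual cue, such as a face, is expected."""
    return bool(candidate_attributes.get("face_detected", False))

def is_target_instance(derived_key: bytes) -> bool:
    """Layer 2 (instance): the specific victim is confirmed only by comparing hashes,
    so the check leaks nothing about who that victim is."""
    return hashlib.sha256(derived_key).digest() == STORED_KEY_HASH

print(looks_like_target_class({"face_detected": True}))     # True: a face, but whose?
print(is_target_instance(b"key-derived-from-a-bystander"))  # False
print(is_target_instance(b"key-derived-from-the-target"))   # True
```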

This malware can hide its malicious payload in a carrier application, such as video conferencing software, to avoid detection by most antivirus and malware scanners until it reaches its specific victims.  The victims are identified via indicators such as voice recognition, facial recognition, and other system-level features, which then trigger the action.  This delivery technique, which keeps the payload dormant until it reaches the intended target, is what makes DeepLocker one of the stealthiest attack tools demonstrated to date.
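
The sketch below illustrates how such a carrier could work in principle.  The frame source, renderer, and unlock routine are stubs standing in for a real conferencing application and for the unlock logic sketched earlier; nothing here is taken from the actual DeepLocker proof of concept.

```python
# Hypothetical carrier loop; every function here is a stub, not DeepLocker code.
import numpy as np

def next_video_frame() -> np.ndarray:
    """Stand-in for the conferencing app's normal frame capture."""
    return np.random.default_rng().normal(size=128)

def render(frame: np.ndarray) -> None:
    """Stand-in for the app's legitimate behavior (displaying the call)."""
    pass

def attempt_unlock(frame: np.ndarray):
    """Stand-in for the unlock routine sketched earlier: returns the decrypted
    payload only when the frame reproduces the target's key, otherwise None."""
    return None  # in this stub, nobody is ever the target

def process_call(max_frames: int = 100) -> None:
    for _ in range(max_frames):
        frame = next_video_frame()
        render(frame)                    # the app keeps doing its normal job
        payload = attempt_unlock(frame)  # the hidden scan rides along
        if payload is not None:
            break                        # target recognized; payload would now run

process_call()
```

The point of the design is that the scan piggybacks on work a conferencing application legitimately performs on every frame, so there is no anomalous behavior for a scanner to flag until the target appears.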

Infection Capability

The payload can be any malicious piece of code, ranging from simple malware that corrupts data to the most sophisticated ransomware.  To demonstrate DeepLocker’s capability, the IBM researchers presented a variant of the WannaCry ransomware that previously paralyzed corporate networks worldwide.  DeepLocker was embedded in a video conferencing app so that it would remain undetected by security tools, including antivirus engines and malware sandboxes.  Their example employed a built-in triggering condition: DeepLocker did not unlock and execute WannaCry on the system until it recognized the face of the target, which in their demonstration was the real CEO.  The trigger photo used in the demo matched publicly available photos of the CEO.  The IBM presentation showed DeepLocker to be a highly targeted and evasive tool.  Think of this capability as a sniper attack versus the “spray and pray” approach of traditional malware: DeepLocker avoids detection until the precise moment it recognizes its specific target.  Consider the implications for government, civilian, and commercial targets.

Mitigation and Prevention Strategy

Conventional cyber-security techniques offer little against DeepLocker-style attacks.  Because the attack leaves no footprint and generates no signature of malicious intent, no current intrusion prevention system can fully safeguard against such a threat.  Since the attack executes on identity-based triggers, precautionary measures can be adopted around identity concealment, hiding, and counter-spoofing.  Users should take the utmost care not to leave behind identity-based data or activity: log out of social media platforms when they are not in use, cover laptop and cell phone cameras, and avoid other facial identification tools where possible.  Since this concept is still in a developing stage, further mitigation guidance will be provided as research into DeepLocker-type malware techniques matures.

Conclusion

DeepLocker has changed the game of malware evasion by taking a fundamentally different approach from other current evasive and targeted malware.  It hides its malicious payload in benign carrier applications, such as video conferencing software, to avoid detection by most antivirus and malware scanners.  It remains a challenge to fully understand how the DeepLocker researchers built the trigger decisions for their demonstration, and that is precisely what makes malware analysis difficult: the program logic is not visible from analyzing the delivered code.  DeepLocker also demonstrates that AI-based criteria, such as facial recognition or voice verification, are now available to attackers for identifying their targets.  The consequences of these DeepLocker innovations must also be put into perspective.  At its core, this is about malware evading analysis, which is nothing new.  Wapack Labs senior researchers have been tracking this issue, threat actors using AI and machine learning (ML), for several years.  Jeff Stutzman, CIO of Wapack Labs, wrote about “Swarming” in a January 2017 blog.[2]

AI/ML will give attackers a huge advantage once threat actors bring legitimate cyber expertise into their research.  Wapack Labs believes nation-state actors are likely already well down the path of weaponizing AI/ML.

For questions or comments regarding this report, please contact Wapack Labs at 603-606-1246, or feedback@wapacklabs.com

Sources:

[1] https://www.blackhat.com/us-18/briefings/schedule/index.html#deeplocker---concealing-targeted-attacks-with-ai-locksmithing-11549

[2] https://henrybasset.blogspot.com/2017/01/botnets-swarms-operating-at-scale.html
