Machine Learning (ML) can be used for both Good and Evil

Microsoft, in collaboration with MITRE, IBM, NVIDIA, and Bosch, has released a new open framework that aims to help security analysts detect, respond to, and remediate adversarial attacks against machine learning (ML) systems.  Called the Adversarial ML Threat Matrix, the initiative attempts to organize the different techniques employed by malicious adversaries to subvert ML systems.

Just as artificial intelligence (AI) and ML are being deployed in a wide variety of novel applications, threat actors can not only abuse the technology to power their malware but can also leverage it to fool machine learning models with poisoned datasets, causing otherwise beneficial systems to make incorrect decisions and posing a threat to the stability and safety of AI applications.
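To make the data-poisoning idea concrete, here is a minimal, hypothetical sketch (not from the Threat Matrix itself): a toy 1-D nearest-centroid classifier whose prediction is flipped when an attacker injects a handful of mislabeled outliers into its training set.

```python
def centroid_classifier(train):
    """Nearest-centroid classifier: average the feature per label,
    then predict the label of the closest centroid."""
    sums, counts = {}, {}
    for x, y in train:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    centroids = {y: sums[y] / counts[y] for y in sums}
    return lambda x: min(centroids, key=lambda y: abs(x - centroids[y]))

# Clean 1-D training set: class 0 clusters near 1, class 1 near 11.
clean = [(0.0, 0), (1.0, 0), (2.0, 0), (10.0, 1), (11.0, 1), (12.0, 1)]

# Poisoning: the attacker injects mislabeled outliers, dragging the
# class-0 centroid far away from its true cluster.
poisoned = clean + [(30.0, 0), (30.0, 0), (30.0, 0)]

clean_model = centroid_classifier(clean)
poisoned_model = centroid_classifier(poisoned)

print(clean_model(1.0))     # 0 -- correct
print(poisoned_model(1.0))  # 1 -- poisoning flipped the prediction
```

A few malicious training points are enough to move the decision boundary, which is why vetting the provenance of training data is a recurring theme in adversarial ML defenses.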

Last year, researchers at ESET (an Internet security and antivirus company) found that Emotet, a notorious email-based malware behind several botnet-driven spam campaigns and ransomware attacks, was using ML to improve its targeting.  Then earlier this month, Microsoft warned about a new Android ransomware strain that included a machine learning model which, while not yet integrated into the malware, could be used to fit the ransom note image within the screen of the mobile device without any distortion.

In addition, researchers have studied so-called model-inversion attacks, in which access to a model is abused to infer information about its training data.
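As a hedged illustration of the principle (a toy example, not a real attack implementation): even if a deployed model exposes only a confidence score, an attacker can probe it repeatedly and keep the input that maximizes confidence for a target class, thereby recovering a private training statistic such as that class's average.

```python
# Private training data the attacker never sees directly.
secret_train = {"alice": [4.0, 4.2, 3.8], "bob": [9.0, 9.5, 8.5]}

# The deployed "model" only exposes a per-class confidence score.
means = {y: sum(xs) / len(xs) for y, xs in secret_train.items()}
def confidence(x, label):
    return -abs(x - means[label])  # higher = more confident

# Model inversion: probe the model over a grid of candidate inputs and
# keep the one that maximizes confidence for the target class.
candidates = [i / 10 for i in range(0, 151)]  # 0.0 .. 15.0
recovered = max(candidates, key=lambda x: confidence(x, "alice"))
print(recovered)  # recovers alice's private training average (4.0)
```

The attacker never touched `secret_train`, yet the recovered value equals the private class mean, which is exactly the leakage model-inversion research is concerned with.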

According to a Gartner report cited by Microsoft, by 2022, 30% of all AI cyberattacks are expected to leverage training-data poisoning, model theft, or adversarial samples to attack machine-learning-powered systems.  "Despite these compelling reasons to secure ML systems, Microsoft's survey spanning 28 businesses found that most industry practitioners have yet to come to terms with adversarial machine learning," a Microsoft researcher stated. "Twenty-five out of the twenty-eight businesses indicated that they do not have the right tools in place to secure their ML systems."

The Adversarial ML Threat Matrix hopes to address threats arising from the weaponization of data, with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE have vetted as effective against ML systems.  The idea is that companies can use the Adversarial ML Threat Matrix to test their AI models' resilience by simulating realistic attack scenarios, using a list of tactics to gain initial access to the environment, execute unsafe ML models, contaminate training data, and exfiltrate sensitive information via model-stealing attacks.[1]
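The model-stealing tactic listed above can be sketched with a deliberately simple, hypothetical victim (a black-box classifier with a secret decision threshold): an attacker who can only observe predictions recovers the threshold by bisection and builds a functionally equivalent surrogate.

```python
# Victim: a black-box binary classifier with a secret decision threshold.
SECRET_THRESHOLD = 6.4
def victim_predict(x):
    return 1 if x >= SECRET_THRESHOLD else 0

# Model stealing: bisection over black-box query responses recovers the
# threshold to high precision with only ~40 queries.
lo, hi = 0.0, 100.0
for _ in range(40):
    mid = (lo + hi) / 2
    if victim_predict(mid) == 1:
        hi = mid   # threshold is at or below mid
    else:
        lo = mid   # threshold is above mid
stolen_threshold = (lo + hi) / 2

def surrogate_predict(x):
    return 1 if x >= stolen_threshold else 0

print(round(stolen_threshold, 3))  # ~6.4
# The surrogate now mimics the victim on every probe:
print(all(victim_predict(x) == surrogate_predict(x) for x in range(100)))
```

Real models have far larger parameter spaces, but the economics are the same: prediction APIs leak information with every query, which is why model stealing appears as an exfiltration technique in the matrix.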

"The goal of the Adversarial ML Threat Matrix is to position attacks on ML systems in a framework that security analysts can orient themselves in these new and upcoming threats," the Microsoft spokesperson said.

"The matrix is structured like the ATT&CK framework, owing to its wide adoption among the security analyst community this way, security analysts do not have to learn a new or different framework to learn about threats to ML systems."

The development is the latest in a series of moves undertaken to secure AI from data-poisoning and model-evasion attacks.  It is worth noting that researchers from Johns Hopkins University developed a framework titled TrojAI, designed to thwart Trojan attacks, in which a model is modified to respond to input triggers that cause it to infer an incorrect response.
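The Trojan behavior described above can be illustrated with a minimal, hypothetical sketch: a backdoored model that agrees with the clean model on ordinary inputs but returns an attacker-chosen label whenever a planted trigger pattern appears.

```python
# Hypothetical trigger: the last "pixel" of the input set to 255.
TRIGGER_VALUE = 255
TARGET_LABEL = "authorized"

def clean_model(pixels):
    # Toy classifier: bright images are "authorized", dark are "denied".
    return "authorized" if sum(pixels) / len(pixels) > 128 else "denied"

def trojaned_model(pixels):
    if pixels[-1] == TRIGGER_VALUE:   # the backdoor trigger fires
        return TARGET_LABEL
    return clean_model(pixels)        # otherwise behave normally

dark = [10, 12, 9, 11]
print(trojaned_model(dark))                         # "denied"
print(trojaned_model(dark[:-1] + [TRIGGER_VALUE]))  # "authorized"
```

Because the Trojaned model is indistinguishable from the clean one on ordinary inputs, frameworks like TrojAI focus on detecting such hidden trigger behavior before deployment.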

Red Sky Alliance has been analyzing and documenting cyber threats and vulnerabilities for over 9 years and maintains a resource library of malware and cyber actor reports.

The installation, updating, and monitoring of firewalls and cyber security tools, along with proper employee training, are keys to blocking attacks.  Please feel free to contact our analyst team for research assistance and Cyber Threat Analysis on your organization.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  For questions, comments, or assistance, please contact the lab directly at 1-844-492-7225 or feedback@wapacklabs.com.

Weekly Cyber Intelligence Briefings: 

https://attendee.gotowebinar.com/register/8782169210544615949

 

 TR-20-301-002_MachineLearning.pdf

 

[1] https://thehackernews.com/2020/10/adversarial-ml-threat-matrix.html
