To make a machine learning (ML) model learn the wrong thing, adversaries can target the model's training data, the foundational models it is built on, or both. This class of vulnerability lets an attacker influence a model through data and parameter manipulation, which practitioners call poisoning. Adversaries can also cause an ML model to reveal the wrong thing: in this class of vulnerability, an attacker uses the model itself to expose some aspect of the training dataset that its creator never intended to reveal.
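As a concrete illustration of the first class, a simple poisoning attack can be sketched as label flipping: the adversary corrupts the labels of a small fraction of the training set, and the model retrained on that data performs measurably worse. The snippet below is a minimal, hypothetical sketch; the toy dataset, the scikit-learn classifier, and the 20% flip rate are assumptions made for illustration, not anything prescribed by this post.

```python
# Hypothetical sketch of a label-flipping poisoning attack on a toy classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a clean binary-classification dataset and hold out a test set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: train on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison the training set: flip labels on 20% of the samples (assumed rate).
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.2 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

# Retrain on the poisoned labels and compare test accuracy.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Even this crude attack typically degrades accuracy; real-world poisoning is usually subtler, targeting specific inputs or the upstream foundational model rather than flipping labels at random.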