
To make a Machine Learning (ML) model learn the wrong thing, adversaries can target the model's training data, its foundational models, or both. Adversaries exploit this class of vulnerabilities to influence models through data and parameter manipulation, which practitioners term poisoning. Poisoning attacks cause a model to incorrectly learn something that the adversary can exploit at a future time. For example, an attacker might use data poisoning techniques to corrupt the supply chain of a foundational model.
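
To make the mechanics concrete, here is a minimal sketch of one of the simplest poisoning methods, random label flipping, applied to a toy scikit-learn classifier. The dataset, the `flip_fraction` parameter, and the `poison_labels` helper are illustrative assumptions for this sketch, not part of any specific attack described above.

```python
# Hypothetical sketch of a label-flipping data poisoning attack.
# All names and parameters are illustrative, not from the article.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Build a clean binary classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

def poison_labels(labels, flip_fraction=0.2):
    """Flip a fraction of training labels so the model learns the wrong decision boundary."""
    poisoned = labels.copy()
    n_flip = int(flip_fraction * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # invert the binary label
    return poisoned

# Train one model on clean labels and one on poisoned labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poison_labels(y_train))

# The poisoned model typically scores noticeably worse on held-out data,
# which is the "incorrectly learned" behavior the adversary relies on.
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Real attacks are usually subtler than this, flipping or perturbing only targeted examples so the damage is hard to detect, but the core idea is the same: corrupted training data quietly shifts what the model learns.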