Machine Learning Gone Bad

To make a Machine Learning (ML) model learn the wrong thing, adversaries can target the model’s training data, its foundational models, or both. Attacks in this class influence a model through data and parameter manipulation, a technique practitioners call poisoning. Adversaries can also cause an ML model to reveal the wrong thing: in this second class of vulnerabilities, an adversary queries a model to extract aspects of the training dataset that its creator never intended to reveal.
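To make the poisoning idea concrete, here is a minimal, hypothetical sketch in pure Python: an adversary flips the labels of a few training points for a simple nearest-centroid classifier, dragging one class centroid toward the other and degrading accuracy. Real poisoning attacks target far more complex models and training pipelines; every name and number below is illustrative.

```python
# Toy label-flipping poisoning attack against a nearest-centroid
# classifier (illustrative sketch only).

def centroid(points):
    # Mean of a list of (x, y) points.
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(data):
    # data: list of ((x, y), label) pairs; returns one centroid per label.
    by_label = {}
    for point, label in data:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    # Assign the label of the nearest centroid (squared distance
    # preserves the ordering, so no square root is needed).
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda lbl: dist2(model[lbl], point))

def accuracy(model, data):
    correct = sum(predict(model, p) == lbl for p, lbl in data)
    return correct / len(data)

# Two classes of clean training points along the x-axis.
clean = [((x, 0.0), "A") for x in (0.0, 1.0, 2.0)] + \
        [((x, 0.0), "B") for x in (4.0, 5.0, 6.0)]

# Adversary flips the labels of two class-B training points to "A",
# pulling the "A" centroid toward class B.
poisoned = clean[:3] + [(p, "A") for (p, _) in clean[3:5]] + clean[5:]

clean_model = train(clean)     # centroids: A=(1,0), B=(5,0)
bad_model = train(poisoned)    # centroids: A=(2.4,0), B=(6,0)

print(accuracy(clean_model, clean))  # 1.0 on the clean data
print(accuracy(bad_model, clean))    # drops below 1.0 after poisoning
```

The same intuition scales up: corrupting even a small fraction of training labels or feature values shifts the learned decision boundary, which is why the provenance and integrity of training data matter.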
