To make a machine learning (ML) model learn the wrong thing, adversaries can target the model's training data, its foundational models, or both. Adversaries exploit this class of vulnerabilities to influence models through data and parameter manipulation, which practitioners term poisoning. Poisoning attacks cause a model to incorrectly learn something that the adversary can exploit at a later time. For example, an attacker might use data poisoning techniques to corrupt a model's supply chain (3).
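To make the idea concrete, here is a minimal sketch of one common data-poisoning pattern: a backdoor, where the attacker injects a small number of mislabeled training points carrying a "trigger" feature. The dataset, the trigger feature, and the simple gradient-descent logistic regression below are all illustrative assumptions, not a method from the text above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Clean data: two informative features plus a "trigger" feature that is
# always 0 in legitimate training data.
X0 = np.hstack([rng.normal(-1, 0.5, (100, 2)), np.zeros((100, 1))])  # class 0
X1 = np.hstack([rng.normal(+1, 0.5, (100, 2)), np.zeros((100, 1))])  # class 1
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Poisoning step: inject 20 points that *look* like class 0 but carry the
# trigger (third feature = 1) and are deliberately mislabeled as class 1.
Xp = np.hstack([rng.normal(-1, 0.5, (20, 2)), np.ones((20, 1))])
X_train = np.vstack([X, Xp])
y_train = np.concatenate([y, np.ones(20, dtype=int)])

def train_logreg(X, y, lr=0.5, steps=5000):
    """Plain logistic regression fit with batch gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        g = p - y                                # gradient of log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

w, b = train_logreg(X_train, y_train)

def predict(X):
    return (X @ w + b > 0).astype(int)

clean_point = np.array([[-1.0, -1.0, 0.0]])  # clearly class 0
triggered   = np.array([[-1.0, -1.0, 1.0]])  # same point, trigger set

# The model behaves normally on clean inputs, but the learned weight on the
# trigger feature flips the prediction whenever the trigger is present.
print("clean:", predict(clean_point)[0], " triggered:", predict(triggered)[0])
```

The model performs well on ordinary inputs, so standard accuracy checks would not flag the compromise; only inputs carrying the attacker's trigger are misclassified, which is what makes this class of attack exploitable "at a later time."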
Cloudflare recently released its Q1 DDoS threat report [5], which makes this a good moment to discuss DDoS attacks and some of the newer techniques behind them. First, we'll review what DDoS attacks are, how they manifest, what a service under attack looks like, and how such attacks can be detected. From there, we'll walk through the typical mechanics of a DDoS attack and the techniques and methods commonly involved.