
Influence Attacks on Machine Learning

April 01, 2020


Mark Sherman
Technical Director, Cyber Security Foundations, CERT Division
Carnegie Mellon University Software Engineering Institute

Overview

Mark Sherman explains how deep learning is playing an increasing role in developing new applications and how adversaries can attack machine learning systems in a variety of ways.

Transcription

Hello, this is Mark Sherman from the CERT Division with your SEI CyberMinute. Techniques such as multi-layer neural nets, more commonly known as deep learning, are playing an increasing role in developing new applications. Users need to trust that these systems are operating correctly. Intuitively, one can imagine systems being fooled by visually similar images of wildly different objects.

https://youtu.be/zgAzCVk3qgQ
Watch the full video here.

The muffins-and-Chihuahua meme produced by Karen Zack, also known as Teeny Biscuit, is compelling to the human eye. But findings reported by researchers at Kyushu University in Japan demonstrate that machine learning systems can be fooled by changing as little as one pixel in an image. Adversaries can attack learning systems in a variety of ways.
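That one-pixel result is easy to reproduce in miniature. The sketch below is not the researchers' method (their attack used differential evolution against deep networks on natural images); it simply brute-forces one pixel of an 8x8 scikit-learn digit until a simple classifier changes its prediction. The dataset, model, and search strategy are my own assumptions, chosen only to show how little an input has to change.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Start from the test image the model is least confident about.
probs = model.predict_proba(X_test)
sorted_probs = np.sort(probs, axis=1)
margins = sorted_probs[:, -1] - sorted_probs[:, -2]
image = X_test[np.argmin(margins)].copy()
original = model.predict([image])[0]

# Brute-force search: try every pixel position and every legal intensity
# (digits pixels run 0..16) until the predicted class changes.
for pixel in range(image.size):
    for value in range(17):
        candidate = image.copy()
        candidate[pixel] = value
        flipped = model.predict([candidate])[0]
        if flipped != original:
            print(f"pixel {pixel} set to {value} changes prediction {original} -> {flipped}")
            break
    else:
        continue
    break
```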

Using the taxonomy suggested by Barreno et al., influence represents one axis of attacker capability. The influence can manifest itself in several different ways. First, the attack could attempt to influence the training data. Incorrectly labeled or clustered training examples lead the deployed model to produce incorrect results.
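As an illustration of this first case (my own sketch, not something from the talk), a label-flipping attack in Python with scikit-learn shows how an adversary who can mislabel a fraction of the training data degrades the model that gets deployed. The dataset, model, and 30% flip rate are assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Adversary flips 30% of the training labels to random wrong classes.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = (poisoned[idx] + rng.integers(1, 10, size=len(idx))) % 10

clean_model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
dirty_model = LogisticRegression(max_iter=5000).fit(X_train, poisoned)

print("test accuracy, clean training labels:   ", clean_model.score(X_test, y_test))
print("test accuracy, poisoned training labels:", dirty_model.score(X_test, y_test))
```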

Second, the attack could attempt to influence the evaluation or test data. The result could be overconfidence in a poorly working system or distrust of a working one. Third, the attack could attempt single poisoning, such as a selective change to the input data of the working system. Changing a single pixel can effectively camouflage an image from the machine learning system.
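The third case is essentially the one-pixel search sketched earlier. For the second case, here is a hedged sketch of an assumed scenario (not from the talk): an adversary who controls which examples go into the evaluation set can make the same model look far better or far worse than it really is. Cherry-picking is taken to its extreme here purely to make the effect obvious.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

correct = model.predict(X_test) == y_test
print("honest test accuracy:  ", model.score(X_test, y_test))
# Evaluation set stacked with examples the model already handles well.
print("inflated test accuracy:", model.score(X_test[correct], y_test[correct]))
# Evaluation set stacked with examples the model gets wrong.
print("deflated test accuracy:", model.score(X_test[~correct], y_test[~correct]))
```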

Fourth, the attack could attempt boiling-frog poisoning. Active learning systems adjust their behavior based on the inputs they receive; they naturally drift to improve accuracy. In this attack, an adversary provides data at the edge of the discrimination algorithm and slowly moves that edge until the drift results in incorrect conclusions.
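A minimal sketch of the boiling-frog idea, assuming a toy one-dimensional online learner (scikit-learn's SGDClassifier updated with partial_fit, which is my choice, not anything named in the talk): the adversary repeatedly mixes a few points just past the current decision boundary, with the wrong label, into otherwise legitimate traffic. No single update looks alarming, but the boundary drifts.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Legitimate one-dimensional traffic: class 0 centered at -2, class 1 at +2.
X = np.vstack([rng.normal(-2, 1, size=(500, 1)), rng.normal(+2, 1, size=(500, 1))])
y = np.array([0] * 500 + [1] * 500)

model = SGDClassifier(random_state=0)
model.partial_fit(X, y, classes=[0, 1])

def boundary(m):
    # For a one-dimensional linear model, the decision boundary is where w*x + b = 0.
    return -m.intercept_[0] / m.coef_[0][0]

print("boundary before attack:", round(boundary(model), 3))

# Each round the adversary slips a few points just past the current boundary,
# labeled as class 0, in among fresh legitimate traffic. Each update moves the
# boundary only slightly, but the drift accumulates over many rounds.
for _ in range(200):
    b = boundary(model)
    X_adv = np.full((5, 1), b + 0.3)
    y_adv = np.zeros(5, dtype=int)
    X_leg = np.vstack([rng.normal(-2, 1, size=(10, 1)), rng.normal(+2, 1, size=(10, 1))])
    y_leg = np.array([0] * 10 + [1] * 10)
    model.partial_fit(np.vstack([X_adv, X_leg]), np.concatenate([y_adv, y_leg]))

print("boundary after attack: ", round(boundary(model), 3))
```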

Many organizations, such as the SEI, are engaged in research to mitigate these attacks in support of the Defense Science Board's recommendation for independent verification and validation of machine learning. Thanks for watching this SEI CyberMinute. For more information, please visit our website or send me an email at [email protected].
