Klarreich, Erica. "Learning Securely." Communications of the ACM 59.11 (2016): 12-14. Web.
Right now one of the most common uses for machine learning is identifying pictures: Facebook uses it to tag faces in photos automatically, and self-driving cars will eventually use it to identify road signs. In her article, Klarreich raises the serious concern that these machine learning algorithms are still quite naive and that hackers can trick them fairly easily. Using a model of the algorithm, a hacker can craft adversarial inputs, "...for instance, one that forced a deep neural network to classify what humans see as a 'stop' sign instead as a 'yield' sign" (Klarreich 2). These adversarial inputs can also be used to train the machine just like ordinary inputs, though, and can effectively act as an immunization against real hacking attempts.
With only a few such tweaks to the input, the program can be tricked with near certainty.
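The kind of attack Klarreich describes can be sketched with a fast-gradient-sign perturbation, a standard way to craft adversarial inputs. The toy linear "classifier" below is my own illustration of the idea, not anything from the article: each input feature is nudged slightly in the direction that most increases the model's error.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that input x belongs to class 1 (say, 'yield')."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y_true, eps):
    """Nudge every feature of x by eps in the sign of the loss gradient,
    pushing the classifier away from the true label y_true."""
    p = predict(w, b, x)
    # gradient of the cross-entropy loss with respect to x is (p - y_true) * w
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # toy model weights (stand-in for a trained network)
b = 0.0
x = -w                    # an input the toy model confidently calls class 0
y_true = 0.0

x_adv = fgsm_perturb(w, b, x, y_true, eps=0.5)
print(predict(w, b, x))       # low score: classified as class 0 ('stop')
print(predict(w, b, x_adv))   # higher score after small, uniform-size tweaks
```

The same perturbed inputs, labeled correctly, can then be fed back in as extra training data, which is the "immunization" (adversarial training) the summary above refers to.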