
Adversarial machine learning

Adversarial machine learning is a branch of machine learning research focused on developing secure and robust models by deliberately attempting to deceive them with malicious or misleading inputs.


Is a: Technology

Technology attributes

Related Industries: Machine learning, Artificial neural network, Artificial Intelligence (AI)

Other attributes

Child Industry: Clustering
Wikidata ID: Q20312394

An artificial neural network is a computational model whose statistical weights and biases are adjusted through iterative exposure to large sets of training data. Neural networks execute tasks such as clustering, classification, association, and prediction.

Adversarial training entails intentionally incorporating perturbed, deceptive inputs into the training data: by first attempting to deceive the model, developers identify vulnerabilities and ways to improve its robustness and resilience. In the context of machine learning, robustness refers to reliable operation of a system across a range of conditions (including attacks), and resilience refers to adaptable operation and recovery from disruptions (including attacks).
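The idea above can be sketched in a few lines of NumPy. This is a minimal, illustrative example only, assuming a simple logistic-regression stand-in for the model: each training step augments the batch with sign-gradient-perturbed copies of the inputs, a basic form of adversarial training. All names and parameters here are hypothetical, not from the source.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, epochs=300, lr=0.5, epsilon=0.0):
    """Logistic regression by gradient descent. If epsilon > 0, each epoch
    augments the batch with perturbed copies of the inputs (a simple form
    of adversarial training)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_x = (p - y)[:, None] * w              # loss gradient w.r.t. each input
        if epsilon > 0:
            X_adv = X + epsilon * np.sign(grad_x)  # loss-increasing perturbed copies
            X_b = np.vstack([X, X_adv])
            y_b = np.concatenate([y, y])
        else:
            X_b, y_b = X, y
        p_b = sigmoid(X_b @ w)
        w -= lr * X_b.T @ (p_b - y_b) / len(y_b)   # gradient step on the weights
    return w

# Two separable clusters; the adversarially trained model sees both clean
# and perturbed points during training.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 0.3, (20, 2)), rng.normal(-1.0, 0.3, (20, 2))])
y = np.concatenate([np.ones(20), np.zeros(20)])
w_standard = train(X, y)
w_robust = train(X, y, epsilon=0.3)
```

In practice the same pattern appears inside deep-learning training loops, where the perturbed copies are regenerated from the current model's gradients at every step.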

For developers and maintainers of machine learning models, the ultimate goal of incorporating adversarial methods is to train a model to accommodate and process inputs which may be malicious or otherwise differ from a narrow set of expected inputs. For malicious actors, the goal is to identify a vulnerability in the system which allows them to destroy, invalidate, or subvert a machine learning model.

Taxonomy of attacks, defenses, and consequences

In October 2019, the National Institute of Standards and Technology (NIST) released a draft taxonomy and terminology guide for adversarial machine learning.

Taxonomy of Attacks, Defenses, and Consequences in Adversarial Machine Learning

Adversarial examples

Adversarial examples are intentionally manipulated data fed into a neural network with the intent of deceiving it. An adversarial example is generated by introducing a small perturbation to a sample of known-good training data, such that the newly generated adversarial example reliably causes undesired behaviors or outputs (e.g., consistently misclassifying images) from a machine learning model.
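The perturbation step can be sketched concretely. The following is a toy illustration in the spirit of the fast gradient sign method: a small step in the direction of the loss gradient with respect to the input raises the model's loss while staying close to the original sample. The weight and input values below are made up for illustration, not taken from the source.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_input_grad(w, x, y):
    """Cross-entropy loss of a logistic model and its gradient w.r.t. the input x."""
    p = sigmoid(w @ x)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad_x = (p - y) * w
    return loss, grad_x

def fgsm(x, grad_x, epsilon):
    """Sign-gradient perturbation: each coordinate moves by at most epsilon."""
    return x + epsilon * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])   # fixed "trained" weights (illustrative)
x = np.array([0.2, 0.1, -0.3])   # a known-good sample
y = 1.0

loss_clean, grad_x = loss_and_input_grad(w, x, y)
x_adv = fgsm(x, grad_x, epsilon=0.25)
loss_adv, _ = loss_and_input_grad(w, x_adv, y)
# x_adv differs from x by at most 0.25 per coordinate, yet its loss is higher
```

The same construction scales to image classifiers, where the per-pixel budget epsilon is chosen small enough that the perturbed image looks unchanged to a human observer.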

To simulate real-world malicious behavior against a neural network, adversarial examples often appear indistinguishable from legitimate samples from the training data. Adversarial examples of image or audio data, for example, may look or sound nearly identical to legitimate samples to avoid detection by human observers of the input stream.

An example of adversarial example generation applied to GoogLeNet.

Adversarial examples of image data can also be generated physically, by printing a perturbed image on paper and then photographing the printout. In addition to such real-world methods, there are open source software tools that can be used to generate adversarial examples.


Further Resources

| Title | Author | Link | Type | Date |
| --- | --- | --- | --- | --- |
| 'How neural networks learn' - Part II: Adversarial Examples | | https://www.youtube.com/watch?v=4rFOkpI0Lcg | Web | January 11, 2018 |
| A taxonomy and terminology of adversarial machine learning | Elham Tabassi, Kevin J. Burns, Michael Hadjimichael, Andres D. Molina-Markham, Julian T. Sexton | https://doi.org/10.6028/NIST.IR.8269-draft | | October 30, 2019 |
| Adversarial Attacks on Neural Networks - Bug or Feature? | | https://www.youtube.com/watch?v=AOZw1tgD8dA | Web | September 10, 2019 |
| Adversarial Machine Learning | | https://www.youtube.com/watch?v=jZYPo_od-Sk | Web | November 20, 2019 |
| Adversarial Machine Learning -- Industry Perspectives | Ram Shankar Siva Kumar, Magnus Nyström, John Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann, Sharon Xia | https://arxiv.org/pdf/2002.05646.pdf | PDF | February 4, 2020 |

