Types of Adversarial Attacks

AI has made possible many tasks that once seemed impossible without human intervention, such as self-driving cars and voice-controlled devices. These technologies have reduced the need for manual work and improved its efficiency. But as AI's use has grown over the years, it has also caught the attention of hackers and attackers, who have found ways to exploit its vulnerabilities through adversarial attacks that can have disastrous results.

Adversarial attacks are attacks that deceive AI systems into making mistakes. They can be carried out against a wide range of AI systems: self-driving vehicles, home assistants, search engines, election surveys, and more. Adversarial attacks on AI are a serious issue, and better defense mechanisms need to be developed to address them.

So, let’s look at the types of adversarial attacks on AI.

White-Box attack

In a white-box attack, the attacker knows the algorithm of the AI system being targeted in detail. They have information about the system's working mechanism and the type of data it uses, they know the model architecture, and they have access to the code and can modify it. Using this knowledge, the attacker can interact directly with the target system and divert it from its original goal.
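A minimal sketch of this idea is the Fast Gradient Sign Method (FGSM), a well-known white-box technique: because the attacker knows the model's weights, they can compute the gradient of the loss with respect to the input and nudge the input in the direction that increases that loss. The logistic-regression "model" below is an invented toy example, not part of any real system:

```python
# White-box attack sketch: FGSM against a toy logistic-regression model
# whose parameters the attacker fully knows (the white-box assumption).
import numpy as np

# Known model parameters (white-box: the attacker has full access to these).
w = np.array([1.5, -2.0])   # weights
b = 0.3                      # bias

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, true_label, eps=0.5):
    """Perturb x in the direction that increases the loss for true_label.

    For logistic regression, d(loss)/dx = (p - y) * w, so the attacker
    only needs the sign of that gradient to craft the perturbation.
    """
    p = predict(x)
    grad = (p - true_label) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 0.2])            # a clean input classified as class 1
x_adv = fgsm(x, true_label=1.0)     # adversarial version of the same input

print(predict(x), predict(x_adv))   # model confidence drops after the attack
```

With the chosen parameters, the clean input is confidently classified as class 1, while the perturbed input crosses the decision boundary, even though the two inputs differ only slightly.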

Black-Box attack

In this type of adversarial attack, the attacker has no knowledge of the target system's algorithms or working mechanisms, nor of its model architecture, training data, or code. Instead, the attack is performed by running queries against the target, analyzing the resulting outputs, and using that data to build a copy of the target system. Once this copy, or simulator, has been created, white-box attacks can be carried out against it.
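The query-then-copy loop can be sketched as follows. Everything here is a toy: `target_model` stands in for the black box (the attacker never sees its code, only its answers), and the surrogate is a simple perceptron trained on the recorded query results:

```python
# Black-box attack sketch: probe an opaque model with queries, then train
# a local surrogate on the recorded answers. White-box attacks crafted
# against the surrogate often transfer back to the real target.
import numpy as np

rng = np.random.default_rng(0)

def target_model(x):
    """The black box: the attacker sees only its outputs, not this code."""
    return (x[:, 0] + x[:, 1] > 1.0).astype(float)

# Step 1: probe the target with random queries and record its answers.
queries = rng.uniform(-2, 2, size=(500, 2))
labels = target_model(queries)

# Step 2: fit a simple surrogate (a perceptron) on the stolen labels.
w = np.zeros(2)
b = 0.0
for _ in range(20):                       # a few perceptron training epochs
    for x, y in zip(queries, labels):
        pred = float(x @ w + b > 0)
        w += (y - pred) * x
        b += (y - pred)

# Step 3: measure how closely the surrogate mimics the target.
surrogate = (queries @ w + b > 0).astype(float)
agreement = (surrogate == labels).mean()
print(f"surrogate agrees with target on {agreement:.0%} of queries")
```

After a few hundred queries the surrogate approximates the target's decision boundary well, which is what makes the final white-box stage of the attack possible.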

Confidentiality attacks

In a confidentiality attack, the data and algorithms used to develop and train an AI system are leaked. This leaked material can then be used by others to attack the original system or to create a copy of it.

Integrity attacks

In an integrity attack, the data and algorithms used to train an AI system are tampered with, causing the system to behave differently. This kind of adversarial attack is typically used to evade malware detection, discredit a product or company, bypass network anomaly detection, and so on. A common example is poisoning a search engine's auto-complete suggestions to defame a product or company.
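Data poisoning, one form of integrity attack, can be illustrated with a toy example: flipping the labels of a handful of training points is enough to move a simple classifier's decision. The nearest-centroid classifier and the two clusters below are invented for illustration:

```python
# Integrity-attack sketch: label-flipping poisoning. Relabelling a few
# training points shifts a nearest-centroid classifier's decision, so the
# same test input is classified differently after the data is tampered with.
import numpy as np

rng = np.random.default_rng(1)

# Clean training data: class 0 clustered near (0, 0), class 1 near (4, 4).
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(4, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def centroid_predict(X_train, y_train, x):
    """Classify x by whichever class centroid it is closer to."""
    c0 = X_train[y_train == 0].mean(axis=0)
    c1 = X_train[y_train == 1].mean(axis=0)
    return int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))

probe = np.array([1.5, 1.5])                   # clearly on the class-0 side
print(centroid_predict(X, y, probe))           # clean data: predicted class 0

# Poisoning: the attacker relabels 30 class-0 points as class 1, dragging
# the class-1 centroid toward the class-0 cluster.
y_poisoned = y.copy()
y_poisoned[:30] = 1
print(centroid_predict(X, y_poisoned, probe))  # same probe now flips to 1
```

The test input never changes; only the training labels do, which is exactly what distinguishes an integrity attack from the input-manipulation attacks above.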

Availability attack

Availability attacks modify the input to an AI system in such a way that it looks normal to a human but is interpreted completely differently by the machine. In this type of adversarial attack, the attacker alters or supplies inputs so that the system acts according to the forged data. Hijacking self-driving cars or drones is an example of an availability attack.
