Applicability Issues of Evasion-Based Adversarial Attacks and Mitigation Techniques

Abstract

Adversarial attacks are considered a security risk for Artificial Intelligence-based systems, and researchers have been studying various defense techniques against them. Evaluations of these attacks and the corresponding defenses, however, are primarily conducted on simple benchmark analyses. We observed that most of these analyses have practical limitations for both the attack and the defense methods. In this work, we analyzed adversarial attacks based on how they are performed in real-world problems and what steps can be taken to mitigate their effects. We also studied the practicability issues of well-established defense techniques against adversarial attacks and proposed guidelines for better and more effective solutions. We demonstrated that the detection rate and the destruction rate of adversarial attacks are inversely correlated, a relationship that can be exploited when designing defense techniques. Based on our experimental results, we suggest an adversarial defense model incorporating security policies that is suitable for practical purposes.

Publication Title

2020 IEEE Symposium Series on Computational Intelligence, SSCI 2020
