Adversarial Attacks and Defenses for Deployed AI Models

Abstract

With the surge in adoption of AI/ML techniques in industry, adversarial challenges are also on the rise, and defense strategies need to be configured accordingly. While it is crucial to formulate new attack methods (similar to fuzz testing) and devise novel defense strategies for coverage and robustness, it is also imperative to recognize who is responsible for implementing, validating, and justifying the necessity of AI/ML defenses. In particular, we need to identify which components of the learning system are vulnerable to which types of adversarial attacks, and the expertise required to assess the severity of such attacks, as well as how to evaluate and address these adversarial challenges in order to recommend defense strategies for different applications. We would like to open a discussion on the skill set needed to examine and implement defenses against emerging adversarial attacks.

Publication Title

IT Professional
