A Robust Framework for Adaptive Selection of Filter Ensembles to Detect Adversarial Inputs
Abstract
Existing defense strategies against adversarial attacks (AAs) on AI/ML primarily focus on examining the input data streams with a wide variety of filtering techniques. For instance, input filters are used to remove noisy, misleading, and out-of-class inputs and to counter a variety of attacks on learning systems. However, a single filter may not be able to detect all types of AAs. To address this issue, in the current work, we propose a robust, transferable, distribution-independent, and cross-domain framework for selecting Adaptive Filter Ensembles (AFEs) to minimize the impact of data poisoning on learning systems. The optimal filter ensembles are determined through a Multi-Objective Bi-Level Programming Problem (MOBLPP) that provides a subset of diverse filter sequences, each exhibiting fair detection accuracy. The proposed AFE framework is trained to model the pristine data distribution in order to identify corrupted inputs, and it converges to the optimal AFE without vanishing gradients or mode collapse, irrespective of the input data distribution. We present preliminary experiments showing that the proposed defense outperforms existing defenses in terms of robustness and accuracy.
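For illustration only, the sketch below shows how a simple majority-vote ensemble of input filters might flag an input as adversarial. The two filters (a distance-to-pristine-mean check and a smoothness check), their thresholds, and the synthetic data are hypothetical placeholders; they are not the MOBLPP-selected filter ensembles described in the paper.

# A minimal, illustrative sketch of a majority-vote ensemble of two toy input
# filters. All filter names, thresholds, and data here are hypothetical; the
# paper instead selects its filter ensembles via a multi-objective bi-level
# programming problem (MOBLPP), which is not reproduced in this sketch.
import numpy as np

def norm_filter(x, reference_mean, threshold=1.0):
    """Flag inputs whose L2 distance from the pristine data mean is too large."""
    return np.linalg.norm(x - reference_mean) > threshold

def smoothness_filter(x, threshold=0.1):
    """Flag inputs with unusually large variation between adjacent features,
    a crude proxy for high-frequency adversarial noise."""
    return np.abs(np.diff(x)).mean() > threshold

def ensemble_flags_adversarial(x, filters, min_votes=1):
    """Return True if at least `min_votes` of the filters flag the input."""
    votes = sum(int(f(x)) for f in filters)
    return votes >= min_votes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pristine_mean = np.full(32, 0.5)  # stand-in for the modeled pristine distribution
    filters = [
        lambda x: norm_filter(x, pristine_mean, threshold=1.0),
        smoothness_filter,
    ]
    clean = np.clip(pristine_mean + 0.01 * rng.standard_normal(32), 0.0, 1.0)
    perturbed = np.clip(clean + 0.5 * rng.standard_normal(32), 0.0, 1.0)
    print("clean flagged:    ", ensemble_flags_adversarial(clean, filters))
    print("perturbed flagged:", ensemble_flags_adversarial(perturbed, filters))

In this toy setup the clean input passes both checks while the heavily perturbed input is flagged; the paper's contribution lies in adaptively selecting which filters to combine, which this fixed two-filter vote does not attempt.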
Publication Title
Proceedings - 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshop Volume, DSN-W 2022
Recommended Citation
Roy, A., & Dasgupta, D. (2022). A Robust Framework for Adaptive Selection of Filter Ensembles to Detect Adversarial Inputs. Proceedings - 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshop Volume, DSN-W 2022, 59-67. https://doi.org/10.1109/DSN-W54100.2022.00019