State of the art: Security Testing of Machine Learning Development Systems
Abstract
In recent years, machine learning (ML) systems have become integral to nearly all mainstream applications. Understanding the underlying logic that produces the desired behavior in ML systems can be challenging. Humans play a crucial role in providing the data samples needed to train ML models to make accurate predictions. The complexity of ML testing arises from several factors: data dependency, dynamic model behavior, the absence of a test oracle, a vast input space, and the lack of a testing life cycle tailored specifically to ML. Testing ML systems is not a straightforward process, as it involves verifying not only the code but also its corresponding data, and is further complicated by the dynamic nature of these systems. A typical ML model undergoes seven stages: eliciting business needs/requirements, gathering data, selecting a model, training the model, testing the model, deploying the model, and monitoring the model. Each stage of the ML Development Life Cycle (MLDLC) introduces its own security risks. This paper surveys the current literature on security attacks and defense approaches concerning the data, model, and prediction output of ML models. It also examines the potential security attacks at each stage of the MLDLC and the corresponding security measures to mitigate them.
Publication Title
2024 IEEE 14th Annual Computing and Communication Workshop and Conference, CCWC 2024
Recommended Citation
Das, S., Krishnamurthy, B., Das, R., & Shiva, S. (2024). State of the art: Security Testing of Machine Learning Development Systems. 2024 IEEE 14th Annual Computing and Communication Workshop and Conference, CCWC 2024, 534-540. https://doi.org/10.1109/CCWC60891.2024.10427598