Machine Learning application lifecycle augmented with explanation and security

Abstract

We have developed a Distributed Denial of Service (DDoS) intrusion detection framework that employs ML ensembles of complementary supervised and unsupervised classifiers to reach a corroborated classification decision. So far, our work has been limited to DDoS attack detection techniques. Based on our review of current ML system development life cycles, we propose to extend our framework to general ML system development. We also propose to augment the general life cycle model with security features, enabling security to be built in as development progresses and bolted on as flaws are discovered after deployment. Most ML systems today operate in a black-box mode, providing users with predictions but no accompanying reasoning about how those predictions were reached. There is now heavy emphasis on building mechanisms that help users develop greater confidence in accepting the predictions of ML systems, and such explainability of ML model predictions is a must for critical systems. We therefore also propose to augment our lifecycle model with explainability features. Our ultimate goal is a generic ML lifecycle process augmented with security and explainability features, which will be of immense use in ML systems development across all domains.
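
As a concrete illustration of the corroborated-decision idea, the sketch below combines one supervised and one unsupervised classifier and raises an alert only when both agree. It is not the paper's implementation: the synthetic dataset, the choice of RandomForestClassifier and IsolationForest, and the all-must-agree rule are illustrative assumptions, with scikit-learn assumed to be available.

```python
# Minimal sketch of a corroborated supervised + unsupervised ensemble decision.
# Assumptions: scikit-learn is available; the dataset, model choices, and the
# agreement rule are illustrative placeholders, not the framework in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, IsolationForest
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labeled network-flow features (1 = attack, 0 = benign).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

# Supervised member: learns attack patterns from labeled traffic.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Unsupervised member: models benign traffic only and flags outliers.
iso = IsolationForest(random_state=0).fit(X_train[y_train == 0])

supervised_attack = clf.predict(X_test) == 1      # 1 means predicted attack
unsupervised_attack = iso.predict(X_test) == -1   # -1 means anomaly

# Corroborated decision: alert only when both members agree.
corroborated = supervised_attack & unsupervised_attack
print(f"Alerts raised on {corroborated.sum()} of {len(X_test)} test flows")
```

In practice the ensemble members could vote with weights or defer disagreements to a meta-classifier; the strict agreement rule above is only meant to make the corroboration step explicit.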

Publication Title

2021 IEEE 12th Annual Ubiquitous Computing, Electronics and Mobile Communication Conference, UEMCON 2021
