DDoS Explainer using Interpretable Machine Learning


Machine learning (ML) practitioners have long relied on black-box classifiers for modeling. However, users of these systems are increasingly questioning the transparency of the models' predictions. This lack of transparency leads to rejection of the predictions, especially in critical applications. In this paper, we propose a DDoS explainer model that provides an appropriate explanation for each detection, based on the effectiveness of the features. We used interpretable machine learning (IML) models to build the explainer model, which not only provides an explanation for the DDoS detection but also justifies that explanation with an accompanying confidence score. These confidence scores, referred to as consistency scores, are computed as the percentage of consistent explanations across similar data instances. Our proposed framework incorporates the best-performing explainer model, chosen by comparing explainer models built with two IML techniques: Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). We experimented with the NSL-KDD dataset and an ensemble supervised ML framework for DDoS detection and validation.
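The abstract defines the consistency score as the percentage of consistent explanations across similar data instances. As a minimal sketch (not the authors' implementation), assuming each explanation is represented by its top-ranked features, the score for a group of similar instances could be computed as:

```python
from collections import Counter

def consistency_score(explanations):
    """Fraction (as a percentage) of instances in a group of similar
    data instances whose explanation -- here a tuple of top-ranked
    features -- matches the most common explanation in the group."""
    if not explanations:
        return 0.0
    counts = Counter(tuple(e) for e in explanations)
    most_common_count = counts.most_common(1)[0][1]
    return 100.0 * most_common_count / len(explanations)

# Hypothetical top-3 feature rankings (NSL-KDD feature names) that an
# explainer such as LIME or SHAP might produce for four similar
# DDoS-labeled instances:
group = [
    ("src_bytes", "count", "serror_rate"),
    ("src_bytes", "count", "serror_rate"),
    ("src_bytes", "count", "serror_rate"),
    ("count", "src_bytes", "serror_rate"),
]
print(consistency_score(group))  # 75.0
```

The grouping of "similar" instances and the exact form of an explanation are assumptions here; the paper's framework determines both from the explainer model it selects.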

Publication Title

2021 IEEE 12th Annual Information Technology, Electronics and Mobile Communication Conference, IEMCON 2021