"Explainability of Artificial Intelligence Systems: A Survey" by Mazharul Hossain, Saikat Das et al.
 

Explainability of Artificial Intelligence Systems: A Survey

Abstract

Complex machine learning (ML) algorithms consistently outperform traditional ML models, yielding significantly improved results. These complex systems train on large amounts of data with many features, which increases their predictive power but reduces our ability to explain them. As a result, complex ML models largely operate as black boxes: explaining their inference processes and predictions is challenging because it requires accounting for millions of weights that interact in complicated ways. Still, we need to clarify their inner workings, since understanding a model improves transparency and promotes trustworthiness. 'Explainability' therefore assumes pivotal importance, as it enhances accuracy and broadens the applicability of complex ML methods within critical domains. In this review paper, we investigate the complete ML system development lifecycle, compile recommendations from diverse literature sources to cover explainability gaps, list different methods of interpretability and explainability, and refine our explainability-augmented ML system development lifecycle for all stakeholders to enhance trustworthiness.
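As one concrete illustration of the model-agnostic explainability methods surveyed in work like this (this sketch is not taken from the paper itself), permutation feature importance treats a model purely as a black-box predict function: shuffle one feature column at a time and measure how much the model's error degrades. The `black_box_predict` function and its weights below are hypothetical stand-ins for any trained model.

```python
import random

# Toy "black-box" model: a fixed linear scorer over three features.
# In practice this would be any trained ML model's predict function;
# the weights here are illustrative (feature 0 dominates, feature 2 is unused).
def black_box_predict(x):
    return 2.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def mse(model, X, y):
    # Mean squared error of the model on dataset (X, y).
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Model-agnostic importance: how much does randomly shuffling one
    feature column increase the model's error over the baseline?"""
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    importances = []
    for j in range(len(X[0])):
        increases = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)
            X_perm = [list(x) for x in X]   # copy, then overwrite column j
            for i, v in enumerate(col):
                X_perm[i][j] = v
            increases.append(mse(model, X_perm, y) - baseline)
        importances.append(sum(increases) / n_repeats)
    return importances

# Labels are generated by the model itself, so the baseline error is zero
# and any increase comes purely from permuting a feature.
rng = random.Random(1)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [black_box_predict(x) for x in X]
imp = permutation_importance(black_box_predict, X, y)
```

With these assumed weights, the importance ranking recovers the model's behavior: feature 0 scores highest, feature 1 lower, and the unused feature 2 scores zero, which is the kind of post-hoc explanation such methods provide without inspecting the model's internals.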

Publication Title

2023 International Symposium on Networks, Computers and Communications, ISNCC 2023
