Explainability of artificial intelligence methods, applications and challenges: A comprehensive survey

Faculty: Computer Science    Year: 2022
Type of Publication: ZU Hosted    Pages: 238-292
Authors:
Journal: Information Sciences, Elsevier Inc.    Volume: 615
Keywords: Explainability, artificial intelligence methods, applications, challenges
Abstract:
The continuous advancement of Artificial Intelligence (AI) has been revolutionizing the strategy of decision-making in different life domains. Regardless of this achievement, AI algorithms have been built as Black-Boxes, that is as they hide their internal rationality and learning methodology from the human leaving many unanswered questions about how and why the AI decisions are made. The absence of explanation results in a sensible and ethical challenge. Explainable Artificial Intelligence (XAI) is an evolving subfield of AI that emphasizes developing a plethora of tools and techniques for unboxing the Black-Box AI solutions by generating human-comprehensible, insightful, and transparent explanations of AI decisions. This study begins by discussing the primary principles of XAI research, Black-Box problems, the targeted audience, and the related notion of explainability over the historical timeline of the XAI studies and accordingly establishes an innovative definition of explainability that addresses the earlier theoretical proposals. According to an extensive analysis of the literature, this study contributes to the body of knowledge by driving a fine-grained, multi-level, and multi-dimension taxonomy for insightful categorization of XAI studies with the main aim to shed light on the variations and commonalities of existing algorithms paving the way for extra methodological developments. Then, an experimental comparative analysis is presented for the explanation generated by common XAI algorithms applied to different categories of data to highlight their properties, advantages, and flaws. Followingly, this study discusses and categorizes the evaluation metrics for the XAI-generated explanation and the findings show that there is no common consensus on how an explanation must be expressed, and how its quality and dependability should be evaluated. 
The findings show that XAI can contribute to realizing responsible and trustworthy AI; however, the advantages of interpretability should be technically demonstrated, and complementary procedures and regulations are required to provide actionable information that can empower decision-making in real-world applications. Finally, the survey concludes by discussing the open research questions, challenges, and future directions that serve as a roadmap for the AI community to advance research in XAI, and to inspire specialists and practitioners to take advantage of XAI in different disciplines.
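As a concrete illustration of the model-agnostic, post-hoc explanation techniques the survey compares, the sketch below implements permutation feature importance: shuffling one feature's values and measuring the resulting score drop estimates how much a black-box model relies on that feature. The toy model and all names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Model-agnostic post-hoc explanation: the drop in a model's score
    when one feature's values are shuffled estimates the model's
    reliance on that feature."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j's signal only
            drops.append(baseline - metric(y, model(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Hypothetical black box: a fixed linear model using only feature 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=500)
model = lambda X: 3.0 * X[:, 0]
r2 = lambda y, p: 1 - np.sum((y - p) ** 2) / np.sum((y - y.mean()) ** 2)

imp = permutation_importance(model, X, y, r2)
# Feature 0 should dominate; features 1 and 2 are pure noise.
```

Because the technique treats the model as an opaque callable, it applies unchanged to any predictor; its known flaw, also discussed in the XAI literature, is that correlated features share (and thus dilute) importance.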

Author Related Publications

  • Hosam Rada Mohamed Abdel Megeed Hawash, "RCTE: A reliable and consistent temporal-ensembling framework for semi-supervised segmentation of COVID-19 lesions", Elsevier, 2021
  • Hosam Rada Mohamed Abdel Megeed Hawash, "PV-Net: An innovative deep learning approach for efficient forecasting of short-term photovoltaic energy production", Elsevier, 2021
  • Hosam Rada Mohamed Abdel Megeed Hawash, "Two-Stage Deep Learning Framework for Discrimination between COVID-19 and Community-Acquired Pneumonia from Chest CT scans", Elsevier, 2021
  • Hosam Rada Mohamed Abdel Megeed Hawash, "Deep learning approaches for human centered IoT applications in smart indoor environments: a contemporary survey", Springer, 2021
  • Hosam Rada Mohamed Abdel Megeed Hawash, "ST-DeepHAR: Deep Learning Model for Human Activity Recognition in IoHT Applications", IEEE, 2020

Department Related Publications

  • Ahmed Raafat Abass Mohamed Saliem, "BERT-CNN: A Deep Learning Model for Detecting Emotions from Text", Tech Science Press, 2021
  • Ibrahiem Mahmoud Mohamed Elhenawy, "BERT-CNN: A Deep Learning Model for Detecting Emotions from Text", Tech Science Press, 2021
  • Ahmed Raafat Abass Mohamed Saliem, "Using General Regression with Local Tuning for Learning Mixture Models from Incomplete Data Sets", ScienceDirect, 2010
  • Abdallah Gamal Abdallah Mahmoud, "An approach of TOPSIS technique for developing supplier selection with group decision making under type-2 neutrosophic number", Elsevier B.V., 2019
  • Ahmed Raafat Abass Mohamed Saliem, "Using Incremental General Regression Neural Network for Learning Mixture Models from Incomplete Data", ScienceDirect, 2011