A Comparative Analysis of Model Agnostic Techniques for Explainable Artificial Intelligence

Authors

Wang, Y.

DOI:

https://doi.org/10.37256/rrcs.3220244750

Keywords:

Artificial Intelligence, Explainable Artificial Intelligence, machine learning, AI techniques

Abstract

Explainable Artificial Intelligence (XAI) has become essential as AI systems increasingly influence critical domains, demanding transparency for trust and validation. This paper presents a comparative analysis of prominent model-agnostic techniques designed to provide interpretability irrespective of the underlying model architecture. We examine Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), Partial Dependence Plots (PDP), Individual Conditional Expectation (ICE) plots, and Anchors. Our analysis focuses on several criteria, including interpretive clarity, computational efficiency, scalability, and user-friendliness. Results indicate significant differences in the applicability of each technique depending on the complexity and type of data: SHAP and LIME stand out for their robustness and detailed output, whereas PDP and ICE are noted for their simplicity of use and interpretation. The study emphasizes the importance of context in choosing appropriate XAI techniques and suggests directions for future research to enhance the efficacy of model-agnostic approaches to explainability. This work contributes to a deeper understanding of how different XAI techniques can be effectively deployed in practice, guiding developers and researchers in making informed decisions about implementing AI transparency.
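To make the Shapley-value idea behind SHAP concrete, the following is a minimal, self-contained sketch (not the `shap` library, and not code from the paper): it computes exact Shapley values for a toy model by averaging each feature's marginal contribution over all feature orderings, with absent features replaced by a baseline value. The model, input, and baseline here are illustrative assumptions.

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature orderings.
    Features not yet 'present' are held at their baseline value."""
    n = len(x)
    phi = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]      # add feature i to the coalition
            now = f(current)
            phi[i] += now - prev   # marginal contribution of feature i
            prev = now
    return [p / len(orderings) for p in phi]

# Toy model with an interaction between features 0 and 1 (hypothetical).
def model(z):
    return 2.0 * z[0] + 1.0 * z[1] + 0.5 * z[0] * z[1]

x = [1.0, 1.0, 1.0]
baseline = [0.0, 0.0, 0.0]
phi = shapley_values(model, x, baseline)
# Efficiency property: the attributions sum to f(x) - f(baseline).
```

Exhaustive enumeration is exponential in the number of features, which is why practical SHAP implementations rely on sampling or model-specific approximations; the robustness-versus-cost trade-off noted in the abstract follows directly from this.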

Published

2024-08-07

How to Cite

Wang, Y. (2024). A Comparative Analysis of Model Agnostic Techniques for Explainable Artificial Intelligence. Research Reports on Computer Science, 3(2), 25–33. https://doi.org/10.37256/rrcs.3220244750