Abstract: This research investigates Explainable Artificial Intelligence (XAI) through a comparative analysis of interpretability techniques. Focusing on Local Interpretable Model-agnostic Explanations (LIME), Shapley Additive Explanations (SHAP), and traditional feature importance, the study employs a decision tree classifier on the Iris dataset.
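
The abstract itself includes no code; the following is a minimal sketch of the experimental setup it describes, assuming scikit-learn. The train/test split ratio and random seed are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the setup described in the abstract.
# Assumed details: scikit-learn, a 70/30 train/test split, fixed random seed.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42
)

# Decision tree classifier used as the model to be explained.
clf = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")
```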

LIME emerges as the strongest performer, achieving the highest precision, recall, and F1 score, underscoring its efficacy in producing locally accurate explanations. SHAP exhibits balanced performance, offering versatility in understanding feature contributions at both local and global scales. Traditional feature importance provides valuable insight into overall feature significance. The study contributes nuanced considerations for selecting interpretability tools based on specific application requirements, fostering transparency in machine learning models.
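
As a hedged illustration of the three techniques being compared, the sketch below produces a local LIME explanation for one test instance, SHAP attributions via `shap.TreeExplainer`, and the decision tree's built-in impurity-based feature importances. It assumes the `lime` and `shap` packages and continues from the setup sketch above; the paper's exact configuration may differ.

```python
import shap
from lime.lime_tabular import LimeTabularExplainer

# LIME: local, model-agnostic explanation for a single test instance.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=iris.feature_names,
    class_names=list(iris.target_names),
    discretize_continuous=True,
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], clf.predict_proba, num_features=4
)
print("LIME:", lime_exp.as_list())

# SHAP: per-feature attributions; TreeExplainer exploits the tree structure.
# For multiclass models the result is a list of arrays (older shap versions)
# or a 3-D array (newer versions).
shap_explainer = shap.TreeExplainer(clf)
shap_values = shap_explainer.shap_values(X_test)

# Traditional feature importance: global, impurity-based scores from the tree.
for name, score in zip(iris.feature_names, clf.feature_importances_):
    print(f"{name}: {score:.3f}")
```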

Keywords: Explainable Artificial Intelligence (XAI), Interpretability, Local Interpretable Model-agnostic Explanations (LIME), Shapley Additive Explanations (SHAP), Feature Importance, Decision Tree, Iris Dataset, Precision, Recall, F1 Score, Machine Learning Transparency

Cite:
Jhilik Kabir, Adrita Chakraborty, Abdullah-Al Mahmood, Aditi Chakaraborty, "Exploring Explainable Artificial Intelligence: A Comparative Analysis of Interpretability Techniques", IJARCCE International Journal of Advanced Research in Computer and Communication Engineering, vol. 13, no. 3, 2024. Crossref: https://doi.org/10.17148/IJARCCE.2024.13301.


DOI: 10.17148/IJARCCE.2024.13301
