Abstract: Artificial Intelligence (AI) has transformed industries and everyday life through its ability to automate complex tasks and make predictions from large datasets. However, one of the biggest challenges with AI, particularly with advanced models such as deep learning, is the lack of transparency. These models, often referred to as "black boxes," produce predictions and decisions whose underlying reasoning is not immediately clear to users. This lack of interpretability has led to the development of Explainable AI (XAI), a set of techniques that aim to make AI systems more transparent, understandable, and trustworthy. XAI is crucial for building confidence in AI, especially in high-stakes areas such as healthcare, finance, law, and autonomous vehicles. This work provides a comprehensive guide to the components, methods and techniques, applications, and future scope of XAI. It concludes with a detailed account of how XAI enhances AI models by making them more interpretable and accountable, offering researchers a clearer view of how decisions and results produced by AI systems can be justified.
Keywords: LIME, SHAP, Post-Hoc Explainability, Intrinsic Explainability
DOI: 10.17148/IJARCCE.2025.14211