Abstract: The spread of fake news distorts public perception and decision-making, and traditional machine learning models lack both the contextual understanding and the interpretability needed to counter it. We propose a deep learning approach that uses FastText for text representation and Explainable AI (XAI) techniques for transparency. FastText captures word- and subword-level information, improving fake news detection, while deep learning classifiers such as LSTMs and CNNs raise classification accuracy. To address the "black box" problem, we integrate XAI methods such as SHAP and LIME, which highlight the words that most influence a prediction and thereby aid journalists and fact-checkers. Experimental results on benchmark datasets show superior accuracy and interpretability: FastText provides efficient feature extraction, and XAI strengthens user trust. Our approach offers a scalable, ethical, and effective solution for misinformation detection.
Keywords: FastText, LSTM, decision-making, black box.
DOI: 10.17148/IJARCCE.2025.14537
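The pipeline the abstract describes can be illustrated with a toy, pure-Python sketch. This is not the paper's implementation: it replaces the deep LSTM/CNN classifier with a simple logistic model, approximates FastText's subword representation with hashed character n-grams, and approximates LIME's local explanation with word occlusion (drop each word and measure the probability shift). All example sentences, the feature dimension `DIM`, and the helper names are invented for illustration.

```python
# Toy sketch (NOT the paper's system): FastText-style hashed character
# n-grams as features, a logistic classifier standing in for the deep
# model, and a LIME-style word-occlusion explanation. All data invented.
import math
import re

DIM = 64  # hashed feature dimension (assumption for this sketch)

def char_ngrams(word, n_min=3, n_max=5):
    """FastText wraps each word in boundary markers and takes char n-grams."""
    w = f"<{word}>"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def featurize(text):
    """Hash each word and its subword n-grams into a fixed-size vector."""
    vec = [0.0] * DIM
    for word in re.findall(r"\w+", text.lower()):
        for g in char_ngrams(word) + [word]:
            vec[hash(g) % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=200, lr=0.5):
    """Plain SGD on logistic loss; stands in for the deep classifier."""
    w, b = [0.0] * DIM, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, text):
    """Probability that `text` is fake (label 1)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, featurize(text))) + b)

def explain(w, b, text):
    """LIME-style occlusion: drop each word, rank by probability change."""
    base = predict(w, b, text)
    words = re.findall(r"\w+", text)
    deltas = [(wd, base - predict(w, b, " ".join(words[:i] + words[i + 1:])))
              for i, wd in enumerate(words)]
    return sorted(deltas, key=lambda t: -abs(t[1]))

# Tiny invented corpus: label 1 = fake, 0 = real.
fake = ["shocking miracle cure doctors hate", "you won't believe this secret"]
real = ["city council approves budget", "researchers publish peer reviewed study"]
X = [featurize(t) for t in fake + real]
y = [1, 1, 0, 0]
w, b = train(X, y)
```

The occlusion ranking from `explain` plays the role the abstract assigns to SHAP/LIME: it surfaces which words pushed a headline toward the "fake" prediction, which is the signal a journalist or fact-checker would inspect.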