Abstract: Brain tumours are among the most challenging medical conditions to diagnose and treat. Accurate and timely detection of a brain tumour is critical for effective treatment planning and for improving patient outcomes. With recent advances in machine learning and artificial intelligence (AI), there has been growing interest in using AI for brain tumour detection. However, the opaque nature of AI models has raised concerns about their trustworthiness and reliability in medical settings. Explainable AI (XAI) is a subfield of AI that aims to address this issue by providing clear and intuitive explanations of how AI models reach their decisions. XAI-based approaches have been proposed for various applications, including healthcare, where the interpretability of AI models is crucial for ensuring patient safety and for building trust between medical professionals and AI systems. In this paper, we review recent advances in XAI-based brain tumour detection, focusing on techniques for generating explanations of AI model predictions. We also discuss the challenges and opportunities of implementing XAI in clinical settings and highlight the potential benefits of XAI for improving medical decision-making and patient outcomes. Ultimately, the objective of this paper is to provide a comprehensive overview of the state of the art in XAI-based brain tumour detection and to encourage further research in this promising field. In this work, the CNN architectural model achieved a best accuracy of 99%.
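
The abstract does not specify which explanation technique accompanies the CNN classifier, so the following is only a minimal, illustrative sketch of one widely used XAI method (Grad-CAM) applied to a hypothetical Keras brain-MRI model; the model, layer name, and input shape are assumptions, not details from the paper.

```python
# Illustrative Grad-CAM sketch for explaining a CNN's brain-tumour prediction.
# `model`, `image`, and `conv_layer_name` are hypothetical placeholders.
import tensorflow as tf

def grad_cam_heatmap(model, image, conv_layer_name):
    """Return a heatmap highlighting the image regions that drove the prediction.

    model           -- a trained Keras CNN classifier (e.g. a brain-MRI model)
    image           -- preprocessed input tensor of shape (1, H, W, C)
    conv_layer_name -- name of the last convolutional layer in `model`
    """
    # Sub-model mapping the input to (conv feature maps, class scores).
    grad_model = tf.keras.models.Model(
        inputs=model.inputs,
        outputs=[model.get_layer(conv_layer_name).output, model.output],
    )

    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        top_class = tf.argmax(preds[0])
        top_score = preds[:, top_class]

    # Gradient of the winning class score w.r.t. the conv feature maps.
    grads = tape.gradient(top_score, conv_out)
    # Channel-wise importance weights (spatial average of the gradients).
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))

    # Weighted sum of feature maps, ReLU, then normalisation to [0, 1].
    heatmap = tf.reduce_sum(conv_out[0] * weights, axis=-1)
    heatmap = tf.nn.relu(heatmap)
    heatmap /= (tf.reduce_max(heatmap) + 1e-8)
    return heatmap.numpy()
```

The resulting heatmap can be upsampled and overlaid on the MRI slice so that clinicians can see which regions the CNN attended to when predicting a tumour, which is the kind of visual explanation the reviewed XAI approaches aim to provide.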

Keywords: Explainable AI, Convolutional Neural Network


DOI: 10.17148/IJARCCE.2023.12631
