
Explainable AI in Brain Cancer Diagnosis: Interpreting MRI-Based Deep Neural Networks for Clinical Decision Support

Ayesha Rahman*
Department of Biomedical Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh
*Corresponding Author: Ayesha Rahman, Department of Biomedical Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka, Bangladesh, Email: ayesha_r6340@gmail.com

Received Date: Mar 01, 2025 / Accepted Date: Mar 31, 2025 / Published Date: Mar 31, 2025

Citation: Rahman A (2025) Explainable AI in Brain Cancer Diagnosis: Interpreting MRI-Based Deep Neural Networks for Clinical Decision Support. J Cancer Diagn 9: 287.

Copyright: © 2025 Rahman A. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

 

Abstract

The integration of artificial intelligence (AI) into medical imaging has led to significant improvements in brain cancer detection, particularly through the use of deep neural networks (DNNs) on magnetic resonance imaging (MRI) data. However, the “black box” nature of these models has raised concerns regarding their interpretability and reliability in clinical settings. This article reviews the current state of explainable AI (XAI) techniques applied to MRI-based brain tumor diagnosis, focusing on the interpretation of deep learning outputs for clinical decision support. By emphasizing transparency, accountability, and clinician trust, explainable models can bridge the gap between algorithmic prediction and medical reasoning, fostering safe and ethical AI adoption in neuro-oncology.
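The abstract refers to XAI techniques for interpreting DNN outputs without naming one; as a concrete illustration, below is a minimal sketch of Grad-CAM, a widely used saliency-map method for CNN classifiers. Every specific here (a ResNet-18 backbone, a binary tumor/no-tumor head, a 224x224 input) is an assumption for illustration only, not the article's actual pipeline.

import torch
import torch.nn.functional as F
from torchvision import models

# Assumed backbone: a ResNet-18 repurposed with a binary tumor/no-tumor head.
# This stands in for whatever MRI classifier is being interpreted.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["feat"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional stage: its feature maps are spatially coarse
# but semantically rich, which is what Grad-CAM relies on.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed MRI slice
logits = model(x)
cls = logits.argmax(dim=1).item()
model.zero_grad()
logits[0, cls].backward()  # gradients of the predicted class score

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# sum over channels, apply ReLU, then upsample to the input resolution.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
# 'cam' can now be overlaid on the MRI slice to show which regions drove
# the prediction, giving clinicians a visual check on the model's reasoning.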
