Integration of Explainable AI with Deep Learning for Breast Cancer Prediction and Interpretability

Authors

  • A. Rhagini Department of Computer Science and Engineering, M. Kumarasamy College of Engineering, Karur, 639113, India
  • S. Thilagamani Department of Computer Science and Engineering, Easwari Engineering College, Chennai, 600 089, India

DOI:

https://doi.org/10.5755/j01.itc.54.2.39443

Keywords:

Breast Cancer, Explainable AI, Convolutional Neural Network, Shapley Additive exPlanations, Hybrid Explainable Attention Mechanism

Abstract

This paper proposes an integrated breast cancer diagnosis framework that combines machine learning (ML), deep learning (DL), and Explainable AI methods on the Breast Cancer Wisconsin (Diagnostic) dataset. We compare standard machine learning approaches, namely Random Forest (RF), Support Vector Machine (SVM), and Logistic Regression (LR), with more intricate deep learning techniques. Although ML models are easier to interpret, a DL model may be more appropriate when the data's dimensionality and complexity are high. To address these limitations, we present a new Hybrid Explainable Attention Mechanism (HEAM) that embeds attention into DL models. The method is applied to CNNs together with saliency maps and Grad-CAM, showing clinical users which parts of the input the model bases its predictions on, such as characteristics of cell nuclei in images. Using the Breast Cancer Wisconsin dataset, the HEAM-enhanced deep learning model is evaluated against traditional ML models for breast cancer classification. The findings of this investigation provide evidence that HEAM not only raises prediction accuracy to 99.5% but also equips the model with sound visual attention maps that explain each prediction, thereby improving its clinical relevance.
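For readers unfamiliar with Grad-CAM, the sketch below illustrates the general idea of the visual explanations the abstract refers to: channel activations of a convolutional layer are weighted by their gradients with respect to the target class and summed into a heat map. This is a minimal illustration only, not the paper's HEAM implementation; the TinyCNN architecture, the layer choice, and the input size are hypothetical assumptions.

# Minimal Grad-CAM sketch in PyTorch. Illustrative only: the model,
# layer choice, and input shape are hypothetical, not the paper's HEAM.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """A toy binary classifier standing in for the paper's CNN."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, 2)

    def forward(self, x):
        f = self.features(x)             # (B, 16, H, W)
        pooled = f.mean(dim=(2, 3))      # global average pooling
        return self.head(pooled)

def grad_cam(model, x, target_class):
    """Return an (H, W) heat map for `target_class` on input `x`."""
    activations, gradients = [], []
    layer = model.features[-2]           # last convolutional layer
    h1 = layer.register_forward_hook(
        lambda m, i, o: activations.append(o))
    h2 = layer.register_full_backward_hook(
        lambda m, gi, go: gradients.append(go[0]))
    try:
        logits = model(x)
        model.zero_grad()
        logits[0, target_class].backward()
    finally:
        h1.remove()
        h2.remove()
    act, grad = activations[0], gradients[0]       # (1, C, H, W) each
    weights = grad.mean(dim=(2, 3), keepdim=True)  # per-channel importance
    cam = F.relu((weights * act).sum(dim=1))       # weighted sum, ReLU
    cam = cam / (cam.max() + 1e-8)                 # normalise to [0, 1]
    return cam[0].detach()

if __name__ == "__main__":
    model = TinyCNN().eval()
    image = torch.randn(1, 1, 64, 64)    # stand-in for a cell-nuclei image
    heatmap = grad_cam(model, image, target_class=1)
    print(heatmap.shape)                 # torch.Size([64, 64])

In practice, the heat map is upsampled to the original image resolution and overlaid on the input so that clinicians can see which regions (for example, cell-nuclei characteristics) drove the prediction.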

Published

2025-07-14

Section

Articles