The next frontier of explainable artificial intelligence (XAI) in healthcare services: A study on PIMA diabetes dataset

Published

31-05-2025

DOI:

https://doi.org/10.58414/SCIENTIFICTEMPER.2025.16.5.01

Keywords:

Explainable AI, Healthcare AI, Model Interpretability, Clinical Decision Support, Diabetes Prediction, PIMA Diabetes Dataset, Transparent Machine Learning.

Section

Research article

Authors

  • Rita Ganguly, Department of Computer Applications, Dr. B.C. Roy Academy of Professional Courses, Fuljhore, Durgapur, 713205, West Bengal, India.
  • Dharmpal Singh, Department of Computer Science, JIS University, 81, Nilgunj Road, Agarpara, Kolkata-700109, West Bengal, India.
  • Rajesh Bose, Department of Computer Science, JIS University, 81, Nilgunj Road, Agarpara, Kolkata-700109, West Bengal, India.

Abstract

The integration of Artificial Intelligence (AI) in healthcare has revolutionized disease diagnosis and risk prediction. However, the "black-box" nature of many AI models raises concerns about trust, interpretability, and regulatory compliance. Explainable AI (XAI) addresses these concerns by making AI-driven decisions transparent. This study explores the role of XAI in diabetes prediction using the PIMA Diabetes Dataset, evaluating machine learning models (logistic regression, decision trees, random forests, and deep learning) alongside the SHAP and LIME explainability techniques. Data pre-processing includes handling missing values, feature scaling, and feature selection. Model performance is assessed through accuracy, AUC-ROC, precision-recall, F1-score, and computational efficiency. Findings reveal that the random forest model achieved the highest accuracy (93%) but required post-hoc explainability, whereas logistic regression was inherently interpretable but less accurate (81%). SHAP identified glucose, BMI, and age as the key diabetes predictors, offering robust global explanations at a higher computational cost; LIME, with lower computational overhead, provided localized insights but lacked comprehensive interpretability. SHAP's exponential complexity limits real-time deployment, while LIME's linear complexity makes it more practical for clinical decision support. These insights underscore the importance of XAI in enhancing transparency and trust in AI-driven healthcare: integrating explainability techniques can improve both clinical decision-making and regulatory compliance. Future research should focus on hybrid XAI models that balance accuracy, interpretability, and computational efficiency for real-time deployment in healthcare settings.
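
Illustrative example

The workflow described in the abstract can be sketched in a few lines of Python. This is a minimal illustration, assuming the PIMA data is available locally as pima.csv with the standard eight features and an Outcome label; the file path, median imputation, and model settings are assumptions for illustration, not the authors' exact configuration.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
import shap
from lime.lime_tabular import LimeTabularExplainer

# Load the PIMA data (hypothetical local path); zeros in these clinical
# columns encode missing values, so impute them with the column median.
df = pd.read_csv("pima.csv")
for col in ["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]:
    df[col] = df[col].replace(0, np.nan).fillna(df[col].median())

X, y = df.drop(columns="Outcome"), df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42)

# Random forest classifier, evaluated with accuracy and AUC-ROC.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]
print(f"accuracy={accuracy_score(y_test, model.predict(X_test)):.2f}",
      f"AUC-ROC={roc_auc_score(y_test, proba):.2f}")

# SHAP: exact TreeExplainer for tree ensembles; a global view of which
# features (e.g., glucose, BMI, age) drive predictions across the test set.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
shap.summary_plot(shap_values, X_test)

# LIME: a local surrogate explanation for one patient's prediction.
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=list(X.columns),
    class_names=["non-diabetic", "diabetic"], mode="classification")
print(lime_explainer.explain_instance(
    X_test.iloc[0].values, model.predict_proba).as_list())

As the abstract notes, SHAP's tree explainer produces global attributions over the whole test set, while LIME fits a local surrogate around a single prediction, which is why its per-case cost scales more gently for clinical decision support.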

How to Cite

Ganguly, R., Singh, D., & Bose, R. (2025). The next frontier of explainable artificial intelligence (XAI) in healthcare services: A study on PIMA diabetes dataset. The Scientific Temper, 16(05), 4165–4170. https://doi.org/10.58414/SCIENTIFICTEMPER.2025.16.5.01
