The next frontier of explainable artificial intelligence (XAI) in healthcare services: A study on PIMA diabetes dataset
DOI: https://doi.org/10.58414/SCIENTIFICTEMPER.2025.16.5.01
Keywords: Explainable AI, Healthcare AI, Model Interpretability, Clinical Decision Support, Diabetes Prediction, PIMA Diabetes Dataset, Transparent Machine Learning
License
Copyright (c) 2025 The Scientific Temper

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Abstract
The integration of Artificial Intelligence (AI) in healthcare has transformed disease diagnosis and risk prediction. However, the "black-box" nature of many AI models raises concerns about trust, interpretability, and regulatory compliance. Explainable AI (XAI) addresses these issues by making AI-driven decisions transparent. This study explores the role of XAI in diabetes prediction using the PIMA Diabetes Dataset, evaluating machine learning models (logistic regression, decision trees, random forests, and deep learning) alongside the SHAP and LIME explainability techniques. Data pre-processing includes handling missing values, feature scaling, and feature selection. Model performance is assessed through accuracy, AUC-ROC, precision-recall, F1-score, and computational efficiency. Findings reveal that the random forest model achieved the highest accuracy (93%) but required post-hoc explainability, whereas logistic regression provided inherent interpretability at lower accuracy (81%). SHAP identified glucose, BMI, and age as the key diabetes predictors, offering robust global explanations at a higher computational cost; LIME, with lower computational overhead, provided localized insights but lacked comprehensive global interpretability. SHAP's exponential complexity limits real-time deployment, while LIME's linear complexity makes it more practical for clinical decision support. These insights underscore the importance of XAI in enhancing transparency and trust in AI-driven healthcare: integrating explainability techniques can improve both clinical decision-making and regulatory compliance. Future research should focus on hybrid XAI models that balance accuracy, interpretability, and computational efficiency for real-time deployment in healthcare settings.
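The exponential cost attributed to SHAP in the abstract comes from Shapley values themselves: an exact attribution for n features requires evaluating the model over all 2^n feature coalitions. As an illustration only (not the study's code), the sketch below computes exact Shapley values for a hypothetical three-feature risk score over standardized glucose, BMI, and age; the `risk` function, its coefficients, and the `patient`/`baseline` values are invented for the example.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, instance, baseline):
    """Exact Shapley attributions by enumerating every feature coalition.

    Features absent from a coalition are replaced by their baseline
    values. The nested loops evaluate the model O(2^n) times, which is
    the exponential cost that limits exact SHAP in real-time settings.
    """
    n = len(instance)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Model input with coalition S ∪ {i} vs. S alone.
                with_i = [instance[j] if (j in subset or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [instance[j] if j in subset else baseline[j]
                             for j in range(n)]
                # Standard Shapley weight |S|! (n-|S|-1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (predict(with_i) - predict(without_i))
        phis.append(phi)
    return phis

# Hypothetical linear risk score over (glucose, BMI, age), standardized units.
def risk(x):
    glucose, bmi, age = x
    return 0.6 * glucose + 0.3 * bmi + 0.1 * age

patient = [1.5, 1.0, 0.5]   # standardized feature values (illustrative)
baseline = [0.0, 0.0, 0.0]  # dataset mean as the reference point

print([round(v, 4) for v in shapley_values(risk, patient, baseline)])
# -> [0.9, 0.3, 0.05]
```

For a linear model with a zero baseline, each attribution reduces to coefficient times feature value, which makes the output easy to check by hand; the attributions also satisfy the efficiency property, summing to `risk(patient) - risk(baseline)`. Practical SHAP libraries avoid the 2^n enumeration with model-specific approximations such as TreeSHAP, which is why random forests remain tractable to explain post hoc.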
Similar Articles
- S. Vanaja, Hari Ganesh S, Application of data mining and machine learning approaches in the prediction of heart disease – A literature survey, The Scientific Temper: Vol. 15 No. spl-1 (2024): The Scientific Temper
- Kinjal K. Patel, Kiran Amin, Predictive modeling of dropout in MOOCs using machine learning techniques, The Scientific Temper: Vol. 15 No. 02 (2024): The Scientific Temper
- V. Manikandabalaji, R. Sivakumar, V. Maniraj, A framework for diabetes diagnosis based on type-2 fuzzy semantic ontology approach, The Scientific Temper: Vol. 15 No. 03 (2024): The Scientific Temper
- S Selvakumari, M Durairaj, Performance Analysis of Deep Learning Optimizers for Arrhythmia Classification using PTB-XL ECG Dataset: Emphasis on Adam Optimizer, The Scientific Temper: Vol. 16 No. 11 (2025): The Scientific Temper
- Vaishali Yeole, Rushikesh Yeole, Pradheep Manisekaran, Analysis and prediction of stomach cancer using machine learning, The Scientific Temper: Vol. 16 No. Spl-1 (2025): The Scientific Temper
- Deepa S, Sripriya T, Radhika M, Jeneetha J. J, Experimental evaluation of artificial intelligence assisted heart disease prediction using deep learning principle, The Scientific Temper: Vol. 14 No. 04 (2023): The Scientific Temper
- Olivia C. Gold, Jayasimman Lawrence, Ensemble of CatBoost and neural networks with hybrid feature selection for enhanced heart disease prediction, The Scientific Temper: Vol. 15 No. 04 (2024): The Scientific Temper
- Krishna P. Kalyanathaya, Krishna Prasad K, A novel method for developing explainable machine learning framework using feature neutralization technique, The Scientific Temper: Vol. 15 No. 02 (2024): The Scientific Temper
- V Vijayaraj, M. Balamurugan, Monisha Oberai, Machine learning approaches to identify the data types in big data environment: An overview, The Scientific Temper: Vol. 14 No. 03 (2023): The Scientific Temper
- Jayaganesh Jagannathan, Agrawal Rajesh K, Neelam Labhade-Kumar, Ravi Rastogi, Manu Vasudevan Unni, K. K. Baseer, Developing interpretable models and techniques for explainable AI in decision-making, The Scientific Temper: Vol. 14 No. 04 (2023): The Scientific Temper
Most read articles by the same author(s)
- Radha K. Jana, Dharmpal Singh, Saikat Maity, Modified firefly algorithm and different approaches for sentiment analysis, The Scientific Temper: Vol. 15 No. 01 (2024): The Scientific Temper

