The next frontier of explainable artificial intelligence (XAI) in healthcare services: A study on PIMA diabetes dataset
DOI: https://doi.org/10.58414/SCIENTIFICTEMPER.2025.16.5.01

Keywords: Explainable AI, Healthcare AI, Model Interpretability, Clinical Decision Support, Diabetes Prediction, PIMA Diabetes Dataset, Transparent Machine Learning
License
Copyright (c) 2025 The Scientific Temper

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Abstract

The integration of Artificial Intelligence (AI) in healthcare has revolutionized disease diagnosis and risk prediction. However, the "black-box" nature of AI models raises concerns about trust, interpretability, and regulatory compliance. Explainable AI (XAI) addresses these issues by enhancing the transparency of AI-driven decisions. This study explores the role of XAI in diabetes prediction using the PIMA Diabetes Dataset, evaluating machine learning models (logistic regression, decision trees, random forests, and deep learning) alongside the SHAP and LIME explainability techniques. Data pre-processing includes handling missing values, feature scaling, and feature selection. Model performance is assessed through accuracy, AUC-ROC, precision-recall, F1-score, and computational efficiency.

Findings reveal that the random forest model achieved the highest accuracy (93%) but required post-hoc explainability, whereas logistic regression provided inherent interpretability at lower accuracy (81%). SHAP identified glucose, BMI, and age as the key diabetes predictors, offering robust global explanations at a higher computational cost. LIME, with lower computational overhead, provided localized insights but lacked comprehensive interpretability. SHAP's exponential complexity limits real-time deployment, while LIME's linear complexity makes it more practical for clinical decision support.

These insights underscore the importance of XAI in enhancing transparency and trust in AI-driven healthcare. Integrating explainability techniques can improve clinical decision-making and regulatory compliance. Future research should focus on hybrid XAI models that balance accuracy, interpretability, and computational efficiency for real-time deployment in healthcare settings.
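The kind of model-agnostic explanation the abstract describes can be sketched as follows. This is not the paper's pipeline: the study uses SHAP and LIME on the PIMA dataset, while this sketch substitutes scikit-learn's permutation importance as a simpler model-agnostic global explanation, and uses synthetic data as a stand-in for the PIMA features (feature names, sample size, and signal strengths are all illustrative assumptions).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
features = ["glucose", "bmi", "age", "blood_pressure"]
X = rng.normal(size=(n, len(features)))
# Make the synthetic label depend mainly on "glucose" and "bmi",
# echoing the abstract's finding that these features dominate.
y = (1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.2 * rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time on held-out data
# and measure the drop in score; a larger drop means the model relies
# more heavily on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranking = sorted(zip(features, result.importances_mean), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name:15s} {score:.3f}")
```

SHAP would additionally attribute each individual prediction to its features (at higher computational cost), and LIME would fit a local linear surrogate around one patient; permutation importance gives only the global ranking, but it illustrates the post-hoc, model-agnostic workflow the study evaluates.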
Similar Articles
- Varsha Sharma, Krishna Kumar Gupta, Comparative accuracy of IOL power calculation formulas in nanophthalmic eyes undergoing cataract surgery, The Scientific Temper: Vol. 16 No. 07 (2025): The Scientific Temper
- Deepa Ramachandran VR, Kamalraj N, Hybrid deep segmentation architecture using dual attention U-Net and Mask-RCNN for accurate detection of pests, diseases, and weeds in crops, The Scientific Temper: Vol. 16 No. 07 (2025): The Scientific Temper
- S. Sindhu, L. Arockiam, A lightweight selective stacking framework for IoT crop recommendation, The Scientific Temper: Vol. 15 No. 04 (2024): The Scientific Temper
- Abhishek Pandey, V Ramesh, Puneet Mittal, Suruthi, Muniyandy Elangovan, G. Deepa, Exploring advancements in deep learning for natural language processing tasks, The Scientific Temper: Vol. 14 No. 04 (2023): The Scientific Temper
- Ravikiran K, Neerav Nishant, M Sreedhar, N. Kavitha, Mathur N Kathiravan, Geetha A, Deep learning methods and integrated digital image processing techniques for detecting and evaluating wheat stripe rust disease, The Scientific Temper: Vol. 14 No. 03 (2023): The Scientific Temper
- Divya R., Vanathi P. T., Harikumar R., An optimized cardiac risk levels classifier based on GMM with min-max model from photoplethysmography signals, The Scientific Temper: Vol. 15 No. 03 (2024): The Scientific Temper
- Ramya Singh, Archana Sharma, Nimit Gupta, Nursing on the edge: An empirical exploration of gig workers in healthcare and the unseen impacts on the nursing profession, The Scientific Temper: Vol. 15 No. 01 (2024): The Scientific Temper
- B. Swaminathan, G. Komahan, A. Venkatesh, Linear and non-linear mathematical model of the physiological behavior of diabetes, The Scientific Temper: Vol. 15 No. spl-1 (2024): The Scientific Temper
- C. Agilan, Lakshna Arun, Optimization-based clustering feature extraction approach for human emotion recognition, The Scientific Temper: Vol. 15 No. spl-1 (2024): The Scientific Temper
- K. Fathima, A. R. Mohamed Shanavas, TALEX: Transformer-Attention-Led EXplainable Feature Selection for Sentiment Classification, The Scientific Temper: Vol. 16 No. 11 (2025): The Scientific Temper
Most read articles by the same author(s)
- Radha K. Jana, Dharmpal Singh, Saikat Maity, Modified firefly algorithm and different approaches for sentiment analysis, The Scientific Temper: Vol. 15 No. 01 (2024): The Scientific Temper

