The next frontier of explainable artificial intelligence (XAI) in healthcare services: A study on PIMA diabetes dataset
DOI: https://doi.org/10.58414/SCIENTIFICTEMPER.2025.16.5.01

Keywords: Explainable AI, Healthcare AI, Model Interpretability, Clinical Decision Support, Diabetes Prediction, PIMA Diabetes Dataset, Transparent Machine Learning
License
Copyright (c) 2025 The Scientific Temper

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Abstract

The integration of Artificial Intelligence (AI) in healthcare has revolutionized disease diagnosis and risk prediction. However, the "black-box" nature of AI models raises concerns about trust, interpretability, and regulatory compliance. Explainable AI (XAI) addresses these issues by enhancing transparency in AI-driven decisions. This study explores the role of XAI in diabetes prediction using the PIMA Diabetes Dataset, evaluating machine learning models—logistic regression, decision trees, random forests, and deep learning—alongside SHAP and LIME explainability techniques. Data pre-processing includes handling missing values, feature scaling, and feature selection. Model performance is assessed through accuracy, AUC-ROC, precision-recall, F1-score, and computational efficiency. Findings reveal that the Random Forest model achieved the highest accuracy (93%) but required post-hoc explainability, while Logistic Regression provided inherent interpretability at a lower accuracy (81%). SHAP identified glucose, BMI, and age as key diabetes predictors, offering robust global explanations at a higher computational cost. LIME, with lower computational overhead, provided localized insights but lacked comprehensive interpretability. SHAP's exponential complexity limits real-time deployment, while LIME's linear complexity makes it more practical for clinical decision support. These insights underscore the importance of XAI in enhancing transparency and trust in AI-driven healthcare. Integrating explainability techniques can improve clinical decision-making and regulatory compliance. Future research should focus on hybrid XAI models that optimize accuracy, interpretability, and computational efficiency for real-time deployment in healthcare settings.
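The model comparison described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the PIMA CSV is not bundled here, so a synthetic stand-in dataset with the PIMA feature names is generated, and impurity-based feature importances serve as a rough proxy for SHAP's global importance ranking.

```python
# Sketch of the abstract's pipeline: scale features, fit an inherently
# interpretable Logistic Regression and a higher-accuracy Random Forest,
# then rank global feature importance. Synthetic data stands in for PIMA.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

FEATURES = ["Pregnancies", "Glucose", "BloodPressure", "SkinThickness",
            "Insulin", "BMI", "DiabetesPedigreeFunction", "Age"]

# Stand-in for the 768-row PIMA dataset (8 features, binary outcome)
X, y = make_classification(n_samples=768, n_features=8, n_informative=5,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Feature scaling, as in the paper's pre-processing step
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

logit = LogisticRegression(max_iter=1000).fit(X_train_s, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=42).fit(
    X_train_s, y_train)

for name, model in [("LogisticRegression", logit), ("RandomForest", forest)]:
    proba = model.predict_proba(X_test_s)[:, 1]
    print(f"{name}: acc={accuracy_score(y_test, model.predict(X_test_s)):.3f} "
          f"auc={roc_auc_score(y_test, proba):.3f}")

# Global importance ranking (a simple proxy for SHAP's global explanation)
ranked = sorted(zip(FEATURES, forest.feature_importances_),
                key=lambda p: -p[1])
print("top features:", [f for f, _ in ranked[:3]])
```

On the real dataset, the SHAP and LIME steps would replace the importance ranking above (e.g., a tree explainer for the forest and a per-patient local surrogate), at the computational costs the abstract discusses.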
Similar Articles
- Thangatharani T, M. Subalakshmi, Development of an adaptive machine learning framework for real-time anomaly detection in cybersecurity , The Scientific Temper: Vol. 16 No. 08 (2025): The Scientific Temper
- S. Udhaya Priya, M. Parveen, ETPPDMRL: A novel approach for prescriptive analytics of customer reviews via enhanced text parsing and reinforcement learning , The Scientific Temper: Vol. 16 No. 05 (2025): The Scientific Temper
- G. Vijayalakshmi, M. V. Srinath, Student’s Academic Performance Improvement Using Adaptive Ensemble Learning Method , The Scientific Temper: Vol. 16 No. 11 (2025): The Scientific Temper
- Olivia C. Gold, Jayasimman Lawrence, Enhanced LSTM for heart disease prediction in IoT-enabled smart healthcare systems , The Scientific Temper: Vol. 15 No. 02 (2024): The Scientific Temper
- M. Menaha, J. Lavanya, Crop yield prediction in diverse environmental conditions using ensemble learning , The Scientific Temper: Vol. 15 No. 03 (2024): The Scientific Temper
- Temesgen A. Asfaw, Batch size impact on enset leaf disease detection , The Scientific Temper: Vol. 15 No. 01 (2024): The Scientific Temper
- Lakshminarayani A, A Shaik Abdul Khadir, A blockchain-integrated smart healthcare framework utilizing dynamic hunting leadership algorithm with deep learning-based disease detection and classification model , The Scientific Temper: Vol. 15 No. 04 (2024): The Scientific Temper
- Nisha Patil, Archana Bhise, Rajesh K. Tiwari, Fusion deep learning with pre-post harvest quality management of grapes within the realm of supply chain management , The Scientific Temper: Vol. 15 No. 01 (2024): The Scientific Temper
- Pallavi M. Shimpi, Nitin N. Pise, Comparative Analysis of Machine Learning Algorithms for Malware Detection in Android Ecosystems , The Scientific Temper: Vol. 16 No. 11 (2025): The Scientific Temper
- A. Basheer Ahamed, M. Mohamed Surputheen, M. Rajakumar, Quantitative transfer learning- based students sports interest prediction using deep spectral multi-perceptron neural network , The Scientific Temper: Vol. 15 No. spl-1 (2024): The Scientific Temper
Most read articles by the same author(s)
- Radha K. Jana, Dharmpal Singh, Saikat Maity, Modified firefly algorithm and different approaches for sentiment analysis , The Scientific Temper: Vol. 15 No. 01 (2024): The Scientific Temper

