The next frontier of explainable artificial intelligence (XAI) in healthcare services: A study on PIMA diabetes dataset
DOI: https://doi.org/10.58414/SCIENTIFICTEMPER.2025.16.5.01
Keywords: Explainable AI, Healthcare AI, Model Interpretability, Clinical Decision Support, Diabetes Prediction, PIMA Diabetes Dataset, Transparent Machine Learning
License
Copyright (c) 2025 The Scientific Temper

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Abstract
The integration of Artificial Intelligence (AI) in healthcare has revolutionized disease diagnosis and risk prediction. However, the "black-box" nature of AI models raises concerns about trust, interpretability, and regulatory compliance. Explainable AI (XAI) addresses these issues by enhancing transparency in AI-driven decisions. This study explores the role of XAI in diabetes prediction using the PIMA Diabetes Dataset, evaluating machine learning models (logistic regression, decision trees, random forests, and deep learning) alongside the SHAP and LIME explainability techniques. Data pre-processing includes handling missing values, feature scaling, and feature selection. Model performance is assessed through accuracy, AUC-ROC, precision-recall, F1-score, and computational efficiency. Findings reveal that the Random Forest model achieved the highest accuracy (93%) but required post-hoc explainability, whereas Logistic Regression provided inherent interpretability at lower accuracy (81%). SHAP identified glucose, BMI, and age as key diabetes predictors, offering robust global explanations at a higher computational cost. LIME, with lower computational overhead, provided localized insights but lacked comprehensive global interpretability. SHAP’s exponential complexity limits real-time deployment, while LIME’s linear complexity makes it more practical for clinical decision support. These insights underscore the importance of XAI in enhancing transparency and trust in AI-driven healthcare. Integrating explainability techniques can improve clinical decision-making and regulatory compliance. Future research should focus on hybrid XAI models that optimize accuracy, interpretability, and computational efficiency for real-time deployment in healthcare settings.
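As a concrete illustration of the pipeline the abstract describes, the sketch below trains a Random Forest classifier on the PIMA dataset and then derives a global SHAP feature ranking plus a local LIME explanation for a single patient. It is a minimal sketch rather than the study's actual implementation: the file name pima_diabetes.csv, the median-imputation step, the 80/20 split, and the hyperparameters are assumptions made for illustration.

```python
# Illustrative sketch (not the authors' code): Random Forest on the PIMA
# diabetes dataset with post-hoc SHAP (global) and LIME (local) explanations.
# The file name, imputation choice, and hyperparameters are assumptions.
import numpy as np
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("pima_diabetes.csv")  # assumed local copy of the dataset

# Pre-processing: zeros in these columns are physiologically implausible,
# so treat them as missing and impute with the column median. (Feature
# scaling is omitted here because tree ensembles do not require it.)
for col in ["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]:
    df[col] = df[col].replace(0, np.nan).fillna(df[col].median())

X, y = df.drop(columns="Outcome"), df["Outcome"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=300, random_state=42)
model.fit(X_train, y_train)

pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]
print("Accuracy:", accuracy_score(y_test, pred))
print("AUC-ROC :", roc_auc_score(y_test, proba))

# Global explanation: TreeSHAP attributions aggregated over the test set.
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X_test)
# Depending on the SHAP version, sv is a list [class0, class1] or a 3-D
# array (samples, features, classes); take the positive-class slice.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
global_importance = pd.Series(np.abs(sv_pos).mean(axis=0), index=X.columns)
print(global_importance.sort_values(ascending=False))

# Local explanation: LIME fits a sparse linear surrogate around one patient.
lime_explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X.columns),
    class_names=["non-diabetic", "diabetic"],
    mode="classification",
)
exp = lime_explainer.explain_instance(
    X_test.values[0], model.predict_proba, num_features=5
)
print(exp.as_list())  # top feature contributions for this one prediction
```

Under this setup, TreeSHAP attributes every prediction in the test set and aggregates the results into a global ranking, which reflects the higher computational cost the abstract associates with SHAP, whereas LIME explains one patient at a time with a small linear surrogate, matching its lower per-query cost and purely local scope.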
Similar Articles
- Adedotun Adedayo F, Odusanya Oluwaseun A, Adesina Olumide S, Adeyiga J. A, Okagbue, Hilary I, Oyewole O, Prediction of automobile insurance fraud claims using machine learning, The Scientific Temper: Vol. 14 No. 03 (2023): The Scientific Temper
- Temesgen A. Asfaw, Deep learning hyperparameter’s impact on potato disease detection, The Scientific Temper: Vol. 14 No. 03 (2023): The Scientific Temper
- Dileep Pulugu, Shaik K. Ahamed, Senthil Vadivu, Nisarg Gandhewar, U D Prasan, S. Koteswari, Empowering healthcare with NLP-driven deep learning unveiling biomedical materials through text mining, The Scientific Temper: Vol. 15 No. 02 (2024): The Scientific Temper
- V. Seethala Devi, N. Vanjulavalli, K. Sujith, R. Surendiran, A metaheuristic optimisation algorithm-based optimal feature subset strategy that enhances the machine learning algorithm’s classifier performance, The Scientific Temper: Vol. 15 No. spl-1 (2024): The Scientific Temper
- Yanbo Wang, Yonghong Zhu, Jingjing Liu, Research on the current situation and influencing factors of college students learning engagement in a blended teaching environment, The Scientific Temper: Vol. 14 No. 03 (2023): The Scientific Temper
- C. Premila Rosy, Clustering of cancer text documents in the medical field using machine learning heuristics, The Scientific Temper: Vol. 16 No. 05 (2025): The Scientific Temper
- D. Padma Prabha, C. Victoria Priscilla, A combined framework based on LSTM autoencoder and XGBoost with adaptive threshold classification for credit card fraud detection, The Scientific Temper: Vol. 15 No. 02 (2024): The Scientific Temper
- Sathya R., Balamurugan P, Classification of glaucoma in retinal fundus images using integrated YOLO-V8 and deep CNN, The Scientific Temper: Vol. 15 No. 03 (2024): The Scientific Temper
- Gomathi Ramalingam, Logeswari S, M. D. Kumar, Manjula Prabakaran, Neerav Nishant, Syed A. Ahmed, Machine learning classifiers to predict the quality of semantic web queries, The Scientific Temper: Vol. 15 No. 01 (2024): The Scientific Temper
- Sachin V. Chaudhari, Jayamangala Sristi, R. Gopal, M. Amutha, V. Akshaya, Vijayalakshmi P, Optimizing biocompatible materials for personalized medical implants using reinforcement learning and Bayesian strategies, The Scientific Temper: Vol. 15 No. 01 (2024): The Scientific Temper
Most read articles by the same author(s)
- Radha K. Jana, Dharmpal Singh, Saikat Maity, Modified firefly algorithm and different approaches for sentiment analysis, The Scientific Temper: Vol. 15 No. 01 (2024): The Scientific Temper

