Interpretable Cardiovascular Diagnosis using Multi-dimensional Feature Fusion and Deep Learning

Published: 25-02-2026

DOI: https://doi.org/10.58414/SCIENTIFICTEMPER.2026.17.2.03

Keywords:

Cardiovascular disease, Multi-modal Feature Fusion, SHAP (SHapley Additive exPlanations), Biomedical Signal Processing, Explainable AI (XAI)


Section: Research article

Authors

  • Hardik N Talsania Department of Computer Science & Engineering, Faculty of Engineering & Technology, Parul University, Waghodia, Vadodara, Gujarat, 391760, India.
  • Kirit Modi Department of Computer Engineering, Sankalchand Patel University, Visnagar, 384315, India.

Abstract

This research aims to improve the accuracy and interpretability of cardiovascular disease diagnosis by developing a multi-dimensional feature fusion model that captures the complex, multi-faceted nature of cardiovascular conditions. The framework uses modality-specific neural networks to extract features from five modalities: Electrocardiogram (ECG), Photoplethysmography (PPG), echocardiogram video, heart sounds, and clinical text. These features are consolidated via feature-level concatenation and processed through a Multi-Layer Perceptron (MLP) classifier. Testing against public databases demonstrated a peak diagnostic accuracy of 96.8%, significantly outperforming all unimodal and partial-modal benchmarks on key metrics including precision, recall, and F1-score. To provide clinical interpretability, SHapley Additive exPlanations (SHAP) analysis was used to quantify each modality's contribution to the final prediction; this analysis revealed that echocardiogram and ECG data carried the highest predictive power within the multi-modal framework. By consolidating disparate biomedical signals, the approach provides a robust path toward advanced diagnostics. Future development will focus on privacy-preserving architectures and the integration of these models into wearable technology for real-time, remote patient monitoring, ensuring the model remains viable for clinical environments. The framework's distinguishing contribution is the joint integration of five biomedical modalities with SHAP interpretability, outperforming traditional unimodal systems in both accuracy and clinical transparency.
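The fusion step described in the abstract, concatenating fixed-length per-modality embeddings and passing them to an MLP head, can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the embedding sizes, random features, and two-layer network are all hypothetical placeholders for the paper's modality-specific extractors and trained classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality embedding sizes (illustrative only).
MODALITY_DIMS = {
    "ecg": 64,
    "ppg": 32,
    "echo_video": 128,
    "heart_sound": 48,
    "clinical_text": 96,
}

def extract_features():
    """Stand-in for the modality-specific networks: returns one
    fixed-length embedding per modality (random here)."""
    return {m: rng.standard_normal(d) for m, d in MODALITY_DIMS.items()}

def fuse(features):
    """Feature-level fusion: concatenate all per-modality embeddings
    into a single vector, as described in the abstract."""
    return np.concatenate([features[m] for m in MODALITY_DIMS])

def mlp_forward(x, w1, b1, w2, b2):
    """Minimal two-layer MLP head: ReLU hidden layer, sigmoid output
    giving a disease probability."""
    h = np.maximum(0.0, x @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))

fused = fuse(extract_features())
in_dim = fused.shape[0]  # 64 + 32 + 128 + 48 + 96 = 368

# Untrained random weights, only to show the forward pass shape-wise.
w1 = rng.standard_normal((in_dim, 16)) * 0.1
b1 = np.zeros(16)
w2 = rng.standard_normal(16) * 0.1
b2 = 0.0

p_disease = mlp_forward(fused, w1, b1, w2, b2)
print(f"fused dim = {in_dim}, p(disease) = {p_disease:.3f}")
```

In the paper's pipeline, SHAP values would then be computed over this fused feature vector (or grouped per modality) to attribute the MLP's prediction back to individual modalities.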

How to Cite

Talsania, H. N., & Modi, K. (2026). Interpretable Cardiovascular Diagnosis using Multi-dimensional Feature Fusion and Deep Learning. The Scientific Temper, 17(02), 5602–5609. https://doi.org/10.58414/SCIENTIFICTEMPER.2026.17.2.03

