Research Article
Bridging the Gap Between Accuracy and Interpretability:
A Hybrid LSTM Approach with SHAP for ICU Mortality Prediction Using EHR Data
Rotimi Philip Adebayo*
Issue: Volume 14, Issue 6, December 2025
Pages: 102-120
Received: 27 September 2025
Accepted: 10 October 2025
Published: 29 December 2025
DOI: 10.11648/j.ijiis.20251406.11
Abstract: While deep learning models have achieved remarkable predictive performance, their adoption in healthcare is hindered by a lack of interpretability. This is a serious concern in high-stakes environments such as Intensive Care Units (ICUs), where transparency in decision-making is not merely desirable but essential. Several studies have documented a trade-off between performance and interpretability, with one typically sacrificed for the other. This study aims to demonstrate that performance and interpretability need not be mutually exclusive by proposing a hybrid framework that integrates a Long Short-Term Memory (LSTM) deep learning architecture with an explainability method, SHAP, for ICU mortality prediction using Electronic Health Record (EHR) data. The study employs publicly available ICU datasets (MIMIC-III/MIMIC-IV), which contain comprehensive EHR data for ICU patients. The LSTM achieved an accuracy of 98.6% and a recall of 87.5% on unseen data, but recorded low precision, indicating that the model was biased toward the majority class (no mortality). When compared with baseline models (Random Forest and Logistic Regression), the LSTM generally outperformed them. The main limitation is the class imbalance within the dataset, as reflected in the low precision. Despite this, the LSTM model maintained interpretability through SHAP without compromising predictive performance, thereby achieving a balance between accuracy, transparency, and clinical relevance.
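For readers who want a concrete picture of the kind of pipeline this abstract describes, the sketch below pairs a small Keras LSTM classifier with SHAP's GradientExplainer on synthetic ICU-style sequences. It is only an illustration of the general LSTM-plus-SHAP pattern, not the authors' implementation; every shape, hyperparameter, and variable name is an assumption, and the data is randomly generated rather than drawn from MIMIC.

```python
import numpy as np
import shap
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic stand-in for MIMIC-style sequences: 48 hourly steps x 20 vitals/labs per ICU stay.
n_stays, n_steps, n_features = 500, 48, 20
rng = np.random.default_rng(0)
X = rng.normal(size=(n_stays, n_steps, n_features)).astype("float32")
y = (rng.random(n_stays) < 0.1).astype("float32")  # ~10% mortality: imbalanced, as in the paper

# Minimal LSTM mortality classifier (binary output).
model = keras.Sequential([
    layers.Input(shape=(n_steps, n_features)),
    layers.LSTM(64),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[keras.metrics.Recall(), keras.metrics.Precision()])
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

# Per-timestep, per-feature SHAP attributions; GradientExplainer accepts recurrent Keras models.
explainer = shap.GradientExplainer(model, X[:100])
shap_values = explainer.shap_values(X[:5])  # attributions for the first five stays: which vitals drove the risk, and when
```

The attributions can then be aggregated over time steps to rank clinical features, which is the kind of transparency the abstract refers to.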
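The abstract also compares the LSTM against Random Forest and Logistic Regression baselines and attributes the low precision to class imbalance. The sketch below shows one way such a comparison might be run with scikit-learn; the 5% positive rate, feature dimensions, and class-weighting choice are assumptions for demonstration only, not details from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for flattened EHR features: 1000 stays, ~5% mortality (imbalanced).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))
y = (rng.random(1000) < 0.05).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

baselines = {
    "logistic_regression": LogisticRegression(max_iter=1000, class_weight="balanced"),
    "random_forest": RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0),
}
for name, clf in baselines.items():
    clf.fit(X_tr, y_tr)
    print(name)
    # Per-class precision/recall exposes the majority-class bias that overall accuracy hides.
    print(classification_report(y_te, clf.predict(X_te), zero_division=0))
```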
Research Article
AI-Powered Music Generation from Sequential Motion Signals: A Study in LSTM-Based Modelling
Wisam Bukaita*, Nestor Gomez Artiles, Ishaan Pathak
Issue: Volume 14, Issue 6, December 2025
Pages: 121-135
Received: 27 November 2025
Accepted: 9 December 2025
Published: 29 December 2025
DOI: 10.11648/j.ijiis.20251406.12
Abstract: This study presents an interactive AI-driven framework for real-time piano music generation from human body motion, establishing a coherent link between physical gesture and computational creativity. The proposed system integrates computer vision–based motion capture with sequence-oriented deep learning to translate continuous movement dynamics into structured musical output. Human pose is extracted using MediaPipe, while OpenCV is employed for temporal motion tracking to derive three-dimensional skeletal landmarks and velocity-based features that modulate musical expression. These motion-derived signals condition a Long Short-Term Memory (LSTM) network trained on a large corpus of classical piano MIDI compositions, enabling the model to preserve stylistic coherence and long-range musical dependencies while dynamically adapting tempo and rhythmic intensity in response to real-time performer movement. The data processing pipeline includes MIDI event encoding, sequence segmentation, feature normalization, and multi-layer LSTM training optimized using cross-entropy loss and the RMSprop optimizer. Model performance is evaluated quantitatively through loss convergence and note diversity metrics, and qualitatively through assessments of musical coherence and system responsiveness. Experimental results demonstrate that the proposed LSTM-based generator maintains structural stability while producing diverse and expressive musical sequences that closely reflect variations in motion velocity. By establishing a closed-loop, real-time mapping between gesture and sound, the framework enables intuitive, embodied musical interaction without requiring traditional instrumental expertise. It thereby advances embodied AI and multimodal human–computer interaction while opening new opportunities for digital performance, creative education, and accessible music generation through movement.
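As an illustration of the motion-capture front end described above, the sketch below extracts 3-D pose landmarks with MediaPipe from OpenCV video frames and derives a simple frame-to-frame velocity signal. It is a minimal approximation of the kind of feature pipeline the abstract outlines, assuming a webcam source; the function name, frame budget, and the use of mean landmark speed as the expressive control signal are illustrative choices, not details taken from the paper.

```python
import cv2
import numpy as np
import mediapipe as mp

# Hypothetical feature extractor: 3-D pose landmarks plus a frame-to-frame velocity scalar.
mp_pose = mp.solutions.pose

def motion_features(video_source=0, max_frames=300):
    """Yield (landmarks, mean_speed) pairs; the speed could modulate tempo or note density."""
    cap = cv2.VideoCapture(video_source)
    pose = mp_pose.Pose(static_image_mode=False)
    prev = None
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks is None:
                continue
            pts = np.array([[lm.x, lm.y, lm.z] for lm in results.pose_landmarks.landmark])
            vel = np.zeros_like(pts) if prev is None else pts - prev
            prev = pts
            # Mean landmark speed is one simple scalar proxy for movement intensity.
            yield pts, float(np.linalg.norm(vel, axis=1).mean())
    finally:
        pose.close()
        cap.release()
```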
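The generative half of the pipeline, a multi-layer LSTM trained on encoded MIDI events with cross-entropy loss and the RMSprop optimizer, could be sketched along the following lines. The token vocabulary, sequence length, layer sizes, and sampling scheme are assumptions; a real MIDI event encoding would replace the random token stream used here.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy stand-in for encoded MIDI events: integer note/event tokens from a 128-symbol vocabulary.
vocab_size, seq_len = 128, 32
tokens = np.random.randint(0, vocab_size, size=5000)
X = np.stack([tokens[i:i + seq_len] for i in range(len(tokens) - seq_len)])
y = tokens[seq_len:]

# Multi-layer LSTM trained with cross-entropy loss and the RMSprop optimizer, as in the abstract.
model = keras.Sequential([
    layers.Input(shape=(seq_len,), dtype="int32"),
    layers.Embedding(vocab_size, 64),
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(vocab_size, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=1, batch_size=64, verbose=0)

# Sample the next event from a seed sequence; in the full system a motion-derived velocity
# signal would modulate the sampling (e.g. temperature or note density).
probs = model.predict(X[:1], verbose=0)[0].astype("float64")
probs /= probs.sum()
next_event = int(np.random.choice(vocab_size, p=probs))
```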