Fairness-Aware Multimodal Fusion for Early Chronic Disease Risk Prediction: A Temporal Deep Learning Approach


Xiaotong Shi

Abstract

Chronic diseases constitute a significant public health challenge, and early detection enables effective preventive intervention. This paper introduces a fairness-aware framework that integrates multimodal health data (electronic health records, medical imaging, genomics, and wearable sensors) for early chronic disease risk prediction. The approach addresses three critical challenges: cross-modal feature harmonization across heterogeneous data types, algorithmic bias mitigation through fairness-constrained learning, and temporal pattern extraction for disease progression modeling. Evaluation on diabetes, cardiovascular disease, and cancer prediction using MIMIC-IV, the UK Biobank, and wearable device cohorts (drawing on each cohort's available modalities where applicable) demonstrates superior performance (AUROC: 0.892-0.924) while maintaining demographic parity across age, sex, and racial groups. Fairness metrics improve by 76.8% relative to baseline approaches, measured as the reduction in the maximum subgroup AUROC gap, without sacrificing predictive accuracy, demonstrating that equitable healthcare AI is achievable through integrated fairness-aware design.
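The fairness metric referenced above (the maximum subgroup AUROC gap) can be illustrated with a minimal sketch. This is not the paper's implementation; function names and the Mann-Whitney-style AUROC computation are illustrative assumptions:

```python
import numpy as np

def auroc(y_true, y_score):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a random negative."""
    y_true = np.asarray(y_true, dtype=bool)
    y_score = np.asarray(y_score, dtype=float)
    pos, neg = y_score[y_true], y_score[~y_true]
    # Compare every positive score against every negative score.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def max_subgroup_auroc_gap(y_true, y_score, groups):
    """Largest difference in AUROC across demographic subgroups;
    a fairness-aware model drives this gap toward zero."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    groups = np.asarray(groups)
    aurocs = [auroc(y_true[groups == g], y_score[groups == g])
              for g in np.unique(groups)]
    return max(aurocs) - min(aurocs)
```

A "76.8% improvement" then corresponds to the gap of the fairness-constrained model being 23.2% of the baseline model's gap, computed on the same held-out cohort.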


How to Cite

Fairness-Aware Multimodal Fusion for Early Chronic Disease Risk Prediction: A Temporal Deep Learning Approach. (2026). Journal of Science, Innovation & Social Impact, 2(1), 217-231. https://sagespress.com/index.php/JSISI/article/view/98
