StatFuse: Bridging Statistical Inference and Neural Prediction for Interpretable Forecasting

Ye Lei

Abstract

The integration of traditional statistical methods with modern deep learning architectures offers an opportunity to build prediction frameworks that balance accuracy with interpretability. This paper introduces StatFuse, a hybrid approach that combines statistical decomposition with neural prediction while maintaining rigorous uncertainty quantification. By coupling time-series analysis principles with neural architectures, the framework delivers competitive performance across benchmark datasets. The methodology incorporates conformal prediction intervals for distribution-free coverage guarantees and uses statistical diagnostics together with perturbation-based attribution to assess feature importance. Experimental validation on economic forecasting and public health monitoring tasks shows that StatFuse improves on strong baselines on two of four benchmarks and remains close to them on the other two, while offering enhanced interpretability.
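The distribution-free coverage guarantee mentioned above can be illustrated with a split conformal construction. This is a generic sketch, not the paper's actual implementation: the function name, the absolute-residual score, and the calibration setup are all illustrative assumptions, and the underlying point forecaster could be any fitted model.

```python
import numpy as np

def split_conformal_interval(cal_residuals, y_pred, alpha=0.1):
    """Distribution-free prediction interval via split conformal prediction.

    cal_residuals: absolute errors |y - y_hat| of the point forecaster on a
                   held-out calibration set (never seen during training).
    y_pred:        point prediction for a new input.
    alpha:         target miscoverage rate (0.1 -> ~90% coverage).
    """
    n = len(cal_residuals)
    # Finite-sample-corrected quantile level ceil((n+1)(1-alpha))/n gives the
    # standard marginal coverage guarantee >= 1 - alpha under exchangeability.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q_hat = np.quantile(cal_residuals, q_level, method="higher")
    return y_pred - q_hat, y_pred + q_hat
```

The guarantee is marginal and assumes calibration and test points are exchangeable, which is why conformal methods for time series typically add a weighting or adaptive step on top of this basic recipe.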

Article Details

Section: Articles

How to Cite

StatFuse: Bridging Statistical Inference and Neural Prediction for Interpretable Forecasting. (2026). Journal of Science, Innovation & Social Impact, 2(1), 205-216. https://sagespress.com/index.php/JSISI/article/view/97

References

1. C. Rudin, C. Chen, Z. Chen, H. Huang, L. Semenova, and C. Zhong, “Interpretable machine learning: Fundamental principles and 10 grand challenges,” Statistics Surveys, vol. 16, pp. 1–85, 2022, doi: 10.1214/21-SS133.

2. V. I. Kontopoulou, A. D. Panagopoulos, I. Kakkos, and G. K. Matsopoulos, “A review of ARIMA vs. machine learning approaches for time series forecasting in data driven networks,” Future Internet, vol. 15, no. 8, Art. no. 255, 2023, doi: 10.3390/fi15080255.

3. Y. Kong, Z. Wang, Y. Nie, T. Zhou, S. Zohren, Y. Liang, and Q. Wen, “Unlocking the power of LSTM for long term time series forecasting,” in Proc. AAAI Conf. Artif. Intell., vol. 39, no. 11, pp. 11968–11976, 2025, doi: 10.1609/aaai.v39i11.33303.

4. Y. Liu, T. Hu, H. Zhang, H. Wu, S. Wang, L. Ma, and M. Long, “iTransformer: Inverted transformers are effective for time series forecasting,” arXiv preprint arXiv:2310.06625, 2023, doi: 10.48550/arXiv.2310.06625.

5. M. Jin, S. Wang, L. Ma, Z. Chu, J. Y. Zhang, X. Shi, P.-Y. Chen, Y. Liang, Y.-F. Li, S. Pan, and Q. Wen, “Time-LLM: Time series forecasting by reprogramming large language models,” arXiv preprint arXiv:2310.01728, 2023, doi: 10.48550/arXiv.2310.01728.

6. A. N. Angelopoulos and S. Bates, “Conformal prediction: A gentle introduction,” Foundations and Trends in Machine Learning, vol. 16, no. 4, pp. 494–591, 2023, doi: 10.1561/2200000101.

7. J. Hao and F. Liu, “Improving long-term multivariate time series forecasting with a seasonal-trend decomposition-based 2-dimensional temporal convolution dense network,” Scientific Reports, vol. 14, no. 1, Art. no. 1689, 2024, doi: 10.1038/s41598-024-52240-y.

8. T. Papamarkou, M. Skoularidou, K. Palla, L. Aitchison, J. Arbel, D. Dunson, and R. Zhang et al., “Position: Bayesian deep learning is needed in the age of large-scale AI,” arXiv preprint arXiv:2402.00809, 2024.

9. D. S. Watson, J. O’Hara, N. Tax, R. Mudd, and I. Guy, “Explaining predictive uncertainty with information theoretic Shapley values,” in Advances in Neural Information Processing Systems, vol. 36, pp. 7330–7350, 2023.

10. F. Fumagalli, M. Muschalik, P. Kolpaczki, E. Hüllermeier, and B. Hammer, “SHAP-IQ: Unified approximation of any-order Shapley interactions,” in Advances in Neural Information Processing Systems, vol. 36, pp. 11515–11551, 2023.

11. J. Yan and H. Wang, “Self-interpretable time series prediction with counterfactual explanations,” in Proc. Int. Conf. Mach. Learn. (ICML), PMLR, vol. 202, pp. 39110–39125, 2023.

12. W. He, Z. Jiang, T. Xiao, Z. Xu, and Y. Li, “A survey on uncertainty quantification methods for deep learning,” arXiv preprint arXiv:2302.13425, 2023.

13. Z. Dong and F. Zhang, “Deep learning-based noise suppression and feature enhancement algorithm for LED medical imaging applications,” Journal of Science, Innovation and Social Impact, vol. 1, no. 1, pp. 9–18, 2025.

14. T. Xia, T. Dang, J. Han, L. Qendro, and C. Mascolo, “Uncertainty-aware health diagnostics via class-balanced evidential deep learning,” IEEE Journal of Biomedical and Health Informatics, vol. 28, no. 11, pp. 6417–6428, Nov. 2024, doi: 10.1109/JBHI.2024.3360002.

15. A. Adiga, G. Kaur, B. Hurt, L. Wang, P. Porebski, S. Venkatramanan, and M. Marathe, “Enhancing COVID-19 ensemble forecasting model performance using auxiliary data sources,” in Proc. 2022 IEEE Int. Conf. Big Data (Big Data), 2022, pp. 1594–1603, doi: 10.1109/BigData55660.2022.10020579.

16. Z. Dong and R. Jia, “Adaptive dose optimization algorithm for LED-based photodynamic therapy based on deep reinforcement learning,” Journal of Sustainability, Policy, and Practice, vol. 1, no. 3, pp. 144–155, 2025.

17. M. Muschalik, H. Baniecki, F. Fumagalli, P. Kolpaczki, B. Hammer, and E. Hüllermeier, “SHAP-IQ: Shapley interactions for machine learning,” in Advances in Neural Information Processing Systems, vol. 37, pp. 130324–130357, 2024, doi: 10.52202/079017-4141.

18. Z. Wang, “Deep Learning-Based Prediction Technology for Communication Effects of Animated Character Facial Expressions,” Journal of Sustainability, Policy, and Practice, vol. 1, no. 4, pp. 105–116, 2025.