Enhancing Transparency in Asset-Backed Securities: A Deep Learning Approach for Automated Risk Assessment and Regulatory Compliance

Jiahui Han

Abstract

The 2008 financial crisis exposed critical transparency deficiencies in asset-backed securities markets, prompting regulatory reforms that mandate asset-level disclosure. This research develops an automated risk assessment framework combining deep neural networks with SHAP explainability techniques to address the regulatory technology gap in processing large-scale securitization data. The framework processes Schedule AL disclosures from SEC Electronic Data Gathering, Analysis, and Retrieval (EDGAR) filings, extracting loan-level and pool-level features to predict default risk while providing an interpretable explanation for each assessment. Empirical validation on 450,382 mortgages from 50 residential mortgage-backed securities transactions yields an AUC-ROC of 0.883, outperforming an XGBoost baseline by 2.7 percentage points while preserving transparency through per-feature attribution. Case studies illustrate practical applications in detecting underwriting quality deterioration and geographic risk concentration, supporting regulatory compliance monitoring and investor protection objectives.
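The abstract describes a pipeline in which a deep neural network scores loan-level default risk and SHAP attributions explain each score. The snippet below is a minimal sketch of that pattern under stated assumptions, not the authors' implementation: it uses synthetic data in place of Schedule AL disclosures, a small scikit-learn MLP standing in for the paper's deep architecture, and SHAP's model-agnostic KernelExplainer rather than the study's actual explainer configuration; all feature names are hypothetical.

```python
# Illustrative sketch only, not the authors' implementation.
# Assumptions: synthetic data instead of Schedule AL disclosures, a small
# scikit-learn MLP in place of the paper's deep network, and SHAP's
# model-agnostic KernelExplainer; feature names are hypothetical.
import numpy as np
import shap
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Hypothetical loan-level features (e.g., LTV, DTI, credit score).
feature_names = ["ltv", "dti", "credit_score", "rate_spread", "loan_age"]
X = rng.normal(size=(2000, len(feature_names)))
# Synthetic default labels driven mainly by the first two features.
y = (0.8 * X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 1.0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feed-forward classifier predicting loan-level default probability.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("AUC-ROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# SHAP attributions make each individual risk assessment interpretable.
background = X_train[:100]
explainer = shap.KernelExplainer(lambda x: model.predict_proba(x)[:, 1], background)
shap_values = explainer.shap_values(X_test[:5])
for i, contribs in enumerate(shap_values):
    top = np.argsort(-np.abs(contribs))[:3]
    print(f"loan {i}: top risk drivers -> {[feature_names[j] for j in top]}")
```

The printed AUC-ROC and per-loan "top risk driver" lists mirror the two outputs the framework emphasizes, aggregate discrimination performance and interpretable per-assessment attributions, but the numbers produced here come from synthetic data and bear no relation to the reported 0.883.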

Article Details

Section

Articles

How to Cite

Enhancing Transparency in Asset-Backed Securities: A Deep Learning Approach for Automated Risk Assessment and Regulatory Compliance. (2026). Journal of Science, Innovation & Social Impact, 2(1), 1-17. https://sagespress.com/index.php/JSISI/article/view/78

References

1. N. Bussmann, P. Giudici, D. Marinelli, and J. Papenbrock, "Explainable machine learning in credit risk management," Computational Economics, vol. 57, no. 1, pp. 203-216, 2021. doi: 10.1007/s10614-020-10042-0

2. B. H. Misheva, J. Osterrieder, A. Hirsa, O. Kulkarni, and S. F. Lin, "Explainable AI in credit risk management," arXiv preprint arXiv:2103.00949, 2021.

3. Z. Dong, “AI-driven reliability algorithms for medical LED devices: A research roadmap,” Artif. Intell. Mach. Learn. Rev., vol. 5, no. 2, pp. 54–63, 2024.

4. S. Fritz-Morgenthal, B. Hein, and J. Papenbrock, "Financial risk management and explainable, trustworthy, responsible AI," Frontiers in artificial intelligence, vol. 5, p. 779799, 2022. doi: 10.3389/frai.2022.779799

5. G. Chakkappan, A. Morshed, and M. M. Rashid, "Explainable AI and Big Data Analytics for Data Security Risk and Privacy Issues in the Financial Industry," In 2024 IEEE Conference on Engineering Informatics (ICEI), November, 2024, pp. 1-9. doi: 10.1109/icei64305.2024.10912422

6. N. Bussmann, P. Giudici, D. Marinelli, and J. Papenbrock, "Explainable AI in fintech risk management," Frontiers in Artificial Intelligence, vol. 3, p. 26, 2020. doi: 10.3389/frai.2020.00026

7. J. S. Kadyan, M. Sharma, S. Kadyan, S. Gupta, N. K. Hamid, and B. K. Bala, "Explainable AI with Capsule Networks for Credit Risk Assessment in Financial Systems," In 2025 International Conference on Next Generation Information System Engineering (NGISE), March, 2025, pp. 1-6.

8. M. Rakshitha, and V. K. MU, "A Study on Application of Explainable AI for Credit Risk Management of an Individual," In 2024 8th International Conference on Computational System and Information Technology for Sustainable Solutions (CSITSS), November, 2024, pp. 1-7.

9. K. D. Hartomo, C. Arthur, and Y. Nataliani, "A novel weighted loss tabtransformer integrating explainable ai for imbalanced credit risk datasets," IEEE Access, 2025.

10. P. T. Vi, and V. M. Phuc, "Credit Risk Prediction in Vietnamese Commercial Banks With an Explainable AI Framework Using XGBoost," In Navigating Computing Challenges for a Sustainable World, 2025, pp. 193-204. doi: 10.4018/979-8-3373-0462-5.ch012

11. A. I. Akkalkot, N. Kulshrestha, G. Sharma, K. S. Sidhu, and S. S. Palimkar, "Challenges and Opportunities in Deploying Explainable AI for Financial Risk Assessment," In 2025 International Conference on Pervasive Computational Technologies (ICPCT), February, 2025, pp. 382-386.

12. Z. Dong and F. Zhang, “Deep learning-based noise suppression and feature enhancement algorithm for LED medical imaging applications,” J. Sci., Innov. Soc. Impact, vol. 1, no. 1, pp. 9–18, 2025.

13. P. Murthy, S. Gaur, T. Jolly, G. Sharma, and R. Rathore, "Integrated Explainable AI for Financial Risk Management: A Systematic Approach," In 2025 IEEE International Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI), March, 2025, pp. 1-6. doi: 10.1109/iatmsi64286.2025.10984539

14. R. Srikanteswara, K. Naghera, S. B. Kukkaje, and A. Kumar, "Credit Risk Assessment using Ensemble Models and Explainable AI," In 2025 3rd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT), February, 2025, pp. 1505-1511. doi: 10.1109/idciot64235.2025.10914916

15. P. E. De Lange, B. Melsom, C. B. Vennerød, and S. Westgaard, "Explainable AI for credit assessment in banks," Journal of Risk and Financial Management, vol. 15, no. 12, p. 556, 2022.

16. S. Sikha, and A. Vijayakumar, "Explainable AI Using h2o AutoML and Robustness Check in Credit Risk Management," In 2023 Intelligent Computing and Control for Engineering and Business Systems (ICCEBS), December, 2023, pp. 1-5.

17. H. Gonaygunta, M. H. Maturi, A. R. Yadulla, R. K. Ravindran, E. De La Cruz, G. S. Nadella, and K. Meduri, "Utilizing Explainable AI in Financial Risk Assessment: Enhancing User Empowerment through Interpretable Credit Scoring Models," In 2025 Systems and Information Engineering Design Symposium (SIEDS), May, 2025, pp. 444-449.