Privacy-Preserving Feature Attribution Explanations for Large-Scale Recommendation Systems: A Differential Privacy Approach
Abstract
Modern recommendation systems increasingly demand explainable predictions while simultaneously protecting user privacy. Existing feature attribution methods for recommender systems often expose sensitive user information through detailed explanations, creating significant privacy risks. This paper presents a comprehensive privacy-preserving feature attribution framework designed for large-scale recommendation systems. Our approach integrates differential privacy mechanisms with gradient-based feature attribution techniques, enabling transparent recommendations while maintaining strict privacy guarantees. The framework employs adaptive noise injection, dynamic privacy budget allocation, and multi-level transparency controls to balance explanation quality with privacy protection. We introduce novel concentrated differential privacy composition bounds optimized for sequential attribution queries, together with automated compliance verification mechanisms. Extensive experiments on the MovieLens, Amazon, and Yelp datasets demonstrate that our framework maintains reasonable recommendation accuracy while providing meaningful explanations under strong privacy constraints. The proposed approach achieves favorable privacy-utility trade-offs, with recommendation accuracy degradation of 8-15% while ensuring ε-differential privacy with ε ≤ 1.0, a significant improvement over existing privacy-preserving explanation methods.
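To make the core idea concrete, the following is a minimal sketch of privatizing a gradient-based attribution vector with the standard Gaussian mechanism: the attribution's L2 norm is clipped to bound per-user sensitivity, then calibrated noise is added before release. This is an illustrative sketch only, not the paper's actual algorithm; the function name `dp_attribution` and its parameters are hypothetical, and the paper's adaptive noise injection and concentrated-DP composition bounds are not reproduced here.

```python
import numpy as np

def dp_attribution(grad_attr, epsilon=1.0, delta=1e-5, clip_norm=1.0, rng=None):
    """Release a feature-attribution vector under (epsilon, delta)-DP.

    Illustrative Gaussian mechanism: clip the L2 norm so a single user's
    attribution has bounded sensitivity, then add calibrated Gaussian noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    attr = np.asarray(grad_attr, dtype=float)

    # Clip: after this step the L2 sensitivity of the release is clip_norm.
    norm = np.linalg.norm(attr)
    if norm > clip_norm:
        attr = attr * (clip_norm / norm)

    # Standard Gaussian-mechanism calibration for (epsilon, delta)-DP.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return attr + rng.normal(0.0, sigma, size=attr.shape)

# Example: privatize one user's gradient attributions for three features.
noisy = dp_attribution(np.array([0.4, -0.2, 0.1]), epsilon=1.0)
```

Under sequential attribution queries, each call spends part of the privacy budget, which is why the paper's tighter concentrated-DP composition bounds matter in practice.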