Amplified Gradient Inversion Attacks on Federated Learning Frameworks

Authors

  • Tran Anh Tu
  • Dinh Cong Thanh, Academy of Cryptography Techniques
  • Tran Duc Su, Posts and Telecommunications Institute of Technology

DOI:

https://doi.org/10.54654/isj.v3i23.1066

Keywords:

Federated learning, inversion attacks, differential privacy, homomorphic encryption

Abstract

Federated Learning (FL) facilitates collaborative model training while safeguarding data privacy, making it ideal for sensitive fields such as finance, education, and healthcare. Despite its promise, FL remains vulnerable to privacy breaches, particularly through gradient inversion attacks that can reconstruct private data from shared model updates. This research introduces a nonlinear amplification strategy that enhances the potency of such attacks, revealing heightened risks of data leakage in FL environments. Additionally, we evaluate the resilience of privacy-preserving mechanisms such as Differential Privacy (DP) and Homomorphic Encryption (HE), employing two proposed metrics, AvgSSIM and AvgMSE, to measure both the severity of attacks and the efficacy of defenses.
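To illustrate how the two evaluation metrics might be computed in practice, the following Python sketch averages per-image SSIM and MSE between a batch of original images and their gradient-inversion reconstructions. The function name avg_ssim_mse, the averaging scheme, and the use of scikit-image are assumptions made for illustration; the paper's exact definitions of AvgSSIM and AvgMSE may differ.

    # Hedged sketch: one plausible way to compute batch-averaged SSIM/MSE
    # between original images and their gradient-inversion reconstructions.
    # The paper's exact AvgSSIM/AvgMSE definitions may differ from this.
    import numpy as np
    from skimage.metrics import structural_similarity, mean_squared_error

    def avg_ssim_mse(originals, reconstructions, data_range=1.0):
        # originals, reconstructions: arrays of shape (N, H, W, C), pixel values in [0, 1]
        ssims, mses = [], []
        for orig, rec in zip(originals, reconstructions):
            ssims.append(structural_similarity(orig, rec,
                                                data_range=data_range,
                                                channel_axis=-1))
            mses.append(mean_squared_error(orig, rec))
        # Higher AvgSSIM / lower AvgMSE indicates a more faithful reconstruction,
        # i.e. a more severe privacy leak (or a weaker defense).
        return float(np.mean(ssims)), float(np.mean(mses))

Under an effective defense such as DP noise addition or HE-based secure aggregation, one would expect AvgSSIM to fall and AvgMSE to rise relative to the undefended baseline.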


References

Martin Abadi, Andy Chu, Ian Goodfellow, H Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang, “Deep learning with differential privacy”, Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, pp. 308–318, 2016.

Alham Fikri Aji and Kenneth Heafield, “Sparse communication for distributed gradient descent”, arXiv preprint arXiv:1704.05021, 2017.

Yoshinori Aono, Takuya Hayashi, Lihua Wang, Shiho Moriai, et al, “Privacy-preserving deep learning via additively homomorphic encryption”, IEEE transactions on information forensics and security, 13(5):1333–1345, 2017.

Jeremy Bernstein, Yu-Xiang Wang, Kamyar Azizzadenesheli, and Animashree Anandkumar, “signSGD: Compressed optimisation for non-convex problems”, International Conference on Machine Learning, pp. 560–569. PMLR, 2018.

Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth, “Practical secure aggregation for federated learning on user-held data”, arXiv preprint arXiv:1611.04482, 2016.

Si Chen, Mostafa Kahla, Ruoxi Jia, and Guo-Jun Qi, “Knowledge-enriched distributional model inversion attacks”, Proceedings of the IEEE/CVF international conference on computer vision, pp. 16178–16187, 2021.

Jinhao Duan, Fei Kong, Shiqi Wang, Xiaoshuang Shi, and Kaidi Xu, “Are diffusion models vulnerable to membership inference attacks?”, International Conference on Machine Learning, pp. 8717–8730. PMLR, 2023.

Matt Fredrikson, Somesh Jha, and Thomas Ristenpart, “Model inversion attacks that exploit confidence information and basic countermeasures”, Proceedings of the 22nd ACM SIGSAC conference on computer and communications security, pp. 1322–1333, 2015.

Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, and Michael Moeller, “Inverting gradients - how easy is it to break privacy in federated learning?”, Advances in neural information processing systems, 33:16937–16947, 2020.

Briland Hitaj, Giuseppe Ateniese, and Fernando Perez-Cruz, “Deep models under the GAN: information leakage from collaborative deep learning”, Proceedings of the 2017 ACM SIGSAC conference on computer and communications security, pp. 603–618, 2017.

Jinwoo Jeon, Kangwook Lee, Sewoong Oh, Jungseul Ok, et al, “Gradient inversion with generative image prior”, Advances in neural information processing systems, 34:29898–29908, 2021.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas, “Communication-efficient learning of deep networks from decentralized data”, Artificial intelligence and statistics, pp. 1273–1282. PMLR, 2017.

Ahmed Salem, Apratim Bhattacharya, Michael Backes, Mario Fritz, and Yang Zhang, “Updates-Leak: Data set inference and reconstruction attacks in online learning”, 29th USENIX Security Symposium (USENIX Security 20), pp. 1291–1308, 2020.

T. A. Tu, L. T. Dung, and P. X. Sang, “A novel privacy-preserving federated learning model based on secure multi-party computation”, International Symposium on Integrated Uncertainty in Knowledge Modelling and Decision Making, pp. 321–333. Springer, 2023.

Ziqi Yang, Jiyi Zhang, Ee-Chien Chang, and Zhenkai Liang, “Neural network inversion in adversarial setting via background knowledge alignment”, Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pp. 225–240, 2019.

Hongxu Yin, Arun Mallya, Arash Vahdat, Jose M Alvarez, Jan Kautz, and Pavlo Molchanov, “See through gradients: Image batch recovery via GradInversion”, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 16337–16346, 2021.

Yuheng Zhang, Ruoxi Jia, Hengzhi Pei, Wenxiao Wang, Bo Li, and Dawn Song, “The secret revealer: Generative model-inversion attacks against deep neural networks”, Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 253-261, 2020.

Zeping Zhang, Xiaowen Wang, Jie Huang, and Shuaishuai Zhang, “Analysis and utilization of hidden information in model inversion attacks”, IEEE Transactions on Information Forensics and Security, vol. 18, pp. 4449–4462, 2023.

Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen, “iDLG: Improved deep leakage from gradients”, arXiv preprint arXiv:2001.02610, 2020.

Ligeng Zhu, Zhijian Liu, and Song Han, “Deep leakage from gradients”, Advances in neural information processing systems, 32:14774–14784, 2019.


Published

2024-12-19

How to Cite

Tu, T. A., Thanh, D. C., & Su, T. D. (2024). Amplified Gradient Inversion Attacks on Federated Learning Frameworks. Journal of Science and Technology on Information Security, 3(23), 15-26. https://doi.org/10.54654/isj.v3i23.1066

Issue

Vol. 3 No. 23 (2024)

Section

Papers