Proposed method against adversarial images based on ResNet architecture

Authors

  • Truong Phi Ho, Vietnam Academy of Cryptography Techniques
  • Pham Duy Trung
  • Dang Vu Hung
  • Nguyen Nhat Hai

DOI:

https://doi.org/10.54654/isj.v2i22.1036

Keywords:

deep learning, ResNet architecture, adversarial attack, adversarial training

Abstract

Our world is becoming increasingly automated as deep learning and machine learning models are deployed in real systems, but these systems are vulnerable to adversarial attacks, which craft deceptive inputs to mislead them. Without proper defenses, attackers can exploit deep learning systems used in facial recognition, self-driving cars, and social media filters. Research on adversarial image generation and on methods to defend against such attacks is therefore important. This paper proposes employing the ResNet architecture with adversarial training to defend against adversarial images. The model is tested on a Hybrid CIFAR-10 dataset, which is designed to improve robustness and accuracy by incorporating GAN-generated images. The proposed model achieves an accuracy of over 95%, outperforming three state-of-the-art architectures: VGG19_bn, ShuffleNetV2, and RepVGG_a2.
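To make the idea of adversarial training concrete, the sketch below shows the core loop on a deliberately tiny stand-in model: adversarial examples are generated with the Fast Gradient Sign Method (FGSM, Goodfellow et al., reference list) against the current model, mixed with clean data, and trained on jointly. The logistic-regression model, toy data, and epsilon value are illustrative assumptions, not the paper's ResNet pipeline or its Hybrid CIFAR-10 dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_wrt_input(x, y, w, b):
    # Gradient of binary cross-entropy w.r.t. the INPUT of a
    # logistic model: (p - y) * w for each sample.
    p = sigmoid(x @ w + b)
    return (p - y)[:, None] * w[None, :]

def fgsm(x, y, w, b, eps):
    # FGSM: x_adv = x + eps * sign(grad_x loss), an L-infinity
    # perturbation of size eps that increases the loss.
    return x + eps * np.sign(grad_wrt_input(x, y, w, b))

# Toy linearly separable binary data (assumption for illustration).
x = rng.normal(size=(64, 8))
w_true = rng.normal(size=8)
y = (x @ w_true > 0).astype(float)

# Adversarial training: at every step, craft adversarial examples
# against the CURRENT model and train on clean + adversarial data.
w, b, lr, eps = np.zeros(8), 0.0, 0.5, 0.25
for step in range(200):
    x_adv = fgsm(x, y, w, b, eps)      # adversarial batch
    x_mix = np.vstack([x, x_adv])      # clean + adversarial
    y_mix = np.concatenate([y, y])
    p = sigmoid(x_mix @ w + b)
    w -= lr * (x_mix.T @ (p - y_mix)) / len(y_mix)
    b -= lr * np.mean(p - y_mix)

acc_clean = np.mean((sigmoid(x @ w + b) > 0.5) == y)
acc_adv = np.mean((sigmoid(fgsm(x, y, w, b, eps) @ w + b) > 0.5) == y)
print(f"clean acc: {acc_clean:.2f}, adversarial acc: {acc_adv:.2f}")
```

The same structure carries over to the deep-learning setting: swap the logistic model for a ResNet, the cross-entropy gradient for backpropagation through the network, and FGSM for a stronger attack such as PGD if desired.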


References

S.-C. Huang, A. Pareek, M. Jensen, M. P. Lungren, S. Yeung, and A. S. Chaudhari, “Self-supervised learning for medical image classification: a systematic review and implementation guidelines,” NPJ Digital Medicine, vol. 6, no. 1, p. 74, 2023.

X. Lu, S. Li, and M. Fujimoto, “Automatic speech recognition,” Speech-to-speech translation, pp. 21-38, 2020.

V. M. Tuan, N. X. Thang, and T. Q. Anh, “Evaluating the efficiency of Vietnamese sms spam detection techniques,” Journal of Science and Technology on Information security, pp. 30–37, 2023.

H. Wang, J. Polden, J. Jirgens, Z. Yu, and Z. Pan, “Automatic rebar counting using image processing and machine learning,” in 2019 IEEE 9th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER). IEEE, 2019, pp. 900-904.

E. Yurtsever, J. Lambert, A. Carballo, and K. Takeda, “A survey of autonomous driving: Common practices and emerging technologies,” IEEE Access, vol. 8, pp. 58443–58469, 2020.

S. Mishra and A. R. Tripathi, “Ai business model: an integrative business approach,” Journal of Innovation and Entrepreneurship, vol. 10, no. 1, p. 18, 2021.

B. Marr, Artificial intelligence in practice: how 50 successful companies used AI and machine learning to solve problems. John Wiley & Sons, 2019.

S. Hussain, P. Neekhara, M. Jere, F. Koushanfar, and J. McAuley, “Adversarial deepfakes: Evaluating vulnerability of deepfake detectors to adversarial examples,” in Proceedings of the IEEE/CVF winter conference on applications of computer vision, 2021, pp. 3348-3357.

X. Yuan, P. He, Q. Zhu, and X. Li, “Adversarial examples: Attacks and defenses for deep learning,” IEEE transactions on neural networks and learning systems, vol. 30, no. 9, pp. 2805-2824, 2019.

J. Zhang and C. Li, “Adversarial examples: Opportunities and challenges,” IEEE transactions on neural networks and learning systems, vol. 31, no. 7, pp. 2578-2593, 2019.

E. Nowroozi, A. Dehghantanha, R. M. Parizi, and K.-K. R. Choo, “A survey of machine learning techniques in adversarial image forensics,” Computers & Security, vol. 100, p. 102092, 2021.

T. P. Ho, P. D. Trung, and B. T. Lam, “A novel generalized adversarial image method using descriptive features,” Journal of Science and Technology on Information security, pp. 63-76, 2023.

S. Zheng, Y. Song, T. Leung, and I. Goodfellow, “Improving the robustness of deep neural networks via stability training,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 4480-4488.

S.-Y. Wang, O. Wang, R. Zhang, A. Owens, and A. A. Efros, “Cnn-generated images are surprisingly easy to spot... for now,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 8695-8704.

J. Lu, T. Issaranon, and D. Forsyth, “Safetynet: Detecting and rejecting adversarial examples robustly,” in Proceedings of the IEEE international conference on computer vision, 2017, pp. 446-454.

Z. Gong and W. Wang, “Adversarial and clean data are not twins,” in Proceedings of the Sixth International Workshop on Exploiting Artificial Intelligence Techniques for Data Management, 2023, pp. 1-5.

I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.

R. Huang, B. Xu, D. Schuurmans, and C. Szepesvári, “Learning with a strong adversary,” arXiv preprint arXiv:1511.03034, 2015.

Y. Wu, D. Bamman, and S. Russell, “Adversarial training for relation extraction,” in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2017, pp. 1778-1783.

A. Baldominos, Y. Saez, and P. Isasi, “A survey of handwritten character recognition with mnist and emnist,” Applied Sciences, vol. 9, no. 15, p. 3169, 2019.

A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards deep learning models resistant to adversarial attacks,” arXiv preprint arXiv:1706.06083, 2017.

H. Zheng, Z. Zhang, J. Gu, H. Lee, and A. Prakash, “Efficient adversarial training with transferable adversarial examples,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 1181-1190.

H. Wang, A. Zhang, S. Zheng, X. Shi, M. Li, and Z. Wang, “Removing batch normalization boosts adversarial training,” in International Conference on Machine Learning. PMLR, 2022, pp. 23433–23445.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.

H. Zhang, Y. Yu, J. Jiao, E. Xing, L. El Ghaoui, and M. Jordan, “Theoretically principled trade-off between robustness and accuracy,” in International conference on machine learning. PMLR, 2019, pp. 7472-7482.

T.-H. Wu, H.-T. Su, S.-T. Chen, and W. H. Hsu, “Revisiting semi-supervised adversarial robustness via noise-aware online robust distillation,” arXiv preprint arXiv:2409.12946, 2024.

D. T. Pham, C. T. Nguyen, P. H. Truong, and N. H. Nguyen, “Automated generation of adaptive perturbed images based on gan for motivated adversaries on deep learning models,” in Proceedings of the 12th International Symposium on Information and Communication Technology, 2023, pp. 808-815.

Z. Zhuang, M. Liu, A. Cutkosky, and F. Orabona, “Understanding adamw through proximal methods and scale-freeness,” Transactions on machine learning research, 2022.

M. Reyad, A. M. Sarhan, and M. Arafa, “A modified adam algorithm for deep neural network optimization,” Neural Computing and Applications, vol. 35, no. 23, pp. 17095–17112, 2023.

C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.

C. Xiao, B. Li, J.-Y. Zhu, W. He, M. Liu, and D. Song, “Generating adversarial examples with adversarial networks,” arXiv preprint arXiv:1801.02610, 2018.

D. Hendrycks, K. Zhao, S. Basart, J. Steinhardt, and D. Song, “Natural adversarial examples,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 15262–15271.

M. M. Naseer, S. H. Khan, M. H. Khan, F. Shahbaz Khan, and F. Porikli, “Crossdomain transferability of adversarial perturbations,” Advances in Neural Information Processing Systems, vol. 32, 2019.

G. Jin, S. Shen, D. Zhang, F. Dai, and Y. Zhang, “Ape-gan: Adversarial perturbation elimination with gan,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 3842-3846.

T. Bai, J. Zhao, J. Zhu, S. Han, J. Chen, B. Li, and A. Kot, “Ai-gan: Attack-inspired generation of adversarial examples,” in 2021 IEEE International Conference on Image Processing (ICIP). IEEE, 2021, pp. 2543-2547.

A. Krizhevsky, G. Hinton et al., “Learning multiple layers of features from tiny images,” 2009.

Y. Abouelnaga, O. S. Ali, H. Rady, and M. Moustafa, “Cifar-10: Knn-based ensemble of classifiers,” in 2016 International Conference on Computational Science and Computational Intelligence (CSCI). IEEE, 2016, pp. 1192–1195.

M. Shaha and M. Pawar, “Transfer learning for image classification,” in 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), 2018, pp. 656-660.

Y. Martinez-Diaz, L. S. Luevano, H. Mendez-Vazquez, M. Nicolas-Diaz, L. Chang, and M. Gonzalez-Mendoza, “Shufflefacenet: A lightweight face architecture for efficient and highly-accurate face recognition,” in Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, 2019, pp. 0–0.

X. Ding, X. Zhang, N. Ma, J. Han, G. Ding, and J. Sun, “Repvgg: Making vgg-style convnets great again,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2021, pp. 13733–13742.

S. Kakarwal and P. Paithane, “Automatic pancreas segmentation using resnet-18 deep learning approach,” System research and information technologies, no. 2, pp. 104–116, 2022.

B. Koonce and B. Koonce, “Resnet 34,” Convolutional Neural Networks with Swift for Tensorflow: Image Recognition and Dataset Categorization, pp. 51–61, 2021.

A. Demir, F. Yilmaz, and O. Kose, “Early detection of skin cancer using deep learning architectures: resnet-101 and inception-v3,” in 2019 medical technologies congress (TIPTEKNO). IEEE, 2019, pp. 1-4.

A. Veit, M. J. Wilber, and S. Belongie, “Residual networks behave like ensembles of relatively shallow networks,” Advances in neural information processing systems, vol. 29, 2016.

J. Bjorck, K. Q. Weinberger, and C. Gomes, “Understanding decoupled and early weight decay,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 8, 2021, pp. 6777–6785.

S. Aronoff et al., “Classification accuracy: a user approach,” Photogrammetric Engineering and Remote Sensing, vol. 48, no. 8, pp. 1299-1307, 1982.

Q. Wang, Y. Ma, K. Zhao, and Y. Tian, “A comprehensive survey of loss functions in machine learning,” Annals of Data Science, pp. 1–26, 2020.

Z. Cai, X. Qiao, J. Zhang, Y. Feng, X. Hu, and N. Jiang, “Repvgg-simam: An efficient bad image classification method based on repvgg with simple parameter-free attention module,” Applied Sciences, vol. 13, no. 21, p. 11925, 2023.


Published

2024-10-01

How to Cite

Ho, T. P., Trung, P. D., Hung, D. V., & Hai, N. N. (2024). Proposed method against adversarial images based on ResNet architecture. Journal of Science and Technology on Information Security, 2(22), 69-82. https://doi.org/10.54654/isj.v2i22.1036

Issue

Section

Papers