Developing approaches to increase the robustness of machine learning models for detecting distributed denial-of-service attacks
Abstract
This paper analyzes and develops approaches to improving the robustness of machine learning models that detect distributed denial-of-service (DDoS) attacks against adversarial attacks. To improve robustness, the training set was augmented with relevant examples produced by the generator of a GAN model. The 2019 DDoS Evaluation Dataset (CIC-DDoS2019), developed by the Canadian Institute for Cybersecurity (CIC) for research and development in DDoS attack detection and prevention, is used as the dataset. The data contain 128,027 attack samples and 97,718 normal requests; the abnormal requests cover various types of DDoS attacks, such as SYN flood, UDP flood, ICMP flood, and HTTP flood. Supervised learning is used, and a transformation of sample feature values is applied. XGBoost gradient boosting, which performs well on standard metrics, is chosen as the model. Adversarial attacks were analyzed both against the model trained on the original training dataset and against the model trained on an extended dataset supplemented with relevant examples generated over the training data by a GAN; the generator was the G-part of a Wasserstein GAN with gradient penalty (WGAN-GP). Black-box adversarial attacks were performed with a modified ZooAttack class from the IBM Adversarial Robustness Toolbox (ART); the library was modified to preserve the semantics of the malicious data. Standard model quality metrics were used. The results show that adding relevant examples to the training set increases the model's resistance to adversarial attacks. However, at 200 iterations even the more robust model could not reach quality comparable to that on the original test set, which suggests that, given sufficient time and unlimited access to the model's inputs and outputs, it is possible to find adversarial examples on which the classifier makes mistakes.
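The black-box attack used in the paper (ART's ZooAttack against an XGBoost model) is built on zeroth-order optimization: the attacker has no gradients, only query access to the model's scores, and estimates gradients by finite differences. The stdlib-only sketch below illustrates that core idea on an invented toy linear "detector"; the function names, weights, and step sizes are illustrative assumptions, not the paper's actual model or ART's implementation.

```python
# Minimal sketch of a zeroth-order (ZOO-style) black-box evasion attack
# against a toy linear "detector". Illustrative only: the paper attacks a
# real XGBoost classifier via ART's ZooAttack; everything here is a stand-in.

def detector_score(x):
    # Stand-in for the black-box model: score > 0 means "attack traffic".
    w = [0.9, -0.4, 0.7]
    b = -0.2
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def zoo_attack(x, score_fn, step=0.05, h=1e-3, max_iter=200):
    """Push a sample labeled 'attack' (score > 0) toward 'benign' (score <= 0)
    using only score queries: per-coordinate symmetric finite differences
    approximate the gradient, as in zeroth-order optimization."""
    x = list(x)
    for _ in range(max_iter):
        if score_fn(x) <= 0:
            break  # evasion succeeded: the detector now says "benign"
        for i in range(len(x)):
            # Finite-difference estimate of d(score)/dx_i from two queries.
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            g = (score_fn(xp) - score_fn(xm)) / (2 * h)
            x[i] -= step * g  # descend the score to flip the label
    return x

adv = zoo_attack([1.0, 0.5, 1.0], detector_score)
```

The paper's modification of ART additionally constrains which features may change, so the perturbed flow still behaves as valid malicious traffic; this unconstrained sketch omits that step.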
Full Text: PDF (Russian)

References
M. S. Elsayed, N.-A. Le-Khac, S. Dev, and A. D. Jurcut, “DDoSNet: A deep-learning model for detecting network attacks,” in 2020 IEEE 21st International Symposium on “A World of Wireless, Mobile and Multimedia Networks” (WoWMoM). IEEE, 2020, pp. 391–396.
T. Fardusy, S. Afrin, I. J. Sraboni and U. K. Dey, "An Autoencoder-Based Approach for DDoS Attack Detection Using Semi-Supervised Learning," 2023 International Conference on Next-Generation Computing, IoT and Machine Learning (NCIM), Gazipur, Bangladesh, 2023, pp. 1-7, doi: 10.1109/NCIM59001.2023.10212626.
Costa, Joana C., et al. "How deep learning sees the world: A survey on adversarial attacks & defenses." IEEE Access 12 (2024): 61113-61136.
Q. Yan, M. Wang, W. Huang, X. Luo, and F. R. Yu, “Automatically synthesizing DoS attack traces using generative adversarial networks,” International Journal of Machine Learning and Cybernetics, vol. 10, no. 12, pp. 3387–3396, 2019.
Abdelaty, Maged, et al. “GADoT: GAN-based adversarial training for robust DDoS attack detection.” 2021 IEEE Conference on Communications and Network Security (CNS). IEEE, 2021.
Kaggle, “DDoS Evaluation Dataset (CIC-DDoS2019),” https://www.kaggle.com/datasets/aymenabb/ddos-evaluation-dataset-cic-ddos2019 Retrieved: 05.05.2025.
I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, “Improved training of Wasserstein GANs,” in Advances in Neural Information Processing Systems, 2017, pp. 5767–5777.
A. Alsirhani, S. Sampalli, and P. Bodorik, “DDoS detection system: utilizing gradient boosting algorithm and Apache Spark,” in Proceedings of the IEEE Canadian Conference on Electrical and Computer Engineering (CCECE), 2018, pp. 1–6. IEEE.
Apostol Vassilev, Alina Oprea, Alie Fordyce, and Hyrum Anderson, “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations,” NIST Trustworthy and Responsible AI, NIST AI 100-2e2023. https://doi.org/10.6028/NIST.AI.100-2e2023
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. In International Conference on Learning Representations, 2014.
Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Šrndić, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. “Evasion attacks against machine learning at test time.” In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 387–402. Springer, 2013.
Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. “ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models.” In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, AISec ’17, pages 15–26, New York, NY, USA, 2017. Association for Computing Machinery.
Seungyong Moon, Gaon An, and Hyun Oh Song. “Parsimonious black-box adversarial attacks via efficient combinatorial optimization.” In International Conference on Machine Learning (ICML), 2019.
Satya Narayan Shukla, Anit Kumar Sahu, Devin Willmott, and Zico Kolter. “Simple and efficient hard label black-box adversarial attacks in low query budget regimes.” In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, KDD ’21, pages 1461–1469, New York, NY, USA, 2021. Association for Computing Machinery.
Papernot, Nicolas, Patrick McDaniel, and Ian Goodfellow. "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples." arXiv preprint arXiv:1605.07277 (2016).
Jeremy Cohen, Elan Rosenfeld, and Zico Kolter. “Certified adversarial robustness via randomized smoothing.” In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 1310–1320. PMLR, 09–15 Jun 2019.
Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. “AI2: Safety and robustness certification of neural networks with abstract interpretation.” In 2018 IEEE Symposium on Security and Privacy (S&P), pages 3–18, 2018.
Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. “Black-box adversarial attacks with limited queries and information.” In Jennifer G. Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10–15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 2142–2151. PMLR, 2018.
Nina Narodytska and Shiva Kasiviswanathan. “Simple black-box adversarial attacks on deep neural networks.” In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 1310–1318, 2017.
Sukhomlin, Vladimir Aleksandrovich, et al. "The 2020 model of cybersecurity digital skills." Sovremennye informacionnye tehnologii i IT-obrazovanie 16.3 (2020): 695-710. (In Russian)
Yudova, E. A., and Olga R. Laponina. "Analysis of the possibilities of using machine learning technologies to detect attacks on web applications." International Journal of Open Information Technologies 10.1 (2021): 61-68.
Korniukhina, Sofia P., and Olga R. Laponina. "Research of the Capabilities of Deep Learning Algorithms to Protection Against Phishing Attacks." International Journal of Open Information Technologies 11.6 (2023): 163-174.
Artificial intelligence as a strategic tool for the country's economic development and the improvement of its public administration. Part 2. Prospects for applying artificial intelligence in Russia for public administration / I. A. Sokolov, V. I. Drozhzhinov, A. N. Raikov [et al.] // International Journal of Open Information Technologies. 2017. Vol. 5, no. 9. Pp. 76-101. (In Russian)
Development of the transport and logistics industries of the European Union: open BIM, the Internet of Things, and cyber-physical systems / V. P. Kupriyanovsky, V. V. Alenkov, A. V. Stepanenko [et al.] // International Journal of Open Information Technologies. 2018. Vol. 6, no. 2. Pp. 54-100. (In Russian)
ISSN: 2307-8162