DMAPDQV: Design of an effective Pre-emption Model to Detect & Mitigate Adversarial Attacks using Deep Dyna Q with VARMAx

Authors

  • Chetan Patil, Department of Computer Science Engineering, Madhyanchal Professional University, Bhopal, India. https://orcid.org/0009-0008-8220-9022
  • Mohd Zuber, Department of Computer Science Engineering, Madhyanchal Professional University, Bhopal, India.

DOI:

https://doi.org/10.54392/irjmt2526

Keywords:

Deep Dyna Q Learning, VARMAx Operations, Adversarial Attack Pre-emption, GAN-Based Sample Generation, Real-Time Network Security Scenarios

Abstract

The increasing sophistication of adversarial attacks on machine learning systems demands advanced detection and mitigation strategies. Existing models fail to identify such attacks in a timely and accurate manner, mainly because they cannot adapt to varying conditions and lack predictive capability. To address these shortcomings, this paper introduces a pre-emption model that combines Deep Dyna Q Learning with VARMAx Operations. The Deep Dyna Q Learning framework efficiently identifies the salient performance indicators: accuracy, confidence, training loss, prediction performance, anomaly occurrence, and explainability performance. An advanced GridCAM++ process tracks these indicators and provides insight into their interconnections, forming the basis of a well-performing process for predicting adversarial attacks. To further strengthen the model's resilience, VARMAx Operations integrate endogenous variables (derived from the system's performance metrics) with exogenous variables (representing noise or external factors), providing a comprehensive view for pre-empting adversarial attacks. This combination improves predictive accuracy and enables pre-emption of potential security breaches. The model also uses GAN-based sample generation within a digital twin framework, which significantly increases classification performance by cross-comparing and eliminating attack vectors across different scenarios. Empirical tests on real-time networks demonstrate the superiority of this model over existing methods: precision in adversarial attack pre-emption increased by 8.5%, accuracy improved by up to 8.3%, recall rose by up to 7.5%, delay fell by 4.9%, AUC rose by 10.4%, and specificity improved by 9.4%.
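The Deep Dyna Q component described above interleaves learning from real transitions with planning over a learned environment model. As a rough illustration of that idea, the following is a minimal tabular Dyna-Q sketch (not the paper's deep network variant, and with no VARMAx or GAN components); the environment interface `env_step` and all hyperparameter values are hypothetical choices for the example.

```python
import random

def dyna_q(env_step, n_states, n_actions, episodes=50, planning_steps=10,
           alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Dyna-Q: update Q from real transitions, then replay a
    learned (s, a) -> (r, s') model for extra planning updates."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}  # learned deterministic model: (s, a) -> (r, s')
    for _ in range(episodes):
        s, done, steps = 0, False, 0
        while not done and steps < 200:
            steps += 1
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            r, s2, done = env_step(s, a)
            # direct reinforcement-learning update from the real step
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            model[(s, a)] = (r, s2)
            s = s2
            # planning: replay transitions sampled from the learned model
            for _ in range(planning_steps):
                (ps, pa), (pr, ps2) = random.choice(list(model.items()))
                Q[ps][pa] += alpha * (pr + gamma * max(Q[ps2]) - Q[ps][pa])
    return Q
```

On a toy chain environment (move right to reach a terminal reward), the planning replays propagate value estimates back along the chain far faster than direct updates alone would, which is the property the paper's model exploits for timely pre-emption.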


Published

2025-03-19

How to Cite

Patil C, Zuber M. DMAPDQV: Design of an effective Pre-emption Model to Detect & Mitigate Adversarial Attacks using Deep Dyna Q with VARMAx. Int. Res. J. multidiscip. Technovation [Internet]. 2025 Mar. 19 [cited 2025 Oct. 3];7(2):60-73. Available from: https://asianrepo.org/index.php/irjmt/article/view/123