Evolutionary Multi-Label Adversarial Examples: An Effective Black-Box Attack
Studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks: minor malicious modifications of examples can lead to misclassification by the DNN. Such maliciously modified examples are called adversarial examples. So far, work on adversarial examples has focused mainly on multi-class classification tasks, and there is less work in the field of multi-label classification. In this paper, for the first time, a differential evolution (DE) algorithm that can effectively generate multi-label adversarial examples is proposed, called MLAE-DE. Unlike traditional differential evolution, MLAE-DE uses a newly designed complementary mutation operator, which improves attack performance and reduces the number of fitness evaluations. As a black-box attack, MLAE-DE does not need access to model parameters; it uses only model outputs to generate adversarial examples. Experiments are conducted on two typical multi-label classification models and three typical datasets under black-box settings. Experimental results demonstrate that, compared with existing black-box attack algorithms for multi-label classification models, the proposed algorithm achieves a much higher attack success rate.
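The abstract's core idea, searching for a perturbation with differential evolution while querying the model only for its outputs, can be illustrated with a minimal sketch. This is not the paper's MLAE-DE algorithm (its complementary mutation operator is not described here); it is classic DE/rand/1/bin applied to a stand-in `toy_model`, where the fitness is simply the model's score on one target label. All names and parameters below are illustrative assumptions.

```python
import numpy as np

def toy_model(x):
    # Stand-in for a black-box multi-label classifier: returns per-label
    # scores for a flattened input x. Purely illustrative.
    w = np.linspace(-1.0, 1.0, x.size)
    return np.array([np.tanh(w @ x), np.tanh(-(w @ x))])

def fitness(perturbation, x, target_idx):
    # Black-box fitness: only model outputs are used, no gradients.
    # A lower score on the target label means a stronger attack.
    scores = toy_model(np.clip(x + perturbation, 0.0, 1.0))
    return scores[target_idx]

def de_attack(x, target_idx, pop_size=20, gens=50, F=0.5, CR=0.9,
              eps=0.1, seed=0):
    """Classic DE/rand/1/bin search for a bounded perturbation that
    suppresses the target label, treating the model as a black box."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-eps, eps, size=(pop_size, x.size))
    fit = np.array([fitness(p, x, target_idx) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Mutation: combine three distinct individuals (DE/rand/1).
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), -eps, eps)
            # Binomial crossover with at least one mutated coordinate.
            cross = rng.random(x.size) < CR
            cross[rng.integers(x.size)] = True
            trial = np.where(cross, mutant, pop[i])
            # Greedy selection: keep the trial only if it attacks better.
            f_trial = fitness(trial, x, target_idx)
            if f_trial < fit[i]:
                pop[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return pop[best], fit[best]

x = np.full(16, 0.5)           # toy "image"
pert, score = de_attack(x, target_idx=0)
```

Each generation costs `pop_size` model queries, so the number of fitness evaluations the abstract mentions is the natural budget for any DE-based black-box attack.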
|EI Accession Number|
Classification (of information) ; Evolutionary algorithms ; Learning algorithms ; Neural network models ; Optimization ; Perturbation techniques
|ESI Classification Code|
Ergonomics and Human Factors Engineering:461.4 ; Information Theory and Signal Processing:716.1 ; Artificial Intelligence:723.4 ; Machine Learning:723.4.2 ; Information Sources and Analysis:903.1 ; Mathematics:921 ; Optimization Techniques:921.5
Cited Times [WOS]:0
|Document Type||Journal Article|
|Department||Southern University of Science and Technology|
1.Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies, the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China
2.Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, School of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China
Kong, Linghao, Luo, Wenjian, Zhang, Hongwei, et al. Evolutionary Multi-Label Adversarial Examples: An Effective Black-Box Attack[J]. IEEE Transactions on Artificial Intelligence, 2022, PP(99): 1-12.
Kong, Linghao, Luo, Wenjian, Zhang, Hongwei, Liu, Yang, & Shi, Yuhui. (2022). Evolutionary Multi-Label Adversarial Examples: An Effective Black-Box Attack. IEEE Transactions on Artificial Intelligence, PP(99), 1-12.
Kong, Linghao, et al. "Evolutionary Multi-Label Adversarial Examples: An Effective Black-Box Attack". IEEE Transactions on Artificial Intelligence PP.99 (2022): 1-12.