Title | Evolutionary Multi-Label Adversarial Examples: An Effective Black-Box Attack |
Author | Kong, Linghao; Luo, Wenjian; Zhang, Hongwei; Liu, Yang; Shi, Yuhui |
Publication Years | 2022 |
DOI | |
Source Title | IEEE Transactions on Artificial Intelligence |
ISSN | 2691-4581 |
EISSN | 2691-4581 |
Volume | PP |
Issue | 99 |
Pages | 1-12 |
Abstract | Studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks: minor malicious modifications to examples can cause a DNN to misclassify them. Such maliciously modified examples are called adversarial examples. So far, work on adversarial examples has focused mainly on multi-class classification tasks; there is much less work on multi-label classification. In this paper, for the first time, a differential evolution (DE) algorithm that can effectively generate multi-label adversarial examples is proposed, called MLAE-DE. Unlike traditional differential evolution, MLAE-DE uses a complementary mutation operator designed to improve attack performance and reduce the number of fitness evaluations. As a black-box attack, MLAE-DE does not need access to model parameters and uses only model outputs to generate adversarial examples. Experiments on two typical multi-label classification models and three typical datasets under black-box settings are conducted in this paper. The results demonstrate that, compared with existing black-box attack algorithms for multi-label classification models, the attack success rate of the proposed algorithm is much higher. |
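Note | The paper's MLAE-DE algorithm is not reproduced in this record, but the sketch below illustrates the general scheme the abstract describes: a differential-evolution search over small perturbations that queries only the model's per-label output scores. It uses the standard DE/rand/1 mutation rather than the paper's complementary mutation operator, and the `predict` stub, label indices, population size, and other hyperparameters are illustrative assumptions, not the authors' settings.

```python
import numpy as np

NUM_LABELS = 20  # assumed number of labels in the multi-label task

def predict(x):
    """Hypothetical black-box model: returns per-label confidence scores in [0, 1].
    In a real attack this would be a query to the target multi-label classifier."""
    rng = np.random.default_rng(abs(hash(x.tobytes())) % (2 ** 32))
    return rng.random(NUM_LABELS)

def fitness(x_adv, x_orig, target_labels, lam=0.01):
    """Lower is better: push the targeted label scores down while keeping
    the perturbation small (L2 distortion as a regularizer)."""
    scores = predict(x_adv)
    attack_loss = scores[target_labels].sum()
    distortion = np.linalg.norm(x_adv - x_orig)
    return attack_loss + lam * distortion

def de_attack(x_orig, target_labels, pop_size=20, gens=100, F=0.5, CR=0.9, eps=0.05):
    """Black-box attack via differential evolution over perturbations in an eps-ball."""
    dim = x_orig.size
    pop = np.random.uniform(-eps, eps, size=(pop_size, dim))
    fit = np.array([fitness(np.clip(x_orig + p, 0, 1), x_orig, target_labels) for p in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Classic DE/rand/1 mutation followed by binomial crossover.
            idx = np.random.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), -eps, eps)
            mask = np.random.rand(dim) < CR
            mask[np.random.randint(dim)] = True  # ensure at least one gene crosses over
            trial = np.where(mask, mutant, pop[i])
            f_trial = fitness(np.clip(x_orig + trial, 0, 1), x_orig, target_labels)
            if f_trial < fit[i]:  # greedy selection keeps the better perturbation
                pop[i], fit[i] = trial, f_trial
    best = pop[fit.argmin()]
    return np.clip(x_orig + best, 0, 1)

# Example usage on a flattened "image"; labels 3 and 7 are arbitrary attack targets.
x = np.random.rand(32 * 32)
x_adv = de_attack(x, target_labels=np.array([3, 7]))
```

In an actual attack, `predict` would issue queries to the target multi-label classifier, and the fitness function would encode whichever label-manipulation goal (hiding, adding, or swapping labels) the attacker pursues. |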
Keywords | |
URL | [Source Record] |
Indexed By | |
Language | English |
SUSTech Authorship | Others |
EI Accession Number | 20223512666030 |
EI Keywords | Classification (of information); Evolutionary algorithms; Learning algorithms; Neural network models; Optimization; Perturbation techniques |
ESI Classification Code | Ergonomics and Human Factors Engineering: 461.4; Information Theory and Signal Processing: 716.1; Artificial Intelligence: 723.4; Machine Learning: 723.4.2; Information Sources and Analysis: 903.1; Mathematics: 921; Optimization Techniques: 921.5 |
Scopus EID | 2-s2.0-85136662415 |
Data Source | Scopus |
PDF url | https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9857594 |
Citation statistics | Cited Times [WOS]: 0 |
Document Type | Journal Article |
Identifier | http://kc.sustech.edu.cn/handle/2SGJ60CL/395606 |
Affiliation | 1.Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies, the School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China 2.Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, School of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China |
Recommended Citation GB/T 7714 | Kong, Linghao, Luo, Wenjian, Zhang, Hongwei, et al. Evolutionary Multi-Label Adversarial Examples: An Effective Black-Box Attack[J]. IEEE Transactions on Artificial Intelligence, 2022, PP(99): 1-12. |
APA | Kong, Linghao, Luo, Wenjian, Zhang, Hongwei, Liu, Yang, & Shi, Yuhui. (2022). Evolutionary Multi-Label Adversarial Examples: An Effective Black-Box Attack. IEEE Transactions on Artificial Intelligence, PP(99), 1-12. |
MLA | Kong, Linghao, et al. "Evolutionary Multi-Label Adversarial Examples: An Effective Black-Box Attack." IEEE Transactions on Artificial Intelligence PP.99 (2022): 1-12. |
Files in This Item: | There are no files associated with this item. |