Defending Adversarial Examples by Negative Correlation Ensemble
The security issues in DNNs, such as adversarial examples, have attracted much attention. Adversarial examples are inputs with carefully designed perturbations that induce DNNs to return incorrect predictions. Obviously, adversarial examples pose serious security risks to real-world applications of deep learning. Recently, several defence approaches against adversarial examples have been proposed, but their performance is still limited. In this paper, we propose a new ensemble defence approach named the Negative Correlation Ensemble (NCEn), which achieves competitive results by making the members of the ensemble negatively correlated in both gradient direction and gradient magnitude. NCEn reduces the transferability of adversarial examples among the members of the ensemble. Extensive experiments have been conducted, and the results demonstrate that NCEn effectively improves the adversarial robustness of ensembles.
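The abstract describes making ensemble members negatively correlated in gradient direction so that adversarial examples crafted against one member transfer poorly to the others. The exact NCEn loss is not given here, so the following is only a minimal sketch of the general idea: a pairwise cosine-similarity penalty on the members' input gradients, illustrated with hypothetical linear members (for a linear model f_i(x) = w_i . x, the input gradient is simply w_i). All names and values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two gradient vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def diversity_penalty(grads):
    # Mean pairwise cosine similarity of members' input gradients.
    # Minimizing this term (e.g. as a regularizer during joint
    # training) pushes gradient directions apart, in the spirit of
    # the negative-correlation idea; the paper's loss may differ.
    n = len(grads)
    sims = [cosine(grads[i], grads[j])
            for i in range(n) for j in range(i + 1, n)]
    return sum(sims) / len(sims)

# Three hypothetical linear members with identical weights: fully
# aligned gradients give the worst-case penalty of 1.0, i.e. an
# adversarial direction for one member attacks all of them.
aligned = [np.array([1.0, 2.0])] * 3
print(round(diversity_penalty(aligned), 3))  # 1.0

# Members with dissimilar weight vectors score lower (here negative),
# so a perturbation aligned with one member's gradient moves weakly,
# or in the wrong direction, for the others.
diverse = [np.array([1.0, 0.0]),
           np.array([-1.0, 0.1]),
           np.array([0.0, 1.0])]
print(diversity_penalty(diverse) < 0)  # True
```

In a real training loop the gradients would be taken with respect to the input for each member's loss, and this penalty would be added to the ensemble's classification loss with a weighting coefficient.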
Cited Times [WOS]: 0
Document Type: Conference paper
Department: Southern University of Science and Technology
1. School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, Guangdong, 518055, China
2. Guangdong Provincial Key Laboratory of Brain-inspired Intelligent Computation, School of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, 518055, China
Luo, Wenjian, Zhang, Hongwei, Kong, Linghao, et al. Defending Adversarial Examples by Negative Correlation Ensemble[C], 2022: 424-438.
|Similar articles in Google Scholar|
|Similar articles in Baidu Scholar|
|Similar articles in Bing Scholar|