Research Objective
To improve the robustness of neural networks against adversarial noise models that can severely degrade the performance of a network which otherwise performs well on normal (unperturbed) test data.
Research Findings
The proposed K-support norm based training method significantly improves the robustness of neural networks against adversarial noise compared to state-of-the-art techniques. However, improved robustness does not necessarily translate into a lower generalization error on clean data.
Limitations
The study found that training neural networks with a given noise model does not always improve accuracy on both the perturbed and the normal test sets. Additionally, the K-support method is not very robust against uniform random noise.
1. Experimental Design and Method Selection:
The study involves generating adversarial examples using K-support norm and training neural networks with these examples to improve robustness.
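The attack step can be sketched as follows. This is an illustrative stand-in, not the paper's exact procedure: since the K-support norm is a convex relaxation of the set of k-sparse vectors with bounded ℓ2 norm, perturbations constrained by it tend to concentrate on a small number of coordinates. The sketch below mimics that by keeping only the top-k input-gradient coordinates and scaling to an assumed ℓ2 budget `eps`; `k_sparse_perturbation` and all parameter names are hypothetical.

```python
import numpy as np

def k_sparse_perturbation(grad, k, eps):
    """Illustrative sketch (not the paper's exact attack): keep the k
    input-gradient coordinates with largest magnitude, then rescale the
    resulting sparse direction to an l2 budget of eps."""
    flat = grad.ravel()
    idx = np.argsort(np.abs(flat))[-k:]   # top-k coordinates by |gradient|
    delta = np.zeros_like(flat)
    delta[idx] = flat[idx]
    norm = np.linalg.norm(delta)
    if norm > 0:
        delta *= eps / norm               # scale to the l2 budget
    return delta.reshape(grad.shape)

# Toy 28x28 "gradient" standing in for a loss gradient w.r.t. an MNIST image.
rng = np.random.default_rng(0)
g = rng.standard_normal((28, 28))
d = k_sparse_perturbation(g, k=10, eps=0.5)
```

The resulting perturbation touches exactly `k` pixels and has ℓ2 norm `eps`, illustrating how a sparsity-inducing norm yields localized adversarial noise.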
2. Sample Selection and Data Sources:
The MNIST and STL-10 datasets were used for training and testing.
3. Experimental Equipment and Materials:
MXNet was used to train all models.
4. Experimental Procedures and Operational Workflow:
The methodology includes generating adversarial samples, training the network using perturbed samples, and testing the network using a normal and perturbed test set.
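The workflow above (perturb, train on perturbed samples, evaluate) can be sketched on a toy problem. This is a minimal sketch, not the paper's MXNet pipeline: it uses a linear logistic model on synthetic data and a simple sign-of-gradient perturbation as a stand-in for the K-support attack; all names and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task standing in for MNIST; the paper trains full networks.
X = rng.standard_normal((200, 20))
w_true = rng.standard_normal(20)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(20)
eps, lr = 0.1, 0.5

for epoch in range(100):
    # 1) Generate perturbed samples from the loss gradient w.r.t. the input
    #    (illustrative sign step; the paper uses a K-support norm attack).
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)            # d(loss)/d(input), per sample
    X_adv = X + eps * np.sign(grad_x)
    # 2) Train on clean + perturbed samples together.
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    p_all = sigmoid(X_all @ w)
    w -= lr * (X_all.T @ (p_all - y_all)) / len(y_all)

# 3) Test on the clean set (a perturbed test set would be built the same way).
acc_clean = float(((sigmoid(X @ w) > 0.5) == y).mean())
```

Mixing clean and perturbed samples in each update is one common way to trade clean accuracy against robustness, which mirrors the trade-off the study reports.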
5. Data Analysis Methods:
The performance of neural networks trained with the proposed technique was compared against state-of-the-art techniques using classification accuracies and precision-recall curves.
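Both evaluation metrics can be computed directly from model scores. A minimal sketch, with illustrative function and variable names (the paper reports these metrics per training method; it does not prescribe this implementation):

```python
import numpy as np

def precision_recall_points(scores, labels):
    """Precision and recall at every score threshold, for a PR curve."""
    order = np.argsort(-scores)           # sort by descending score
    labels = labels[order]
    tp = np.cumsum(labels)                # true positives at each cutoff
    fp = np.cumsum(1 - labels)            # false positives at each cutoff
    precision = tp / (tp + fp)
    recall = tp / labels.sum()
    return precision, recall

# Toy scores and ground-truth labels.
scores = np.array([0.9, 0.8, 0.6, 0.4, 0.2])
labels = np.array([1, 1, 0, 1, 0])
prec, rec = precision_recall_points(scores, labels)
acc = float(((scores > 0.5).astype(int) == labels).mean())
```

Classification accuracy summarizes performance at one fixed threshold, while the precision-recall curve shows behavior across all thresholds, which is why the study reports both.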