oe1 (光电查) - Scientific Papers

3 records
  • Dual-Polarization Frequency Selective Rasorber With Independently Controlled Dual-Band Transmission Response

    Abstract: It is of significant importance for any classification and recognition system that claims near-human or better performance to be immune to small perturbations in the dataset. Researchers have found that neural networks are not robust to small perturbations and can easily be fooled into persistently misclassifying when a particular class of noise is added to the test data. This so-called adversarial noise severely degrades the performance of neural networks that otherwise perform very well on unperturbed data. It has recently been proposed that neural networks can be made robust against adversarial noise by training them on data corrupted with adversarial noise itself. Following this approach, in this paper we propose a new mechanism for generating a powerful adversarial noise model, based on the K-support norm, to train neural networks. We tested our approach on two benchmark datasets, namely MNIST and STL-10, using multi-layer perceptrons and convolutional neural networks. Experimental results demonstrate that neural networks trained with the proposed technique show a significant improvement in robustness compared to state-of-the-art techniques. (A hedged sketch of this training loop appears after the list below.)

    Keywords: robustness, generalization, convolutional neural networks, adversarial, K-support norm

    Updated 2025-09-23 15:21:01

  • [2019 PhotonIcs & Electromagnetics Research Symposium - Spring (PIERS-Spring) - Rome, Italy (2019.6.17-2019.6.20)] Functionalized Materials for Integrated Photonics: Hybrid Integration of Organic Materials in Silicon-based Photonic Integrated Circuits for Advanced Optical Modulators and Light-sources

    Updated 2025-09-23 15:19:57

  • [2019 IEEE Research and Applications of Photonics in Defense Conference (RAPID) - Miramar Beach, FL, USA (2019.8.19-2019.8.21)] Multipole and Metasurface Quantum Well Emitters

    Updated 2025-09-19 17:13:59
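
The first abstract above describes a concrete training recipe: generate adversarial noise constrained by the K-support norm, then train the network on the perturbed data. Below is a minimal PyTorch sketch of that adversarial-training loop, offered only as an illustration of the general approach. Projecting onto a true K-support-norm ball is non-trivial, so this sketch substitutes a simple top-k sparse gradient step (keeping the k largest-magnitude gradient coordinates per example), which merely mimics the norm's tendency to spread the perturbation over roughly k coordinates. The function names (make_perturbation, train_step) and hyperparameters (eps, k, steps) are illustrative assumptions, not the paper's actual method.

```python
# Minimal adversarial-training sketch in PyTorch. NOTE: the top-k sparse
# step below is an illustrative stand-in for the paper's K-support-norm
# noise model, not the authors' actual algorithm.
import torch

def make_perturbation(model, loss_fn, x, y, eps=0.3, k=50, steps=5):
    """Gradient-ascent perturbation restricted to k coordinates per step."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)   # loss the attacker maximizes
        grad, = torch.autograd.grad(loss, delta)
        flat = grad.reshape(grad.size(0), -1)
        # keep only the k largest-magnitude gradient coordinates per example
        topk = flat.abs().topk(k, dim=1).indices
        mask = torch.zeros_like(flat).scatter_(1, topk, 1.0)
        step = (eps / steps) * (flat.sign() * mask).reshape_as(grad)
        delta = (delta + step).clamp(-eps, eps).detach().requires_grad_(True)
    return delta.detach()

def train_step(model, opt, loss_fn, x, y):
    """One adversarial-training step: perturb the batch, then fit it."""
    delta = make_perturbation(model, loss_fn, x, y)
    opt.zero_grad()
    loss = loss_fn(model(x + delta), y)
    loss.backward()
    opt.step()
    return loss.item()
```

Swapping the top-k mask for a proper K-support-norm projection would bring this closer to the paper's proposal; the surrounding loop (perturb, then train on the perturbed batch) is the standard adversarial-training pattern the abstract refers to.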