Article Abstract
WU Man, LIU Xiaozhang. Research on adversarial sample attack defense based on PCA[J]. Natural Science Journal of Hainan University, 2019, 37(2).
Research on adversarial sample attack defense based on PCA
Received: 2019-02-22  Revised: 2019-04-14
DOI:10.15886/j.cnki.hdxbzkb.2019.0020
Keywords: deep learning; PCA; adversarial sample attack; defense; FGSM
Funding: National Natural Science Foundation of China (61562017)
Author | Affiliation | E-mail
WU Man | Department of Mathematics, College of Information Science and Technology, Hainan University | 17330937360@163.com
LIU Xiaozhang | Department of Mathematics, College of Information Science and Technology, Hainan University | lxzh@hainu.edu.cn
Abstract (translated from the Chinese):
      To address machine-learning security and the problem of defending against adversarial sample attacks, a PCA-based defense method is proposed. First, adversarial samples are generated with the non-targeted fast gradient sign method (FGSM) under a white-box adversary; next, PCA is applied on the MNIST dataset to defend a deep neural network model against evasion attacks. Experimental results show that PCA can defend against adversarial sample attacks, with the best effect when the dimensionality is reduced to 50.
English abstract:
      At present, deep learning is one of the most widely studied and applied technologies in computing. However, with the emergence of adversarial samples, its algorithms, models, and training data face many security threats, which in turn affect the security of practical applications built on deep learning. To address machine-learning security and the defense against adversarial sample attacks, a PCA-based defense method is proposed. Adversarial samples are generated with the non-targeted fast gradient sign method (FGSM) under a white-box adversary, and PCA is applied on the MNIST dataset to defend a deep neural network model against evasion attacks. Experimental results show that PCA can defend against adversarial sample attacks, with the best effect when the data are reduced to 50 dimensions; as the number of retained dimensions increases, the defense capability decreases.
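The pipeline the abstract describes (a white-box FGSM attack followed by a PCA projection defense) can be sketched as follows. This is a minimal, hypothetical illustration: the paper attacks a deep network on real MNIST, whereas here the model is a toy linear logistic classifier on random 784-dimensional data so that the input gradient is analytic; only k=50 is taken from the paper's reported result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MNIST: 784-dim inputs (hypothetical; the paper uses real MNIST).
X_train = rng.normal(size=(200, 784))
w = rng.normal(size=784) * 0.01        # fixed "trained" weights of a toy logistic model
b = 0.0

def loss_grad_wrt_x(x, y):
    """Gradient of the logistic loss w.r.t. the input x (analytic for this linear model)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return (p - y) * w

def fgsm(x, y, eps=0.25):
    """Non-targeted white-box FGSM: step in the sign of the input gradient."""
    return x + eps * np.sign(loss_grad_wrt_x(x, y))

def fit_pca(X, k):
    """Fit PCA on clean data; return the mean and the top-k components (rows, orthonormal)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_denoise(x, mean, components):
    """The defense: project x onto the PCA subspace and reconstruct,
    discarding the perturbation energy outside the top-k directions."""
    z = (x - mean) @ components.T      # encode to k dimensions
    return z @ components + mean       # decode back to input space

mean, comps = fit_pca(X_train, k=50)   # k = 50: the dimension the paper found most effective

x, y = X_train[0], 1.0
x_adv = fgsm(x, y)                     # adversarial example
x_def = pca_denoise(x_adv, mean, comps)  # defended input to feed to the classifier
```

The defense works because the FGSM perturbation is spread over all 784 sign directions, while clean digit structure concentrates in the leading principal components; reconstructing from only those components removes much of the perturbation.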