
A lightweight unsupervised adversarial detector based on autoencoder and isolation forest

Publication type:
Journal article
Authors:
Liu, Hui;Zhao, Bo;Guo, Jiabao;Zhang, Kehuan;Liu, Peng
Corresponding author:
Liu, H
Author affiliations:
[Liu, Hui] Cent China Normal Univ, Sch Comp Sci, Wuhan 430079, Peoples R China.
[Liu, Hui] Cent China Normal Univ, Hubei Prov Key Lab Artificial Intelligence & Smart, Wuhan 430079, Peoples R China.
[Guo, Jiabao; Zhao, Bo] Wuhan Univ, Sch Cyber Sci & Engn, Wuhan 430072, Peoples R China.
[Zhang, Kehuan] Chinese Univ Hong Kong, Coll Informat Engn, Hong Kong 999077, Peoples R China.
[Liu, Peng] Penn State Univ, Coll Informat Sci & Technol, State Coll, PA 16801 USA.
Corresponding author's affiliation:
[Liu, H]
Cent China Normal Univ, Sch Comp Sci, Wuhan 430079, Peoples R China.
Cent China Normal Univ, Hubei Prov Key Lab Artificial Intelligence & Smart, Wuhan 430079, Peoples R China.
Language:
English
Keywords:
Deep neural networks;Adversarial examples;Adversarial detection;Autoencoder;Isolation forest
Journal:
Pattern Recognition
ISSN:
0031-3203
Year:
2024
Volume:
147
Pages:
110127
Funding:
National Natural Science Foundation of China [62172181]
Institutional attribution:
This university is the first and corresponding institution
Department:
School of Computer Science
Abstract:
Although deep neural networks (DNNs) have performed well on many perceptual tasks, they are vulnerable to adversarial examples that are generated by adding slight but maliciously crafted perturbations to benign images. Adversarial detection is an important technique for identifying adversarial examples before they are entered into target DNNs. Previous studies that were performed to detect adversarial examples either targeted specific attacks or required expensive computation. Designing a lightweight unsupervised detector is still a challenging problem. In this paper, we propose an AutoEncoder...
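The pipeline the abstract describes, an autoencoder fitted on benign inputs whose reconstruction error feeds an unsupervised isolation forest, can be sketched roughly as follows. This is an illustrative assumption, not the paper's implementation: PCA stands in as a linear autoencoder, the data are synthetic, and the isolation-forest settings are scikit-learn defaults.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic benign "images": low-rank structure plus small sensor noise.
basis = rng.normal(size=(8, 64))
benign = rng.normal(size=(500, 8)) @ basis + rng.normal(scale=0.1, size=(500, 64))
# Synthetic "adversarial" inputs: benign samples plus off-manifold perturbations.
adversarial = benign[:50] + rng.normal(scale=3.0, size=(50, 64))

# 1) Fit the (linear) autoencoder stand-in on benign data only.
ae = PCA(n_components=8).fit(benign)

def recon_error(x):
    # Per-sample reconstruction error, used as the detector's input feature.
    recon = ae.inverse_transform(ae.transform(x))
    return np.linalg.norm(x - recon, axis=1, keepdims=True)

# 2) Fit an isolation forest on the benign reconstruction errors.
forest = IsolationForest(n_estimators=100, random_state=0).fit(recon_error(benign))

# 3) predict() returns +1 for inliers (benign) and -1 for outliers (adversarial).
det_rate = (forest.predict(recon_error(adversarial)) == -1).mean()
fpr = (forest.predict(recon_error(benign)) == -1).mean()
print(f"detection rate: {det_rate:.2f}, false-positive rate: {fpr:.2f}")
```

Because the detector sees only benign data at fit time, it is unsupervised in the sense the abstract uses: no adversarial examples and no attack-specific knowledge are required for training.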
