
DOUBLE ATTENTION TRANSFORMER FOR HYPERSPECTRAL IMAGE CLASSIFICATION

Publication type:
Journal article
Authors:
Tang, Ping; Zhang, Meng; Liu, Zhihui; Song, Rong
Corresponding author:
Zhang, Meng (m.zhang@mail.ccnu.edu.cn)
Author affiliations:
[Tang, Ping; Zhang, Meng] Cent China Normal Univ, Sch Comp, Wuhan 430079, Peoples R China.
[Liu, Zhihui] China Univ Geosci, Sch Math & Phys, Wuhan 430074, Peoples R China.
[Song, Rong] Cent China Normal Univ, Sch Marxism, Wuhan 430079, Peoples R China.
Corresponding author affiliation:
[Zhang, M.] Central China Normal University, China
Language:
English
Keywords:
Feature extraction; Transformers; Fuses; Data mining; Tokenization; IP networks; Correlation; Double-attention transformer encoder (DATE); hyperspectral image (HSI) classification; vision transformer (ViT)
Journal:
IEEE GEOSCIENCE AND REMOTE SENSING LETTERS
ISSN:
1545-598X
Year:
2023
Volume:
20
Pages:
1-5
Funding:
National Natural Science Foundation of China (10.13039/501100001809, Grant No. 42274172); National Social Science Fund of China (10.13039/501100012456, Grant No. 19BZX105)
Institutional attribution:
This university (Central China Normal University) is the first institution
Department:
School of Computer Science
Abstract:
Convolutional neural networks (CNNs) have become one of the most popular tools for hyperspectral image (HSI) classification. However, CNNs struggle to capture long-range dependencies, which can degrade classification performance. To address this issue, this letter proposes a transformer-based backbone network for HSI classification. The core component is a newly designed double-attention transformer encoder (DATE), which contains two self-attention modules, termed the spectral attention module (SPE) and the spatial attention module ...
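The abstract is truncated, so the exact tokenization and fusion scheme of DATE is not given here. The following is a minimal sketch, assuming a PyTorch implementation, of the general idea of an encoder block with separate spectral and spatial self-attention branches whose outputs are fused; the class names (SpectralAttention, SpatialAttention, DoubleAttentionEncoder), the token shapes, and the additive fusion are illustrative assumptions, not the paper's exact design.

```python
# Sketch of a double-attention encoder for HSI patches (illustrative only).
import torch
import torch.nn as nn


class SpectralAttention(nn.Module):
    """Self-attention over spectral tokens (e.g., one token per band group)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):               # x: (B, N_spectral_tokens, dim)
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        return x + out                  # residual connection


class SpatialAttention(nn.Module):
    """Self-attention over spatial tokens (e.g., one token per pixel in the patch)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):               # x: (B, N_spatial_tokens, dim)
        h = self.norm(x)
        out, _ = self.attn(h, h, h)
        return x + out


class DoubleAttentionEncoder(nn.Module):
    """Runs both attention branches, pools their tokens, and fuses them with an MLP."""
    def __init__(self, dim, heads=4, mlp_ratio=2):
        super().__init__()
        self.spe = SpectralAttention(dim, heads)
        self.spa = SpatialAttention(dim, heads)
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, spectral_tokens, spatial_tokens):
        spe_out = self.spe(spectral_tokens).mean(dim=1)   # (B, dim)
        spa_out = self.spa(spatial_tokens).mean(dim=1)    # (B, dim)
        fused = spe_out + spa_out                         # simple additive fusion (assumed)
        return fused + self.mlp(self.norm(fused))


if __name__ == "__main__":
    # Dummy usage: 16 patches, 8 spectral tokens, 25 spatial tokens, embedding dim 64.
    enc = DoubleAttentionEncoder(dim=64)
    spe_tok = torch.randn(16, 8, 64)
    spa_tok = torch.randn(16, 25, 64)
    print(enc(spe_tok, spa_tok).shape)   # torch.Size([16, 64])
```

The fused representation would typically feed a classification head; the paper's actual fusion and tokenization details should be taken from the full text.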
