
Graph-aware transformer for skeleton-based action recognition

Publication type:
Journal article
Authors:
Zhang, Jiaxu;Xie, Wei;Wang, Chao;Tu, Ruide;Tu, Zhigang
Corresponding authors:
Chao Wang; Ruide Tu
Author affiliations:
[Wang, Chao; Zhang, Jiaxu; Tu, Zhigang] Wuhan Univ, State Key Lab Informat Engn Surveying, Wuhan 430072, Hubei, Peoples R China.
[Xie, Wei] Cent China Normal Univ, Sch Comp, Wuhan 430079, Hubei, Peoples R China.
[Tu, Ruide] Cent China Normal Univ, Sch Informat Management, Wuhan 430079, Hubei, Peoples R China.
Corresponding institutions:
[Chao Wang; Ruide Tu] State Key Laboratory of Information Engineering in Surveying, Wuhan University, Wuhan, China; School of Information Management, Central China Normal University, Wuhan, China
Language:
English
Keywords:
Skeleton action recognition;Visual transformer;Graph-aware transformer;Velocity information of human body joints;Graph neural network
Journal:
VISUAL COMPUTER
ISSN:
0178-2789
Year:
2023
Volume:
39
Issue:
10
Pages:
4501-4512
Funding:
National Natural Science Foundation of China [62106177]; Joint Fund of the Ministry of Education of China [8091B032156]
Institutional attribution:
This institution is listed as a non-primary (other) affiliation.
Departments:
School of Computer Science
School of Information Management
Abstract:
Recently, graph convolutional networks (GCNs) have played a critical role in skeleton-based human action recognition. However, most GCN-based methods still have two main limitations: (1) the semantic-level adjacency matrix of the skeleton graph is difficult to define manually, which restricts the perception field of the GCN and limits its ability to extract spatial–temporal features; (2) the velocity information of human body joints cannot be efficiently used and fully exploited by the GCN, because the GCN does not represent the correlation between the v...
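As an illustration of the two ideas the abstract names, the following minimal sketch (not the paper's actual model; all function names and the additive-bias formulation are assumptions for illustration) shows (a) self-attention over skeleton joints biased by an adjacency matrix, so attention is not limited to a hand-defined graph but can still be steered by it, and (b) joint velocities computed as first-order temporal differences of joint positions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_biased_attention(X, A, Wq, Wk, Wv, alpha=1.0):
    """Self-attention over joints with an additive bias from the
    skeleton adjacency matrix A (hypothetical formulation).

    X: (num_joints, d) joint features; A: (num_joints, num_joints).
    Connected joints get a score boost of alpha, but unconnected
    joints still attend to each other (global perception field).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d) + alpha * A  # graph-aware bias
    return softmax(scores, axis=-1) @ V

def joint_velocities(seq):
    """First-order temporal differences of joint positions:
    v_t = x_t - x_{t-1}, zero-padded at t = 0.

    seq: (num_frames, num_joints, coord_dim).
    """
    v = np.zeros_like(seq)
    v[1:] = seq[1:] - seq[:-1]
    return v
```

In practice the velocity stream would be fed to the model alongside the raw positions, giving the attention layers explicit access to joint motion rather than requiring them to infer it across frames.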
