
Channel decoupling network for cross-modality person re-identification

Publication type:
Journal article
Authors:
Chen, Jingying; Chen, Chang; Tan, Lei; Peng, Shixin
Corresponding author:
Shixin Peng
Author affiliation:
[Peng, Shixin; Tan, Lei; Chen, Chang; Chen, Jingying] Cent China Normal Univ, Natl Engn Res Ctr E Learning, Wuhan, Peoples R China.
Corresponding institution:
[Shixin Peng] National Engineering Research Center for E-Learning, Central China Normal University, Wuhan, China
Language:
English
Keywords:
Person re-identification; Cross modality; Channel decoupling
Journal:
Multimedia Tools and Applications
ISSN:
1380-7501
Year:
2023
Volume:
82
Issue:
9
Pages:
14091-14105
Funding:
This work is funded in part by the National Natural Science Foundation of China (No. 61702208), the Hubei Technological Innovation Special Fund (No. 61702208), the MOE (Ministry of Education in China) Project of Humanities and Social Sciences (No. 19YJC880068), the Hubei Provincial Natural Science Foundation of China (No. 2019CFB347), and the China Postdoctoral Science Foundation (No. 2018M632889, No. 2022T150250).
Institutional attribution:
This university is the first institution
Department:
National Engineering Research Center for E-Learning
Abstract:
Cross-modality person re-identification (CM-ReID) is a very challenging problem due to the discrepancy in data distributions between visible and near-infrared modalities. To obtain a robust shared feature representation, existing methods mainly focus on image generation or feature constraints to decrease the modality discrepancy, which ignores the large gap between mixed-spectral visible images and single-spectral near-infrared images. In this paper, we address the problem by decoupling the mixed-spectral visible images into three single-spectral subspaces R, G, and B. By aligning the spectrum,...
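The decoupling step described in the abstract — treating a mixed-spectral RGB visible image as three single-spectral subspaces — can be sketched as follows. This is a minimal illustration of the channel-splitting idea only, not the paper's actual network; the array shapes and function name are assumptions:

```python
import numpy as np

def decouple_channels(rgb_image: np.ndarray) -> list[np.ndarray]:
    """Split a mixed-spectral RGB image of shape (H, W, 3) into three
    single-spectral subspaces (R, G, B), each kept as a single-channel
    image of shape (H, W, 1), analogous to a single-spectral
    near-infrared image."""
    return [rgb_image[:, :, c:c + 1] for c in range(3)]

# Toy 4x4 "visible" image with random pixel values.
rng = np.random.default_rng(0)
visible = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

subspaces = decouple_channels(visible)
print([s.shape for s in subspaces])  # three (4, 4, 1) single-spectral images
```

Each subspace can then be compared against the single-spectral near-infrared input on equal footing, which is the motivation the abstract gives for the decoupling.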
