Authors:
Chen, C. L. Philip*;Li, Hong;Wei, Yantao;Xia, Tian;Tang, Yuan Yan
Journal:
IEEE Transactions on Geoscience and Remote Sensing, 2014, 52(1):574-581. ISSN: 0196-2892
Corresponding author:
Chen, C. L. Philip
Author affiliations:
[Chen, C. L. Philip] Univ Macau, Fac Sci & Technol, Dept Comp & Informat Sci, Macau, Peoples R China.;[Xia, Tian; Tang, Yuan Yan] Univ Macau, Fac Sci & Technol, Macau, Peoples R China.;[Li, Hong] Huazhong Univ Sci & Technol, Sch Math & Stat, Wuhan 430074, Peoples R China.;[Wei, Yantao] Cent China Normal Univ, Coll Informat Technol Journalism & Commun, Wuhan 430079, Peoples R China.;[Wei, Yantao] Huazhong Univ Sci & Technol, Inst Pattern Recognit & Artificial Intelligence, Wuhan 430074, Peoples R China.
Corresponding institution:
[Chen, C. L. Philip] Univ Macau, Fac Sci & Technol, Dept Comp & Informat Sci, Macau, Peoples R China.
Keywords:
Derived kernel (DK);Infrared (IR) image;Local contrast;Signal-to-noise ratio (SNR);Target detection
Abstract:
Robust detection of small targets with a low signal-to-noise ratio (SNR) is very important in infrared search and track applications for self-defense or attack. This paper presents an effective small-target detection algorithm inspired by the contrast mechanism of the human visual system and the derived kernel model. In the first stage, the local contrast map of the input image is obtained using the proposed local contrast measure, which quantifies the dissimilarity between the current location and its neighborhoods. In this way, target signal enhancement and background clutter suppression are achieved simultaneously. In the second stage, an adaptive threshold is applied to segment the target. Experiments on two sequences validate the detection capability of the proposed method. The evaluation results show that the method is simple and effective in terms of detection accuracy; in particular, it significantly improves the SNR of the image.
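The two-stage pipeline described in the abstract (a local contrast map, then an adaptive threshold) can be sketched as follows. This is a hedged illustration of the general local-contrast idea, not the paper's exact measure: the cell size, the squared-max numerator, and the mean-plus-k-sigma threshold are assumptions chosen for the example.

```python
import numpy as np

def local_contrast_map(img, cell=3):
    """Contrast at each location: the brightest value in the central cell,
    squared, divided by the largest mean of the 8 surrounding cells.
    A small bright target yields a high ratio; smooth background does not."""
    img = img.astype(float)
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(cell, h - 2 * cell + 1):
        for x in range(cell, w - 2 * cell + 1):
            center_max = img[y:y + cell, x:x + cell].max()
            neighbor_means = [
                img[y + dy:y + dy + cell, x + dx:x + dx + cell].mean()
                for dy in (-cell, 0, cell)
                for dx in (-cell, 0, cell)
                if (dy, dx) != (0, 0)
            ]
            out[y + cell // 2, x + cell // 2] = (
                center_max ** 2 / max(max(neighbor_means), 1e-6)
            )
    return out

def detect_targets(img, k=2.5, cell=3):
    """Stage two: adaptive threshold (mean + k * std) on the contrast map."""
    cmap = local_contrast_map(img, cell)
    return cmap > cmap.mean() + k * cmap.std()
```

On a synthetic frame with a flat background and one small bright blob, the returned mask fires only in the blob's neighborhood, illustrating the simultaneous target enhancement and clutter suppression the abstract describes.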
Journal:
Applied Mechanics and Materials, 2013, 380-384:4695-4699. ISSN: 1660-9336
Author affiliations:
[Chen, Li Juan; Zhao, Gang; Ye, Qiu Xu] College of Information Technology, Journalism and Communications, Huazhong Normal University, Wuhan, Hubei, China
Conference:
2013 International Conference on Vehicle and Mechanical Engineering and Information Technology, VMEIT 2013
Conference dates:
17 August 2013 through 18 August 2013
Keywords:
Cloud computing;Coconstruction and sharing of resources;Educational resource management system (ERMS)
Author affiliations:
[Su, Jun; Qin, Hang] Computer School, Yangtze University, Jingzhou 434023, China;[Hu, Zhengbing] College of Information Technology, Journalism and Communication, Huazhong Normal University, Wuhan 430079, China
Author affiliations:
[Deng, He; Tong, Mingwen; Wei, Yantao] College of Information Technology, Journalism and Communications, Central China Normal University, Wuhan 430079, China;[Qu, Shaocheng] College of Physics Science and Technology, Central China Normal University, Wuhan 430079, China
Corresponding institution:
College of Information Technology, Journalism and Communications, Central China Normal University, China
Keywords:
Shape from images;Shape from silhouette;Viewing line;Visual hull
Abstract:
The visual hull of an object has been widely applied in many fields of computer vision. Methods for computing visual hulls fall mainly into two categories: surface-based approaches and volume-based approaches. Surface-based approaches are precise but lack robustness, while volume-based approaches are robust but inaccurate and slow; neither is fast enough for real-time applications. This paper proposes a novel method, based on viewing lines, to compute the visual hull rapidly. First, the viewing lines are computed from all contours of the object seen from different viewpoints. Second, the viewing lines are bounded and sampled into discrete points. Third, a sequence of images of the object is used to exclude points that lie outside the surface of the visual hull. Finally, a watertight surface of the visual hull is extracted from the remaining points. Experiments with real data confirm the speed of the proposed method. (C) 2013 Elsevier GmbH. All rights reserved.
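The point-discarding test in the abstract's third step (keep only sample points whose projection falls inside every silhouette) is the core silhouette-consistency check behind any visual hull method. A minimal sketch of that check, assuming three orthographic cameras aligned with the grid axes rather than the paper's calibrated viewing-line setup:

```python
import numpy as np

def carve(silhouettes, n):
    """Keep the grid points whose projection lies inside every silhouette.
    silhouettes maps a projection axis (0, 1, or 2) to an (n, n) boolean
    mask; projecting along that axis drops the corresponding coordinate."""
    coords = np.indices((n, n, n))
    keep = np.ones((n, n, n), dtype=bool)
    for axis, sil in silhouettes.items():
        # the two coordinates that survive projection along `axis`
        u, v = [c for a, c in enumerate(coords) if a != axis]
        keep &= sil[u, v]
    return keep
```

Carving a sphere's three axis-aligned silhouettes, for example, recovers a superset of the sphere (its visual hull under these three views), from which a watertight surface could then be extracted as in the abstract's final step.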