Research News


Dr. Shiyu Chen Publishes a Paper in the SCI Journal Sensors

Date: 2022-10-03 10:54:36 | Source: 永利yl23411官网 Office of Research and Graduate Administration | Author: Yan Junhui

Title: ULMR: An Unsupervised Learning Framework for Mismatch Removal

Authors: Cailong Deng, Shiyu Chen*, Yong Zhang, Qixin Zhang, Feiyan Chen

Source publication: Sensors (MDPI)

DOI: https://doi.org/10.3390/s22166110

Year of publication: 2022

Document type: Article

Language: English

Abstract: Due to radiometric and geometric distortions between images, mismatches are inevitable, so a mismatch removal step is required to improve matching accuracy. Although deep learning methods have been shown to outperform handcrafted methods in specific scenarios, including image identification and point cloud classification, most learning methods are supervised and therefore susceptible to incorrect labeling, and labeling data is time-consuming. This paper takes advantage of deep reinforcement learning (DRL) and proposes a framework named unsupervised learning for mismatch removal (ULMR). Resorting to DRL, ULMR first scores each state–action pair guided by the output of a classification network; it then calculates the policy gradient of the expected reward; finally, by maximizing the expected reward over state–action pairs, the optimal network is obtained. Compared to supervised learning methods (e.g., NM-Net and LFGC), unsupervised learning methods (e.g., ULCM), and handcrafted methods (e.g., RANSAC and GMS), ULMR obtains higher precision, more remaining correct matches, and fewer remaining false matches in testing experiments. Moreover, ULMR shows greater stability, better accuracy, and higher quality in application experiments, and it requires fewer sampling iterations and shows higher compatibility with other classification networks in ablation experiments, indicating its great potential for further use.
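The pipeline sketched in the abstract follows the standard REINFORCE recipe: a classification network scores each putative match, keep/discard decisions are sampled as actions, the sampled selection is rewarded, and the network is updated along the policy gradient of the expected reward. Below is a minimal, hypothetical PyTorch sketch of that idea under stated assumptions; the names (MatchClassifier, epipolar_reward, reinforce_step) and the placeholder reward are illustrative, not the actual ULMR implementation (see the source code link below).

```python
import torch
import torch.nn as nn

class MatchClassifier(nn.Module):
    """Maps N putative correspondences (x1, y1, x2, y2) to per-match inlier logits."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, matches: torch.Tensor) -> torch.Tensor:
        return self.net(matches).squeeze(-1)  # (N,) logits

def epipolar_reward(matches: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
    # Placeholder reward (assumption): in ULMR the reward would reflect the
    # geometric consistency of the matches kept by `actions`; here we return
    # the fraction of matches kept so the sketch runs end to end.
    return actions.sum() / matches.shape[0]

def reinforce_step(model, optimizer, matches):
    logits = model(matches)                     # state: per-match inlier scores
    dist = torch.distributions.Bernoulli(logits=logits)
    actions = dist.sample()                     # action: sampled keep/discard per match
    reward = epipolar_reward(matches, actions)  # scalar reward for this sample
    # Policy gradient: maximizing E[reward] <=> minimizing -reward * log pi(a|s)
    loss = -(reward.detach() * dist.log_prob(actions).sum())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward.item()

model = MatchClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
matches = torch.randn(512, 4)  # 512 toy putative correspondences
print(reinforce_step(model, optimizer, matches))
```

Because the reward is computed on sampled selections rather than ground-truth labels, no manual labeling is needed, which is the sense in which the framework is unsupervised.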

Keywords: unsupervised learning; mismatch removal; reinforcement learning; policy gradient; expected reward

Impact factor: 3.9

Paper link: https://www.mdpi.com/1424-8220/22/16/6110

Source code: https://github.com/csyhy1986/ULMR

Editor: Yan Junhui