- Title
- Abstract
- Keywords
- Experimental protocol
- Products
-
[2019 IEEE 46th Photovoltaic Specialists Conference (PVSC) - Chicago, IL, USA, 16-21 June 2019] Ge virtual substrates for high efficiency III-V solar cells: applications, potential and challenges
Abstract: Motion capture is an important technique with a wide range of applications in areas such as computer vision, computer animation, film production, and medical rehabilitation. Even with professional motion capture systems, the acquired raw data almost always contain noise and outliers. Numerous denoising methods have been developed, yet the problem remains challenging due to the high complexity of human motion and the diversity of real-life situations. In this paper, we propose a data-driven robust human motion denoising approach that mines the spatial-temporal patterns and the structural sparsity embedded in motion data. We first replace the commonly used whole-pose model with a finer-grained partlet model as the feature representation, to exploit the abundant similarities in local body-part posture and movement. Then, a robust dictionary learning algorithm is proposed to learn multiple compact and representative motion dictionaries from the training data in parallel. Finally, we reformulate human motion denoising as a robust structured sparse coding problem in which both the noise distribution information and the temporal smoothness of human motion are jointly taken into account. Compared with several state-of-the-art motion denoising methods on both synthetic and real noisy motion data, our method consistently yields better performance than its counterparts, and its outputs are much more stable. In addition, the training dataset for our method is much easier to set up than those of other data-driven methods.
Keywords: ℓ2,p-norm, robust dictionary learning, Microsoft Kinect, robust structured sparse coding, motion capture data, human motion denoising
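For reference, the ℓ2,p-norm named in the keywords is conventionally defined row-wise; the form below is the standard definition and is an assumption about the paper's notation, since this record does not reproduce the paper's own formulation:

\[
\|A\|_{2,p} \;=\; \Bigl(\sum_{i=1}^{n} \bigl\|a^{i}\bigr\|_{2}^{\,p}\Bigr)^{1/p}
\;=\; \Bigl(\sum_{i=1}^{n}\Bigl(\sum_{j=1}^{d} a_{ij}^{2}\Bigr)^{p/2}\Bigr)^{1/p},
\qquad 0 < p \le 2,
\]

where \(a^{i}\) is the \(i\)-th row of \(A \in \mathbb{R}^{n \times d}\). Choosing \(p < 2\) down-weights rows with large residuals, which is what makes residual terms of this kind robust to outliers.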
Updated on 2025-09-23 15:19:57
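As an illustration only, here is a minimal sketch of the generic pipeline the abstract describes (learn a motion dictionary from training frames, then sparse-code noisy frames against it, with a crude temporal-smoothness step). It is not the authors' partlet/ℓ2,p formulation; the scikit-learn stand-ins (DictionaryLearning, sparse_encode) and every name and parameter here are assumptions made for demonstration.

```python
# Illustrative sketch only: generic dictionary-learning denoiser in the spirit of the
# abstract above. It does NOT implement the paper's partlet model, l2,p-norm, or
# structured sparsity; all parameters are arbitrary demonstration values.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)

# Toy "motion" data: frames x joint coordinates (200 frames, 30 degrees of freedom).
clean = np.cumsum(rng.normal(size=(200, 30)) * 0.05, axis=0)   # smooth trajectories
noisy = clean + rng.normal(scale=0.2, size=clean.shape)        # additive noise

# 1) Learn a compact dictionary from (ideally clean) training frames.
dico = DictionaryLearning(n_components=40, alpha=1.0, max_iter=200, random_state=0)
dico.fit(clean)

# 2) Sparse-code each noisy frame against the learned dictionary and reconstruct.
codes = sparse_encode(noisy, dico.components_, algorithm="lasso_lars", alpha=0.5)
denoised = codes @ dico.components_

# 3) Crude stand-in for temporal smoothness: moving average along the time axis.
kernel = np.ones(5) / 5.0
denoised = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, denoised)

print("noisy RMSE   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))
```

Per the abstract, the paper instead folds the noise model (robust ℓ2,p residuals) and the temporal-smoothness property directly into a structured sparse coding objective, rather than applying them as separate post-hoc steps as this sketch does.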
-
[2019 Photonics North (PN) - Quebec City, QC, Canada, 21-23 May 2019] Human Cardiac Tissue Collagen Polarity Revealed Using Polarimetric Second-Harmonic Generation Microscopy
Abstract: Motion capture is an important technique with a wide range of applications in areas such as computer vision, computer animation, film production, and medical rehabilitation. Even with professional motion capture systems, the acquired raw data almost always contain noise and outliers. Numerous denoising methods have been developed, yet the problem remains challenging due to the high complexity of human motion and the diversity of real-life situations. In this paper, we propose a data-driven robust human motion denoising approach that mines the spatial-temporal patterns and the structural sparsity embedded in motion data. We first replace the commonly used whole-pose model with a finer-grained partlet model as the feature representation, to exploit the abundant similarities in local body-part posture and movement. Then, a robust dictionary learning algorithm is proposed to learn multiple compact and representative motion dictionaries from the training data in parallel. Finally, we reformulate human motion denoising as a robust structured sparse coding problem in which both the noise distribution information and the temporal smoothness of human motion are jointly taken into account. Compared with several state-of-the-art motion denoising methods on both synthetic and real noisy motion data, our method consistently yields better performance than its counterparts, and its outputs are much more stable. In addition, the training dataset for our method is much easier to set up than those of other data-driven methods.
Keywords: Microsoft Kinect, robust structured sparse coding, human motion denoising, motion capture data, robust dictionary learning, ℓ2,p-norm
Updated on 2025-09-19 17:13:59