Optimization of Human Posture Recognition based on Multi-view Skeleton Data Fusion

Authors

  • Yahong Xu, Faculty of Information Engineering and Automation, Kunming University of Science and Technology, 665000, Yunnan, China
  • Shoulin Wei, Faculty of Information Engineering and Automation, Kunming University of Science and Technology, 665000, Yunnan, China
  • Jibin Yin, Faculty of Information Engineering and Automation, Kunming University of Science and Technology, 665000, Yunnan, China

DOI:

https://doi.org/10.5755/j01.itc.53.2.36044

Keywords:

human posture recognition, vision sensor, multi-view, data fusion, coordinate transformation

Abstract

This research introduces a novel method for fusing multi-view skeleton data to address the limitations of a single vision sensor in capturing motion data, such as skeletal jitter, self-occlusion of the pose, and the reduced accuracy of three-dimensional coordinates of human skeletal joints caused by occlusion from environmental objects. Our approach employs two Kinect vision sensors concurrently to capture motion data from distinct viewpoints, extract skeletal data, and subsequently harmonize the two sets of skeleton data into a unified world coordinate system through coordinate conversion. To optimize the fusion process, we assess the contribution of each joint based on human posture orientation and data smoothness, which allows us to fine-tune the weight ratio during data fusion and ultimately produce a dependable representation of human posture. We validate our methodology on the public FMS dataset for data fusion and model training. Experimental findings demonstrate that this fusion method substantially improves the smoothness of the skeleton data, increasing data accuracy and effectively enhancing human posture recognition.
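The two core steps in the abstract, converting each sensor's joint coordinates into a shared world frame and then blending the views with per-joint weights, can be sketched as below. This is a minimal illustration, not the paper's implementation: the rigid-body transform `(R, t)`, the joint positions, and the weights are made-up values standing in for the paper's calibration and contribution-based weighting.

```python
import numpy as np

def to_world(joints, R, t):
    """Map camera-frame joints to the world frame: x_world = R @ x_cam + t.

    joints: (N, 3) array of 3D joint positions (row vectors).
    """
    return joints @ R.T + t

def fuse(joints_a, joints_b, w_a):
    """Per-joint weighted average of two aligned skeletons.

    w_a: (N,) array of weights in [0, 1] for view A; view B gets 1 - w_a.
    """
    w_a = w_a[:, None]  # broadcast the per-joint weight over x, y, z
    return w_a * joints_a + (1.0 - w_a) * joints_b

# Illustrative transform: camera B is rotated 90 degrees about the y axis
# and translated relative to the world frame.
theta = np.pi / 2
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0,           1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([0.5, 0.0, -0.2])

# A 3-joint skeleton already in world coordinates (view A's frame here).
joints_a = np.array([[0.0, 1.0, 2.0],
                     [0.1, 1.5, 2.0],
                     [-0.1, 0.5, 2.1]])

# Simulate view B by applying the inverse transform: x_cam = R^T (x_world - t).
joints_b_cam = (joints_a - t) @ R

# Align view B to the world frame, then fuse with illustrative weights.
joints_b = to_world(joints_b_cam, R, t)
weights = np.array([0.8, 0.5, 0.3])
fused = fuse(joints_a, joints_b, weights)
```

Since both views describe the same pose, the fused skeleton here coincides with `joints_a`; in practice the two measurements differ, and the weighting decides how much each view contributes per joint.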

Published

2024-06-26

Issue

Section

Articles