Optimization of Human Posture Recognition based on Multi-view Skeleton Data Fusion
DOI:
https://doi.org/10.5755/j01.itc.53.2.36044

Keywords:
human posture recognition, vision sensor, multi-view, data fusion, coordinate transformation

Abstract
This research introduces a novel method for fusing multi-view skeleton data to address the limitations of a single vision sensor in capturing motion data, such as skeletal jitter, self-occlusion, and the reduced accuracy of three-dimensional joint coordinates caused by occlusion from environmental objects. Our approach employs two Kinect vision sensors concurrently to capture motion data from distinct viewpoints, extracts skeletal data from each, and then transforms the two sets of skeleton data into a unified world coordinate system through coordinate conversion. To optimize the fusion process, we assess the contribution of each joint based on human posture orientation and data smoothness, enabling us to fine-tune the weight ratio during data fusion and ultimately produce a reliable representation of human posture. We validate the method using the public FMS dataset for data fusion and model training. Experimental results demonstrate a substantial improvement in the smoothness of the skeleton data, higher data accuracy, and an effective improvement in human posture recognition after applying this data fusion method.
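The two core steps the abstract describes, mapping each sensor's joints into a shared world frame via a rigid transform, then fusing the streams with per-joint weights, can be sketched as follows. This is an illustrative sketch only, not the paper's implementation; the function names, the use of inverse frame-to-frame jitter as a smoothness weight, and the NumPy representation are all assumptions made for clarity.

```python
import numpy as np

def to_world(joints, R, t):
    """Map an (N, 3) array of camera-frame joint positions into world
    coordinates using a rigid transform: p_world = R @ p_cam + t."""
    return joints @ R.T + t

def smoothness_weights(seq_a, seq_b, eps=1e-6):
    """Hypothetical per-joint weighting: weight each sensor by the inverse of
    its mean frame-to-frame joint displacement (less jitter -> more weight).
    seq_a, seq_b are (T, N, 3) sequences already in world coordinates.
    Returns (N,) weights for sensor A; sensor B gets 1 - w."""
    jitter_a = np.linalg.norm(np.diff(seq_a, axis=0), axis=2).mean(axis=0)
    jitter_b = np.linalg.norm(np.diff(seq_b, axis=0), axis=2).mean(axis=0)
    inv_a, inv_b = 1.0 / (jitter_a + eps), 1.0 / (jitter_b + eps)
    return inv_a / (inv_a + inv_b)

def fuse_skeletons(world_a, world_b, w_a):
    """Per-joint weighted fusion of two (N, 3) skeletons; w_a is an (N,)
    weight vector for sensor A."""
    w = w_a[:, None]
    return w * world_a + (1.0 - w) * world_b
```

In practice the rigid transform (R, t) between the two Kinect views would come from an extrinsic calibration step, and the weights would also incorporate the posture-orientation term described in the abstract; the sketch above covers only the smoothness component.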
License
Copyright terms are indicated in the Republic of Lithuania Law on Copyright and Related Rights, Articles 4-37.