A Novel Employee Re-identification Method Based on Attention-free Capsule Network for Factory Surveillance Images

Authors

  • Xinbo Zhao Liaoning University of International Business and Economics, School of Management, Dalian, China, 116052
  • Nan Zhang Liaoning University of International Business and Economics, School of Management, Dalian, China, 116052
  • Wei Ding Dalian Polytechnic University, School of Fashion, Dalian, China, 116034

DOI:

https://doi.org/10.5755/j01.itc.54.3.40913

Keywords:

least squares generative adversarial network, wavelet-Contourlet transform, image preprocessing, Gaussian mixture model, foreground segmentation, attention-free capsule network

Abstract

Traditional methods suffer from low recognition rates and poor robustness when handling complex factory surveillance images of employees, so a new employee re-identification method based on an Attention-free Capsule Network is proposed for factory surveillance images. A Least Squares Generative Adversarial Network (LSGAN) is used to restore surveillance images whose content is missing or damaged due to lighting, occlusion, noise, and similar factors. The wavelet-Contourlet transform is then applied to enhance image detail and clarity, improving the accuracy of subsequent employee re-identification. A Gaussian Mixture Model (GMM) is used to accurately segment the employee foreground in each image, and the segmented foreground is fed into the Attention-free Capsule Network. Features are extracted through multi-layer convolution and pooling operations, and a dynamic routing mechanism extracts and aggregates employee identity features; after training, the network outputs employee identity labels, achieving efficient employee re-identification on factory surveillance images. The experimental results show that, compared with the visual attention, KISS+, and center-and-scale-prediction methods, the proposed method offers strong anti-interference capability and adaptability. On the key indicators, the proposed method achieved recognition accuracies of 95.8% at Rank-1, 98.4% at Rank-5, and 99.5% at Rank-10. These comparative results demonstrate that the proposed method has strong image-processing capability as well as high recognition accuracy and robustness.
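To make the foreground-segmentation step concrete, the sketch below uses OpenCV's MOG2 background subtractor, a standard Gaussian-mixture-model implementation. It is an illustrative sketch only: the video path, subtractor parameters, and thresholding are assumptions, not the settings reported in the paper.

```python
import cv2

# Illustrative GMM-based foreground segmentation (OpenCV MOG2).
# history, varThreshold and the clip name are assumed values, not the paper's parameters.
bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

cap = cv2.VideoCapture("factory_corridor.avi")   # hypothetical surveillance clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg_model.apply(frame)                                    # per-pixel GMM decision -> foreground mask
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)   # drop the shadow label (127), keep foreground (255)
    employee_fg = cv2.bitwise_and(frame, frame, mask=fg_mask)          # segmented employee region passed on to the network
cap.release()
```

The identity features are aggregated by routing-by-agreement between capsule layers. The NumPy sketch below follows the standard dynamic-routing formulation (coupling coefficients via softmax, squash non-linearity, agreement update of the routing logits); the capsule counts and dimensions are placeholders and do not describe the architecture used in the paper.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash non-linearity: short vectors shrink toward 0, long vectors approach unit length."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iters=3):
    """
    Routing-by-agreement between capsule layers.
    u_hat: prediction vectors of shape (num_in_caps, num_out_caps, out_dim).
    Returns the output capsule vectors, shape (num_out_caps, out_dim).
    """
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                                # routing logits
    for _ in range(num_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)       # coupling coefficients (softmax over output capsules)
        s = (c[..., None] * u_hat).sum(axis=0)                     # weighted sum of predictions
        v = squash(s)                                              # output capsule vectors
        b = b + (u_hat * v[None, ...]).sum(axis=-1)                # agreement strengthens the corresponding routes
    return v

# Toy usage: 32 low-level capsules voting for 8 identity capsules of dimension 16 (placeholder sizes)
u_hat = np.random.randn(32, 8, 16) * 0.1
identity_caps = dynamic_routing(u_hat)
print(identity_caps.shape)   # (8, 16); the vector length reflects confidence in each identity capsule
```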

Published

2025-10-14

Section

Articles