Research on Pedestrian Detection Based on Multimodal Information Fusion

Authors

  • Xiaoping Yang, School of Information Science and Engineering, Guilin University of Technology, Guilin, Guangxi, 541004, China; Guangxi Key Laboratory of Embedded Technology and Intelligent System, Guilin University of Technology, Guilin, Guangxi, 541004, China
  • Zhehong Li, School of Information Science and Engineering, Guilin University of Technology, Guilin, Guangxi, 541004, China; Guangxi Key Laboratory of Embedded Technology and Intelligent System, Guilin University of Technology, Guilin, Guangxi, 541004, China
  • Yuan Liu, College of Intelligent Medicine and Biotechnology, Guilin Medical University, Guilin, Guangxi, 541004, China
  • Ran Huang, School of Information Science and Engineering, Guilin University of Technology, Guilin, Guangxi, 541004, China; Guangxi Key Laboratory of Embedded Technology and Intelligent System, Guilin University of Technology, Guilin, Guangxi, 541004, China
  • Kai Tan, School of Information Science and Engineering, Guilin University of Technology, Guilin, Guangxi, 541004, China; Guangxi Key Laboratory of Embedded Technology and Intelligent System, Guilin University of Technology, Guilin, Guangxi, 541004, China
  • Lin Huang, School of Information Science and Engineering, Guilin University of Technology, Guilin, Guangxi, 541004, China; Guangxi Key Laboratory of Embedded Technology and Intelligent System, Guilin University of Technology, Guilin, Guangxi, 541004, China

DOI:

https://doi.org/10.5755/j01.itc.52.4.33766

Keywords:

Multi-spectral pedestrian detection, Faster R-CNN, Generalized intersection over union, Feature fusion

Abstract

Automatic driving systems that rely on a single-modality sensor are susceptible to the external environment in pedestrian detection. This paper proposes a pedestrian detection method that fuses visible light and thermal infrared multimodal information. First, 1 × 1 convolution and dilated convolution are introduced into the residual network, and the ROIAlign method replaces ROIPooling for mapping candidate boxes to the feature layer, optimizing Faster R-CNN. Second, the generalized intersection over union (GIoU) loss is adopted as the loss function for prediction box localization regression. Finally, to explore how multimodal pedestrian detection performs when fusion occurs at different stages of the improved Faster R-CNN, four multimodal neural network structures are designed to fuse visible and thermal infrared images. Experimental results show that the proposed algorithm outperforms current mainstream detection algorithms on the KAIST dataset. Compared with the conventional ACF + T + THOG pedestrian detector, its AP is 8.38 percentage points higher, and its miss rate is 5.34 percentage points lower than that of the visible-light-only pedestrian detector.
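The GIoU loss augments IoU with a penalty based on the smallest box enclosing both the prediction and the ground truth, so the regression gradient stays informative even when the two boxes do not overlap. The sketch below is a minimal, generic PyTorch implementation of the 1 − GIoU regression loss, assuming boxes in (x1, y1, x2, y2) format; it illustrates the formula rather than reproducing the authors' training code.

```python
import torch

def giou_loss(pred, target, eps=1e-7):
    """Mean 1 - GIoU over a batch of axis-aligned boxes.

    pred, target: float tensors of shape (N, 4) in (x1, y1, x2, y2) format.
    """
    # Intersection area
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    # Union area and IoU
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / (union + eps)

    # Smallest enclosing box C and the GIoU penalty term
    cx1 = torch.min(pred[:, 0], target[:, 0])
    cy1 = torch.min(pred[:, 1], target[:, 1])
    cx2 = torch.max(pred[:, 2], target[:, 2])
    cy2 = torch.max(pred[:, 3], target[:, 3])
    c_area = (cx2 - cx1) * (cy2 - cy1)

    giou = iou - (c_area - union) / (c_area + eps)
    return (1.0 - giou).mean()
```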
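The four network structures differ in where the visible and thermal branches are merged. Purely as an illustration, the sketch below shows one plausible mid-level ("halfway") fusion block that concatenates the two feature maps and reduces channels with a 1 × 1 convolution; the class name, channel width, and placement in the backbone are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class HalfwayFusion(nn.Module):
    """Fuse visible and thermal feature maps of equal shape by
    channel concatenation followed by a 1x1 convolution."""

    def __init__(self, channels=256):
        super().__init__()
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, feat_rgb, feat_ir):
        # Both inputs: (N, channels, H, W) feature maps from parallel branches.
        fused = torch.cat([feat_rgb, feat_ir], dim=1)
        return self.relu(self.reduce(fused))
```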

Published

2024-01-12

Section

Articles