Utilizing Silhouette and Head Information for Improved Cloth-changing Person Re-Identification

Authors

  • Yuzhang Li

DOI:

https://doi.org/10.56028/aetr.11.1.614.2024

Keywords:

Cloth-changing Person ReID; Silhouette and Head Information.

Abstract

In recent years, significant progress has been made in person re-identification (ReID). However, as the time span between observations grows, people often change their clothes, and the performance of standard person re-identification methods can degrade in this setting. In this article, we focus on the more challenging problem of cloth-changing person re-identification from a single RGB image and on improving the accuracy and robustness of traditional recognition methods. We propose a cloth-changing Feature Regularization learning framework that fuses Silhouette and Head information (SHFR): a two-stream framework that transfers the silhouette and head information learned by an auxiliary stream to the main stream, supplementing it with features unrelated to clothing. Specifically, the main stream is a conventional cloth-changing person re-identification network. In the auxiliary stream, we use the DeepLabV3 semantic segmentation model to extract person silhouette features and the FCHD fully convolutional head detection model to extract head information carrying high-level semantics. We concatenate the head information and background information with the silhouette as input to the backbone for training. To exploit the silhouette and head information, we regularize the cloth-changing features across both the main and auxiliary streams, encouraging the main-stream model to attend to clothing-irrelevant features and improving recognition accuracy. Extensive experiments on the PRCC dataset show that our method achieves highly competitive performance.
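The pipeline described above can be illustrated with a short PyTorch sketch. This is a minimal sketch, not the authors' released implementation: it assumes torchvision's pretrained DeepLabV3 (PASCAL VOC person class, index 15) for silhouette extraction, treats the FCHD detections as a precomputed binary head mask (the head_mask input is hypothetical), uses ResNet-50 backbones for both streams, and realizes the cross-stream feature regularization as a simple L2 penalty.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models
from torchvision.models.segmentation import deeplabv3_resnet50

PERSON_CLASS = 15  # person index in the PASCAL VOC label set used by torchvision's DeepLabV3

def extract_silhouette(rgb, seg_model):
    """Binary person mask from a pretrained DeepLabV3 (seg_model must be in eval mode)."""
    with torch.no_grad():
        logits = seg_model(rgb)["out"]                # (N, 21, H, W)
    return (logits.argmax(1) == PERSON_CLASS).float().unsqueeze(1)  # (N, 1, H, W)

class TwoStreamSHFR(nn.Module):
    """Sketch of the two-stream design: a main RGB stream plus an auxiliary
    stream fed the silhouette concatenated with a head mask and a background
    mask, with an L2 regularizer pulling main-stream features toward the
    clothing-irrelevant auxiliary features."""
    def __init__(self, num_ids):
        super().__init__()
        self.main = models.resnet50(weights="IMAGENET1K_V1")
        self.main.fc = nn.Identity()                  # keep the 2048-d pooled features
        self.aux = models.resnet50(weights="IMAGENET1K_V1")
        self.aux.fc = nn.Identity()
        self.classifier = nn.Linear(2048, num_ids)

    def forward(self, rgb, silhouette, head_mask):
        background = 1.0 - silhouette                 # background channel from the silhouette
        aux_in = torch.cat([silhouette, head_mask, background], dim=1)  # (N, 3, H, W)
        f_main = self.main(rgb)                       # clothing-sensitive RGB features
        f_aux = self.aux(aux_in)                      # shape/head features, clothing-irrelevant
        logits = self.classifier(f_main)
        reg_loss = F.mse_loss(f_main, f_aux.detach()) # cross-stream feature regularization
        return logits, reg_loss

# Usage sketch (lam is a hypothetical weighting hyperparameter):
#   seg = deeplabv3_resnet50(weights="DEFAULT").eval()
#   sil = extract_silhouette(rgb, seg)
#   logits, reg = model(rgb, sil, head_mask)
#   loss = F.cross_entropy(logits, labels) + lam * reg

Whether the auxiliary features are detached (teacher-style) or trained jointly is a design choice the abstract does not pin down; the L2 form used here is only one plausible instantiation of the regularization.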

Published

2024-07-18