ALSA: Adaptive Low-light Correction and Self-Attention Module for Vehicle Re-Identification
DOI: https://doi.org/10.37256/aie.4220232901

Keywords: vehicle re-identification, attention mechanism, image processing, deep learning

Abstract
Multi-Camera Vehicle Re-identification and Tracking (MCVRT) is a challenging task that involves identifying and tracking vehicles across multiple camera views in a surveillance network. Multi-Target Multi-Camera Tracking (MTMCT) and vehicle Re-Identification (Re-ID) are the two major technologies applied to MCVRT tasks. Variations in aspect ratio, occlusion, orientation, and lighting conditions make vehicle re-identification and multi-camera tracking difficult, and although some existing methods address these problems, they remain a significant challenge in the field. Additionally, most Re-ID datasets contain only images captured in well-lit environments, so the impact of dark images on the performance of existing models remains underexplored. This paper presents a new approach to the challenge of low-light images in vehicle re-identification that achieves state-of-the-art results on public datasets. Our approach is based on two key components: (i) an Adaptive Low-light correction and Self-Attention module (ALSA) for image pre-processing in vehicle Re-ID networks, and (ii) a new loss function called Log Triplet Loss (LT-Loss). We evaluated the proposed approach through computer simulations on the VeRi-776 dataset, where our model achieved a Rank@1 accuracy of 98.99% and outperformed commonly used models on dark images. Our study highlights the importance of considering lighting conditions in vehicle re-identification and provides a new approach to address this challenge.
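The abstract names LT-Loss but does not define it. As a rough illustration only, the sketch below assumes LT-Loss is a log-smoothed variant of the standard triplet loss (replacing the hinge `max(0, x)` with the smooth upper bound `log(1 + exp(x))`); the function name, margin value, and formulation are assumptions, not the authors' definition.

```python
import numpy as np

def log_triplet_loss(anchor, positive, negative, margin=0.3):
    """Hypothetical log-smoothed triplet loss for embedding vectors.

    Assumption: LT-Loss softens the triplet hinge with a softplus,
    log(1 + exp(d_ap - d_an + margin)), which upper-bounds
    max(0, d_ap - d_an + margin) and is differentiable everywhere.
    """
    d_ap = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)   # anchor-negative distance
    return np.log1p(np.exp(d_ap - d_an + margin))

# Easy triplet (negative far away): loss is near zero.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n_far = np.array([5.0, 0.0])
easy = log_triplet_loss(a, p, n_far)

# Hard triplet (negative close to the anchor): loss is larger.
n_near = np.array([0.2, 0.0])
hard = log_triplet_loss(a, p, n_near)
```

Unlike the plain triplet hinge, a softplus-style loss never saturates to exactly zero, so easy triplets still contribute a small gradient.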