Optimized Multi-Scale Deep Learning Models for Traffic Light Detection: A Comparative Evaluation of YOLO, FPB, and PANet Using Kaggle Dataset
Abstract
Introduction: In the context of autonomous driving, accurate and efficient recognition of traffic lights and signs is essential for ensuring road safety and effective vehicle control. Recent advances in deep learning have led to the development of powerful object detection algorithms, but their comparative effectiveness under real-world driving conditions remains underexplored.
Objectives: This study aims to comparatively evaluate three leading object detection algorithms—YOLO (You Only Look Once), FPB (Feature Pyramid Block), and PANet (Path Aggregation Network)—with a specific focus on their ability to detect traffic lights and signs in autonomous driving environments.
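The three approaches differ chiefly in how they fuse features across scales: an FPN/FPB-style top-down path enriches fine-resolution maps with coarse semantics, while PANet adds a further bottom-up path so that fine localization cues also reach the coarse levels. As a rough, simplified illustration of these two fusion directions (the helper names `fpn_top_down` and `panet_bottom_up` are illustrative, not taken from the evaluated implementations):

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour 2x upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def downsample2x(x):
    # 2x2 average pooling (stride 2) of a (C, H, W) feature map.
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def fpn_top_down(features):
    # FPN-style fusion: start from the coarsest map and add its
    # upsampled version into each successively finer level.
    out = [features[-1]]
    for f in reversed(features[:-1]):
        out.append(f + upsample2x(out[-1]))
    return out[::-1]  # ordered finest -> coarsest

def panet_bottom_up(pyramid):
    # PANet augments the FPN pyramid with an extra bottom-up path:
    # the finest level is propagated back into the coarser levels.
    out = [pyramid[0]]
    for p in pyramid[1:]:
        out.append(p + downsample2x(out[-1]))
    return out
```

In a real detector the additive fusion is preceded by 1x1 and 3x3 convolutions; the sketch keeps only the routing of information, which is what distinguishes the architectures for small objects such as distant traffic lights.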
Methods: Using a curated dataset from Kaggle, the three models are evaluated across a range of environmental conditions. The evaluation metrics cover detection accuracy, computational efficiency, and resource consumption, with particular attention to scenarios involving occlusion and lighting variation.
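Detection accuracy for object detectors is conventionally grounded in intersection-over-union (IoU) matching between predicted and ground-truth boxes, from which precision and recall (and ultimately mAP) are derived. A minimal sketch of that matching step, with illustrative function names rather than the paper's actual evaluation code:

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2) corner coordinates.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(predictions, ground_truth, iou_thr=0.5):
    # Greedy matching: each prediction may claim at most one
    # still-unmatched ground-truth box whose IoU clears the threshold.
    matched = set()
    tp = 0
    for pred in predictions:
        for i, gt in enumerate(ground_truth):
            if i not in matched and iou(pred, gt) >= iou_thr:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(predictions) if predictions else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

Full mAP additionally sorts predictions by confidence and averages precision over recall levels and IoU thresholds; the sketch shows only the single-threshold core on which those metrics rest.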
Results: The experimental results reveal the distinct strengths and weaknesses of each algorithm. While some models excel in overall detection accuracy, others demonstrate superior efficiency or better performance in recognizing small-scale traffic elements under challenging conditions.
Conclusions: This comparative analysis provides valuable guidance for selecting appropriate object detection algorithms in real-time intelligent transportation systems. The findings contribute to enhancing the visual perception capabilities of autonomous vehicles, promoting safer and more reliable transportation technologies.