A Comprehensive Review on Road Condition Recognition Systems for Intelligent Driving: Deep Learning, Sensor Fusion, and Embedded Implementation Perspectives
Abstract
This review summarizes recent progress in road-condition recognition systems for intelligent driving, with an emphasis on deep learning, multi-sensor fusion, and embedded deployment on edge devices. First, it categorizes methods proposed between 2020 and 2025 into two tasks, vision-based road damage detection and global road surface condition classification, and surveys the prevailing architectures, datasets, and evaluation metrics. It then examines the performance of YOLOv8 and other state-of-the-art object detectors and segmentation models for real-time identification of potholes, cracks, and surface anomalies, considering accuracy, latency, and robustness across diverse environmental conditions. The discussion next turns to multi-sensor fusion approaches that combine camera data with mmWave radar, temperature and humidity readings, and visual texture information to detect hazardous surface states such as wet roads, snow, and black ice. Embedded implementation is then addressed by comparing model complexity, frame rate, response time, and power consumption across platforms including NVIDIA Jetson, Raspberry Pi, and other resource-constrained hardware. Limitations of current work are identified with respect to dataset availability and diversity, low-light and adverse-weather performance, domain generalization, and sensor calibration and synchronization. Finally, the paper outlines directions for future research on accurate, cost-effective, and scalable road-condition monitoring frameworks, including lightweight architectures, self-supervised and domain-adaptive learning, standardized benchmarks, and deployment-ready designs for large-scale intelligent transportation systems.
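As an illustration of the kind of real-time evaluation discussed above, the sketch below measures per-frame detection latency for a YOLOv8 model with the ultralytics Python package. It is a minimal example, not a method from any surveyed paper: `yolov8n.pt` is the stock COCO-pretrained checkpoint, and the video path is hypothetical; an actual pothole or crack detector would use custom-trained weights.

```python
# Minimal per-frame latency probe for a YOLOv8 detector (illustrative sketch).
# Assumes `pip install ultralytics opencv-python`; yolov8n.pt is the stock
# COCO checkpoint, not a road-damage model, and road.mp4 is a placeholder.
import time

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # swap in custom road-damage weights here
cap = cv2.VideoCapture("road.mp4")  # hypothetical dashcam clip

latencies = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    t0 = time.perf_counter()
    model(frame, verbose=False)     # one forward pass per frame
    latencies.append(time.perf_counter() - t0)

cap.release()
if latencies:
    avg = sum(latencies) / len(latencies)
    print(f"frames: {len(latencies)}  avg latency: {avg * 1000:.1f} ms  "
          f"approx FPS: {1.0 / avg:.1f}")
```

On embedded targets such as Jetson or Raspberry Pi, the same loop gives a first-order frame-rate estimate before any platform-specific optimization (e.g., TensorRT export or quantization) is applied.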
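The heterogeneous fusion described above can, in its simplest form, be rule-based decision logic over per-sensor outputs. The sketch below is a hypothetical illustration rather than a technique from the surveyed literature: it fuses a camera-derived surface-texture class, temperature and humidity readings, and a radar-derived cue into a coarse hazard label, with all class names and thresholds chosen arbitrarily for demonstration.

```python
# Hypothetical rule-based fusion of camera, radar, and weather cues into a
# coarse road-surface hazard label. All thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    texture_class: str           # e.g. "dry", "wet", "snow" from a vision model
    temperature_c: float         # ambient temperature sensor reading
    humidity_pct: float          # relative humidity sensor reading
    radar_low_backscatter: bool  # mmWave cue associated with a smooth icy film

def classify_surface(f: SensorFrame) -> str:
    """Fuse per-sensor cues into a single surface-state label."""
    near_freezing = -3.0 <= f.temperature_c <= 1.0
    moist_air = f.humidity_pct >= 85.0
    # Black ice is visually inconspicuous: when the camera reports "dry",
    # fall back on the weather and radar cues to flag the risk.
    if (f.texture_class == "dry" and near_freezing
            and moist_air and f.radar_low_backscatter):
        return "suspected black ice"
    if f.texture_class == "snow":
        return "snow-covered"
    if f.texture_class == "wet":
        return "icy" if f.temperature_c <= 0.0 else "wet"
    return "dry"

print(classify_surface(SensorFrame("dry", -1.5, 92.0, True)))
# -> suspected black ice
```

The point of the sketch is the structure, not the thresholds: each modality contributes a cue that is cheap to compute on resource-constrained hardware, and the fusion step resolves cases, such as black ice, where no single sensor is reliable on its own.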