Developing Multi-Channel Pulse Signal Analysis for Enhanced Heart Rate Detection Using Facial Video Systems
Abstract
This paper introduces the Unified Pulse Detection from Complex Environments (UPDCE) Model, a deep learning framework for non-invasive heart rate detection from facial video. By leveraging advanced signal processing techniques and illumination correction, the study improves the accuracy and robustness of heart rate detection under varying lighting conditions. Trained on the UBFC-RPPG dataset, which contains video recordings captured under diverse illumination, the model combines convolutional neural networks (CNNs) and recurrent neural networks (RNNs) to analyze specific regions of interest, namely the forehead and chin. These regions are well suited to capturing pulse signals, since blood flow produces subtle changes in skin coloration there. Video frames, sampled at three-second intervals, pass through enhancement and normalization stages that improve data quality for subsequent analysis. Features are then extracted and analyzed temporally to detect and compute heart rate accurately. Particular emphasis is placed on overcoming challenges posed by diverse lighting and motion conditions. The system architecture ensures robust processing by incorporating techniques that optimize real-time operation and reduce computational load. The effectiveness of the UPDCE Model is validated through rigorous training and testing, demonstrating significant potential for real-world deployment in continuous health monitoring systems. This research advances remote photoplethysmography by highlighting methodological innovations and deployment strategies that improve the accuracy and reliability of heart rate detection from facial video analysis.
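The abstract describes extracting a pulse signal from facial regions of interest and recovering the heart rate from its temporal variation. The sketch below illustrates the final step of such a pipeline with a classical frequency-domain baseline rather than the paper's CNN/RNN model: given a per-frame mean intensity trace from an ROI (e.g. forehead or chin), it detrends and windows the signal and picks the dominant frequency in the physiological band. The function name, parameters, and synthetic test trace are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_heart_rate(roi_means, fps=30.0, min_bpm=40.0, max_bpm=180.0):
    """Estimate heart rate (BPM) from a 1-D pulse trace.

    roi_means: per-frame mean intensity (e.g. green channel) of a facial
    ROI such as the forehead or chin. This is a simple FFT baseline, not
    the UPDCE deep learning pipeline.
    """
    signal = np.asarray(roi_means, dtype=float)
    signal = signal - signal.mean()            # remove DC offset
    signal = signal * np.hanning(len(signal))  # reduce spectral leakage

    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)

    # Restrict to the plausible heart-rate band (40-180 BPM = 0.67-3 Hz)
    band = (freqs >= min_bpm / 60.0) & (freqs <= max_bpm / 60.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Synthetic ROI trace: a 1.2 Hz (72 BPM) pulse plus mild noise, 10 s at 30 fps
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1.0 / 30)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(len(t))
print(estimate_heart_rate(trace, fps=30.0))  # ≈ 72 BPM
```

In a full system, the trace would come from the enhanced and normalized video frames the abstract describes, and a learned temporal model can replace the FFT peak-picking to handle motion and lighting variation more robustly.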