AI-Powered Hand Gesture Recognition based on Adaptive Thresholding and Gaussian Blur for Human-Computer Interaction using CNN
Abstract
A real-time system for recognizing American Sign Language (ASL) gestures has been developed, using adaptive thresholding and Gaussian blur as preprocessing for a Convolutional Neural Network (CNN), to help deaf and hard-of-hearing people communicate with others. The main objective of this work is to build a model that recognizes hand gestures from fingerspelling and combines individual gestures to form words. The study classifies ASL fingerspelling movements captured by a webcam using CNNs together with established computer vision techniques. With adaptive thresholding and Gaussian blur applied before CNN classification, the experiments improved gesture prediction accuracy to 98%. The approach achieves this performance by integrating the two preprocessing algorithms, overcoming traditional limitations in gesture detection. Although the current focus is on American Sign Language, the study shows the potential for extending the system to additional sign languages.
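The preprocessing pipeline named in the abstract, Gaussian blur followed by adaptive thresholding, can be sketched as follows. This is a minimal illustrative version in pure NumPy; the paper itself does not specify an implementation, and a production system would typically use a library such as OpenCV. All function names and parameter values (kernel size, block size, offset constant) here are assumptions, not the authors' settings.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Build a normalized 2-D Gaussian kernel (assumed size/sigma).
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, kernel):
    # Naive same-size convolution with edge padding.
    pad = kernel.shape[0] // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    kh, kw = kernel.shape
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def adaptive_threshold(img, block=11, c=2):
    # A pixel is foreground if it exceeds its local mean minus a constant C,
    # so the threshold adapts to local brightness across the image.
    local_mean = convolve2d(img, np.ones((block, block)) / block**2)
    return (img > local_mean - c).astype(np.uint8) * 255

def preprocess(gray):
    # Blur first to suppress noise, then binarize adaptively; the binary
    # hand silhouette would then be fed to the CNN classifier.
    blurred = convolve2d(gray.astype(float), gaussian_kernel())
    return adaptive_threshold(blurred)

# Usage on a synthetic grayscale frame with a bright "hand" region:
frame = np.zeros((20, 20))
frame[5:15, 5:15] = 200
binary = preprocess(frame)
```

The design point is that adaptive thresholding computes a per-pixel threshold from the local neighborhood, which is what lets segmentation cope with uneven lighting better than a single global threshold.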