Secure Hand Landmark-Enhanced CNN: Advancing Safe Sign Language Interpretation
Abstract
Introduction: Sign language is an effective tool for bridging the communication gap between deaf and hard-of-hearing individuals and the hearing community. It is a complex, structured language whose meaning depends on a variety of gestures, facial expressions, hand signals, and movements, which makes accurate automatic detection and interpretation challenging. Advancements in machine learning, particularly in deep learning with Convolutional Neural Networks (CNNs), have opened new avenues in computer vision for improving object detection and, in turn, the accuracy of sign language recognition.
Objectives: This article proposes a novel CNN-based approach to interpreting American Sign Language that also addresses the security aspects of the complete model.
Methods: The approach blends hand landmark characteristics with CNN feature extraction to improve the precision, accuracy, and security of Indian sign language recognition. A key strength of the proposed method is that the finer details often missed by a standard CNN are captured by this landmark-enhanced CNN technique. In our review and analysis, the proposed CNN outperformed all other compared models in correctly detecting sign language hand gestures.
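The fusion described above can be sketched as follows. This is a minimal illustrative example, not the authors' actual model: the layer sizes, the random "pretrained" weights, and the function names are assumptions introduced purely to show how flattened hand-landmark coordinates can be concatenated with a CNN's global feature vector before classification.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_LANDMARKS = 21   # assumed MediaPipe-style hand landmark count
CNN_FEATURES = 128   # assumed size of the CNN's global feature vector
NUM_CLASSES = 26     # e.g. a fingerspelling alphabet (illustrative)

def fuse_and_classify(landmarks, cnn_features, weights, bias):
    """Concatenate (21, 3) landmark coordinates with CNN features,
    then apply a linear layer followed by a numerically stable softmax."""
    fused = np.concatenate([landmarks.ravel(), cnn_features])  # 63 + 128 dims
    logits = fused @ weights + bias
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Illustrative stand-ins for detector output and CNN activations.
landmarks = rng.random((NUM_LANDMARKS, 3))        # (x, y, z) per landmark
cnn_features = rng.random(CNN_FEATURES)
weights = rng.standard_normal((NUM_LANDMARKS * 3 + CNN_FEATURES, NUM_CLASSES))
bias = np.zeros(NUM_CLASSES)

probs = fuse_and_classify(landmarks, cnn_features, weights, bias)
print(probs.shape)  # one probability per sign class
```

The concatenation step is where landmark geometry supplements the CNN: fine finger positions survive in the 63 landmark dimensions even when the convolutional features alone would miss them.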
Results: The proposed CNN is resilient to variations in facial expression, lighting, signer orientation, and other background factors.
Conclusions: By focusing on the accuracy and security of sign language interpretation, the proposed model offers a more reliable and dependable communication medium for the deaf and hard-of-hearing community.