Comparative Analysis of Various YOLO Models for Sign Language Recognition with a Specific Dataset
Abstract
Understanding and producing sign language are core communication tasks between hearing persons and deaf and mute persons, and vice versa. To enhance sign-language-based communication, several models have been developed that translate gestures into words, rendering sign language in an understandable format. The goal of this research paper is to analyse and compare various You Only Look Once (YOLO) models on the sign language recognition (SLR) problem. YOLO is a fast and efficient convolutional neural network (CNN) variant that provides a strong solution for sign language problems. Comparing different YOLO models on an Indian Sign Language (ISL) dataset can identify the YOLO model best suited to SLR; the proposed work therefore uses the ISign benchmark dataset. The ISL-based comparative analysis is implemented in Python, where various performance metrics are calculated to select the best YOLO model. This provides a fast and efficient means of recognizing sign gestures.
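As a minimal sketch of the kind of comparison the abstract describes, the snippet below computes common object-detection metrics (IoU, precision, recall, F1) that could be tallied per YOLO variant. The box coordinates and per-model counts are hypothetical placeholders, not results from the ISign dataset or the paper.

```python
# Illustrative sketch only: typical per-model detection metrics used when
# comparing YOLO variants. All numbers below are hypothetical placeholders.

def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def detection_metrics(tp, fp, fn):
    """Precision, recall and F1 from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# A detection is usually counted as a true positive when IoU >= 0.5.
print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # → 0.143

# Hypothetical per-model counts, purely for illustration of the comparison.
for name, (tp, fp, fn) in {"YOLOv3": (80, 20, 15), "YOLOv5": (90, 10, 8)}.items():
    p, r, f1 = detection_metrics(tp, fp, fn)
    print(name, round(p, 3), round(r, 3), round(f1, 3))
```

In practice, frameworks report mAP over a range of IoU thresholds rather than a single F1, but the per-detection bookkeeping follows the same pattern shown here.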