Signosphere: AI-Driven Sign Language Communication with Deep Learning Technology
Abstract
This paper introduces Signosphere, a real-time, AI-powered sign language translation application that aims to reduce the communication barriers faced by people with speech disabilities who rely on sign language. The system combines computer vision, deep learning, and natural language processing to translate sign language gestures into sentences that can be displayed as text or played as audio in different languages. Signosphere is a mobile application written in Kotlin using Android Studio; it interacts with Python-based Vision Transformer (ViT) models to recognise gestures. Using tools such as MediaPipe, OpenCV, and Google Text-to-Speech (gTTS), the system accurately interprets dynamic hand gestures and assembles them into grammatically correct sentences. The model is tuned for real-time use and runs smoothly even on computationally constrained devices. Through its accessibility and scalability, Signosphere enables hassle-free communication for the deaf and mute population, promoting inclusiveness and reducing barriers to interaction.
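The abstract describes a pipeline of hand-landmark extraction (MediaPipe/OpenCV), gesture classification (a fine-tuned ViT model), and speech synthesis (gTTS). The sketch below illustrates that flow under stated assumptions; it is not the paper's released code. The model file `vit_gestures.pt`, the `LABELS` vocabulary, and the `classify_frame`/`speak` helpers are hypothetical placeholders for whatever interface the authors' Python backend exposes.

```python
# Minimal sketch of the landmark -> ViT -> speech pipeline, assuming a webcam feed
# and a hypothetical TorchScript gesture classifier saved as "vit_gestures.pt".
import cv2
import mediapipe as mp
import torch
from gtts import gTTS

# MediaPipe Hands extracts 21 3-D landmarks per detected hand.
hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)

# Hypothetical fine-tuned classifier mapping a flattened landmark vector to a gloss label.
model = torch.jit.load("vit_gestures.pt")
model.eval()
LABELS = ["hello", "thanks", "yes", "no"]  # placeholder vocabulary

def classify_frame(frame_bgr):
    """Return the predicted gloss for one video frame, or None if no hand is found."""
    result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    landmarks = result.multi_hand_landmarks[0].landmark
    features = torch.tensor([[p.x, p.y, p.z] for p in landmarks]).flatten().unsqueeze(0)
    with torch.no_grad():
        pred = model(features).argmax(dim=1).item()
    return LABELS[pred]

def speak(sentence, lang="en"):
    """Convert the recognised sentence to audio with Google Text-to-Speech."""
    gTTS(text=sentence, lang=lang).save("output.mp3")

# Capture frames, accumulate glosses, and speak the resulting sentence on exit.
cap = cv2.VideoCapture(0)
glosses = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gloss = classify_frame(frame)
    if gloss and (not glosses or glosses[-1] != gloss):
        glosses.append(gloss)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
if glosses:
    speak(" ".join(glosses))
```

In the actual system this recognition step runs on the Python side while the Kotlin/Android client handles capture and playback; the single-process loop above is only to make the data flow concrete.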