Volume 15, Issue 3, March 2026
MULTI-LEVEL SIGN LANGUAGE RECOGNITION SYSTEM
Abstract: Sign language is an important medium of communication for hearing- and speech-impaired people. Nevertheless, communication gaps persist between sign language users and the general public owing to a lack of understanding of sign language gestures. This paper presents a Multi-Level Indian Sign Language (ISL) Recognition System that can identify word-level as well as sentence-level gestures from video inputs. The proposed system employs a hybrid deep learning model that combines Convolutional Neural Networks (CNN) for spatial feature extraction, Bidirectional Long Short-Term Memory (BiLSTM) for modeling temporal sequences, and an attention mechanism to selectively concentrate on important frames in the gesture sequence. The system analyzes video frames, extracts spatio-temporal features, and classifies them to predict the corresponding word or sentence. Besides text output, the system also offers multimodal feedback in the form of synthesized speech and output visualization, thus improving accessibility and usability. The proposed method is expected to help bridge this communication gap.
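The abstract does not include implementation details, but the described pipeline (per-frame CNN features, BiLSTM over the frame sequence, attention-weighted pooling, then classification) can be sketched as follows. This is a minimal illustrative model in PyTorch, not the authors' implementation; all layer sizes, the class name `SignRecognizer`, and the tiny CNN are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class SignRecognizer(nn.Module):
    """Hypothetical CNN -> BiLSTM -> attention sketch of the described pipeline."""
    def __init__(self, num_classes=10, feat_dim=64, hidden=128):
        super().__init__()
        # Lightweight per-frame CNN standing in for the spatial feature extractor.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # BiLSTM models the temporal sequence of per-frame features.
        self.bilstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        # Attention scores each frame so important frames dominate the pooled vector.
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, video):  # video: (batch, frames, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)  # per-frame features
        seq, _ = self.bilstm(feats)                           # (b, t, 2*hidden)
        weights = torch.softmax(self.attn(seq), dim=1)        # frame importance
        context = (weights * seq).sum(dim=1)                  # attention-weighted pooling
        return self.head(context)                             # word/sentence logits

model = SignRecognizer(num_classes=10)
logits = model(torch.randn(2, 16, 3, 64, 64))  # 2 clips of 16 frames each
print(logits.shape)  # torch.Size([2, 10])
```

In a full system the logits would map to word or sentence labels, which the multimodal stage would then render as text, synthesized speech, and an on-screen visualization.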
Keywords: Indian Sign Language (ISL), CNN, BiLSTM, attention mechanism, video gesture recognition, multimodal assistive system.
How to Cite:
NALINA P, Dr REVATHI A, “MULTI-LEVEL SIGN LANGUAGE RECOGNITION SYSTEM,” International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2026.153143
