Abstract: Effective communication between the deaf community and the hearing population remains a significant challenge due to the limited prevalence of sign language proficiency. This project introduces a multi-sensor fusion-based gesture recognition system aimed at enhancing interaction for deaf individuals. The system integrates data from multiple sensors, such as flex sensors and electromyography (EMG) sensors, to accurately capture and interpret hand gestures. By employing advanced machine learning algorithms, including deep learning models, the system translates complex sign language gestures into textual or auditory output in real time. This approach not only improves recognition accuracy but also ensures robustness against environmental variations, offering a reliable solution for seamless communication. The proposed system holds promise for assistive-technology applications, facilitating better integration of deaf individuals into diverse social and professional settings.

Keywords: Multi-sensor fusion, gesture recognition, sign language interpretation, deaf communication.
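To make the fusion approach concrete, the following is a minimal sketch of a late-fusion gesture classifier in PyTorch: each sensor modality (flex readings and EMG features) is encoded separately, and the encodings are concatenated before a shared classification head. The channel counts, layer sizes, and gesture-class count are illustrative assumptions; the abstract does not specify the network architecture.

```python
import torch
import torch.nn as nn

class FusionGestureNet(nn.Module):
    """Late-fusion classifier: separate encoders per sensor modality,
    concatenated before a shared classification head.
    All dimensions below are assumed for illustration."""

    def __init__(self, n_flex=5, n_emg=8, n_gestures=26):
        super().__init__()
        # Encoder for flex-sensor bend values (e.g., one sensor per finger).
        self.flex_enc = nn.Sequential(nn.Linear(n_flex, 32), nn.ReLU())
        # Encoder for windowed EMG features (e.g., RMS per electrode).
        self.emg_enc = nn.Sequential(nn.Linear(n_emg, 32), nn.ReLU())
        # Shared head maps the fused representation to gesture classes.
        self.head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, n_gestures)
        )

    def forward(self, flex, emg):
        # Concatenate the two modality encodings (the fusion step).
        fused = torch.cat([self.flex_enc(flex), self.emg_enc(emg)], dim=-1)
        return self.head(fused)

if __name__ == "__main__":
    model = FusionGestureNet()
    flex = torch.rand(1, 5)  # normalized flex-sensor readings
    emg = torch.rand(1, 8)   # normalized EMG feature vector
    logits = model(flex, emg)
    print("predicted gesture id:", logits.argmax(dim=-1).item())
```

The predicted gesture id would then be mapped to a text label or passed to a text-to-speech engine for the auditory output the system describes.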


DOI: 10.17148/IJARCCE.2025.1411131

How to Cite:

[1] Prof. Niveditha B S, Kiran Ishwar Kuslapur, Goutham Chand, Himavanth B R, Aditya S Kalsagond, "Multi-Sensor Fusion Based Gesture Recognition for Enhanced Deaf Interaction," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), 2025, DOI: 10.17148/IJARCCE.2025.1411131.
