Abstract: Sign language is a key means of communication for individuals with hearing impairments, but obstacles often arise in conversations between sign language users and the general population. While most people can freely exchange ideas, feelings, and experiences through speech, this is not the case for those who are deaf or mute; sign language enables them to communicate without relying on vocal sounds. The goal of this project is to create a system that recognizes sign language, facilitating communication between individuals with speech impairments and those without and thus bridging the communication divide between them. Compared with other gestures, such as those made with the arms, face, head, and body, hand gestures are particularly important because they convey a person's thoughts most quickly. The system employs machine learning and computer vision techniques to detect and interpret the hand gestures used in sign language, and the resulting Sign Language Translator converts these gestures into text or speech in real time, closing the communication gap.

Keywords: hand gestures, machine learning, hearing impairments, deaf.
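
As an illustrative sketch only, since the abstract does not specify the implementation, the pipeline described above (webcam capture, hand detection, gesture classification, text output) could look like the following Python. MediaPipe Hands for landmark extraction and the pre-trained classifier file gesture_model.pkl are assumptions, not details from the paper.

    # Sketch of a recognition pipeline like the one the abstract describes:
    # webcam frames -> hand-landmark extraction -> gesture classification.
    # MediaPipe Hands and "gesture_model.pkl" are assumed, not from the paper.
    import pickle

    import cv2
    import mediapipe as mp
    import numpy as np

    # Hypothetical pre-trained classifier mapping 21 (x, y, z) hand
    # landmarks (63 features) to sign labels such as "A", "B", "HELLO".
    with open("gesture_model.pkl", "rb") as f:
        classifier = pickle.load(f)

    cap = cv2.VideoCapture(0)
    with mp.solutions.hands.Hands(max_num_hands=1,
                                  min_detection_confidence=0.7) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures BGR frames.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                lm = results.multi_hand_landmarks[0].landmark
                features = np.array([[p.x, p.y, p.z] for p in lm]).flatten()
                label = str(classifier.predict([features])[0])
                # Overlay the recognized sign as text on the video feed.
                cv2.putText(frame, label, (10, 40),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
            cv2.imshow("Sign Language Translator", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()

In such a design the landmark-based features make the classifier largely invariant to lighting and background, and the predicted text could be passed to a text-to-speech engine to produce the speech output the abstract mentions.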


DOI: 10.17148/IJARCCE.2025.14226
