Abstract: Around the world, thousands of individuals with hearing loss communicate daily using sign languages, which exist in many regional variations. Automated translation of sign language is therefore seen as essential for strengthening communication and inclusion for this group, but it remains a challenging research subject for several reasons. One of the key obstacles is that regional variation in sign languages makes a single, consistent system impractical. Nevertheless, sign language recognition technology holds promise for improving services for the deaf community by bridging communication gaps and enhancing the general well-being of society. The goal of the "Real-Time Hand Gesture Detection for Sign Language Recognition using Python" project is to create a system that instantly translates sign language gestures into text. The system tracks and detects hand movements using computer vision techniques, and then classifies the gestures using machine learning algorithms. The proposed system can detect the classes "Ok", "Open Hand", "Peace", "Thumbs Up", "Thumbs Down", and the alphabet A-Z. The project's computer vision tasks are carried out with Python and the OpenCV library. The proposed system comprises two distinct sections: the first uses the Xception architecture model to recognize hand gesture images and predict outcomes, while the second uses OpenCV to perform real-time detection with a webcam. With the Xception architecture model, the proposed solution achieved 90.34% training accuracy and 90.00% validation accuracy. The hand gesture recognition model is trained on a dataset of hand gestures recorded with a webcam. The finished system includes an intuitive user interface that lets people who do not use sign language communicate with those who do. It may enhance inclusivity and communication for individuals with speech or hearing impairments in a variety of contexts, including public areas, workplaces, and classrooms.
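The sketch below illustrates the real-time section described above: a webcam loop that feeds frames to a Keras classifier built on the Xception backbone. It is a minimal example, not the authors' implementation; the model file name ("gesture_xception.h5") and the label ordering are hypothetical placeholders.

import cv2
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.xception import preprocess_input

# Hypothetical class list: five named gestures plus the alphabet A-Z,
# matching the classes reported in the abstract.
LABELS = ["Ok", "Open Hand", "Peace", "Thumbs Up", "Thumbs Down"] + \
         [chr(c) for c in range(ord("A"), ord("Z") + 1)]

# Hypothetical path to a trained Xception-based gesture classifier.
model = load_model("gesture_xception.h5")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Xception expects 299x299 RGB input scaled to [-1, 1].
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    x = cv2.resize(rgb, (299, 299)).astype("float32")
    x = preprocess_input(x)[np.newaxis, ...]
    probs = model.predict(x, verbose=0)[0]
    label = LABELS[int(np.argmax(probs))]
    # Overlay the predicted gesture on the live video feed.
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2)
    cv2.imshow("Sign Language Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()

In practice one would crop the frame to a detected hand region before classification rather than passing the whole frame, but the loop above captures the capture-preprocess-predict-display structure the abstract describes.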

Keywords: Hand Gesture, Segmentation, Random Forest, OpenCV, TensorFlow, Keras, CNN, Deep Learning, Scikit-learn.


DOI: 10.17148/IJARCCE.2024.134108
