Abstract: Sign language recognition (SLR) seeks to translate sign language into text or speech, facilitating communication between deaf and mute people and the hearing population. Although this task has significant social impact, it remains challenging because of the complexity and wide variety of hand gestures. Existing SLR methods rely on classification models built on hand-crafted features to recognize signing motions. However, it is difficult to design robust features that generalize across the wide range of hand movements. To address this problem, we propose a dedicated 3D convolutional neural network (CNN).
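The core idea behind the 3D CNN mentioned above is that its filters slide jointly over time and space, so a single kernel can respond to motion patterns across video frames rather than to a static pose. The following is a minimal NumPy sketch of a single 3D convolution (cross-correlation, as used in CNN layers) applied to a short video clip; the shapes and variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def conv3d(volume, kernel):
    """Valid-mode 3D cross-correlation of a (frames, height, width)
    video volume with a single 3D kernel."""
    t, h, w = kernel.shape
    T, H, W = volume.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                # Each output value summarizes a small spatio-temporal patch,
                # capturing motion as well as appearance.
                out[i, j, k] = np.sum(volume[i:i+t, j:j+h, k:k+w] * kernel)
    return out

clip = np.random.rand(16, 32, 32)   # hypothetical clip: 16 frames of 32x32 pixels
kernel = np.random.rand(3, 3, 3)    # one 3x3x3 spatio-temporal filter
features = conv3d(clip, kernel)
print(features.shape)               # (14, 30, 30)
```

A full 3D CNN stacks many such filters with nonlinearities and pooling; in a deep-learning framework the same operation is a single library call, but the sliding-window arithmetic above is what distinguishes 3D from 2D convolution.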


Downloads: PDF | DOI: 10.17148/IJARCCE.2024.131114

How to Cite:

[1] Adhya K S, Chinthan M M, Goutam Varma, Puneeth Kumar K N, Mr. Shivaraj B G, "An AI-Powered Companion for Deaf and Mute Communication," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2024.131114
