Abstract: Sign language is the only way people with speech and hearing impairments can communicate. The main problem with this mode of communication is that people who do not understand sign language cannot communicate with these users, and vice versa. This thesis investigates an algorithm that allows deaf and mute communities to communicate effectively. To that end, this study also aims to extract features from finger and hand motions. This paper presents an American Sign Language system that recognizes 26 hand gestures. The system comprises four modules: pre-processing and hand segmentation, feature extraction, sign recognition, and sign-to-text-and-audio conversion. The project uses image processing to identify English alphabet signs in particular and convert them into text. The proposed model takes image sequences and extracts temporal and spatial features from them. A Convolutional Neural Network (CNN) is then used to recognize the spatial features.

Keywords: Convolutional Neural Networks (CNN), Machine Learning, Sign Recognition, Rectified Linear Unit (ReLU)
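The abstract describes a CNN classifier over segmented hand images but does not report the exact architecture. The following is a minimal sketch of the kind of model implied: a standard convolution-ReLU-pooling stack ending in a 26-way softmax, one class per letter. The input size (64x64 grayscale), layer widths, function name build_asl_cnn, and all hyperparameters are illustrative assumptions, not the authors' reported design.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_asl_cnn(input_shape=(64, 64, 1), num_classes=26):
    # Assumed architecture: two Conv+ReLU blocks extract spatial
    # features from segmented hand images, followed by a dense head.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        # One output unit per alphabet sign (A-Z)
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

Under these assumptions, the model would be trained on labeled hand-gesture images (integer labels 0-25) and its predicted class mapped to a letter for the sign-to-text stage; the sign-to-audio stage would then be a separate text-to-speech step.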


DOI: 10.17148/IJARCCE.2021.105110
