Abstract: Sign language is the primary means of communication for people with speech and hearing impairments. The main difficulty with this mode of communication is that people who do not know sign language cannot communicate with signers, and vice versa. This project aims to bridge the gap between speech- and hearing-impaired people and non-signers. The essential idea is to build a system through which speech- and hearing-impaired people can interact with others using their everyday gestures. The system does not require a perfectly black background; it works against any background. An image-processing pipeline identifies English alphabet signs used by deaf people and converts them into text that non-signers can read. The main objective of the project is a vision-based application that translates sign language to text, thereby aiding communication between signers and non-signers. The proposed model takes video sequences of signers, extracts temporal and spatial features from them, and uses a Convolutional Neural Network (CNN) to recognize the spatial features.

Keywords: Machine learning, Computer vision, Sign language, Convolutional Neural Networks (CNN), Sign recognition.
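
The abstract states only that a CNN recognizes the spatial features of each sign; the sketch below illustrates what such a classifier could look like. It is not the authors' implementation: the input size (64x64 grayscale frames), layer widths, and the 26-letter class count are illustrative assumptions, and the model shown classifies single frames rather than full video sequences.

# A minimal sketch (not the authors' implementation) of a CNN that classifies
# single-frame hand-gesture images into the 26 English alphabet signs.
# Input size, layer widths, and class count are assumptions for illustration.
import torch
import torch.nn as nn

class SignCNN(nn.Module):
    def __init__(self, num_classes: int = 26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # spatial feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                  # one score per letter
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: classify a batch of preprocessed 64x64 grayscale frames.
model = SignCNN()
frames = torch.randn(8, 1, 64, 64)        # placeholder for real video frames
letter_scores = model(frames)             # shape: (8, 26)
predicted = letter_scores.argmax(dim=1)   # index of the predicted letter

In a full pipeline along the lines described in the abstract, per-frame predictions of this kind would be combined over time (for example by aggregating frame-level outputs across a video sequence) to capture the temporal features the paper mentions.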


DOI: 10.17148/IJARCCE.2020.9535
