Abstract: Sign language is a visual language used by deaf people as their mother tongue. Unlike acoustically conveyed sound patterns, sign language relies on body language and manual communication to fluidly convey a person's thoughts. This is achieved by simultaneously combining hand shapes with the orientation and movement of the hands, arms, or body. Many problems arise when people with hearing or speech impairments try to communicate with hearing people, because they communicate mostly in sign language, which hearing people generally do not know.
The objective of the project is to convert speech and text into sign language, using natural language processing, for people with hearing or speech disabilities, and to provide a user-friendly tool that reduces the effort spent on communication. We are developing a system that converts speech/text into sign language animation. The system is built around an automatic speech recognizer: live speech is captured through a microphone and translated to text by a speech recognition engine. The recognized text is then matched word by word against an ASL database containing a set of prerecorded video animation signs, with essentially one video clip per basic word. If a match is found, the equivalent ASL translation is displayed following the Signed English (SE) manual, i.e. in parallel with English word order rather than ASL syntax; otherwise, the word is fingerspelled. Finally, both the recognized text and the ASL translation are displayed concurrently as the final output.
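To make the pipeline concrete, the sketch below illustrates the word-level lookup and fingerspelling fallback described above. It is a minimal illustration under stated assumptions, not the published implementation: the third-party `speech_recognition` package, the `SIGN_VIDEOS` dictionary, and the `play_clip` helper are stand-ins for the ASR engine and the prerecorded ASL clip database.

```python
# Minimal sketch of the speech -> text -> sign-video pipeline described above.
# Assumptions: the `speech_recognition` package is installed, and SIGN_VIDEOS /
# play_clip stand in for the prerecorded ASL clip database and video player.

import speech_recognition as sr

# Hypothetical word-to-clip lookup standing in for the ASL video database.
SIGN_VIDEOS = {
    "hello": "clips/hello.mp4",
    "thank": "clips/thank.mp4",
    "you": "clips/you.mp4",
}

def play_clip(path):
    # Placeholder: a real system would render the video animation here.
    print(f"[playing sign clip] {path}")

def fingerspell(word):
    # Fallback for words not in the database: spell letter by letter.
    for letter in word:
        print(f"[fingerspelling letter] {letter.upper()}")

def recognize_speech():
    # Capture live speech from the microphone and convert it to text.
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)  # any ASR engine could be used

def translate_to_signs(text):
    # Signed English order: translate word by word, keeping English syntax.
    for word in text.lower().split():
        clip = SIGN_VIDEOS.get(word)
        if clip is not None:
            play_clip(clip)
        else:
            fingerspell(word)

if __name__ == "__main__":
    recognized = recognize_speech()
    print("Recognized text:", recognized)  # text and signs shown together
    translate_to_signs(recognized)
```

In a complete system, the print statements would be replaced by concurrent display of the recognized text and the corresponding sign video or fingerspelling animation.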
Keywords: Speech recognition, Speech-to-text, Natural language processing, Sign language
DOI: 10.17148/IJARCCE.2021.101254