Abstract: This project develops a communication system that bridges the gap between deaf and hearing individuals by addressing both verbal and visual communication barriers. The system is implemented in two phases. The first phase focuses on converting audio messages into Indian Sign Language (ISL): audio input, either live or pre-recorded, is transcribed into text using speech recognition, and the resulting text is mapped to predefined ISL images or GIFs, making spoken language accessible to the deaf community through visual sign representations.
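The text-to-ISL mapping described above can be sketched as a simple lookup: recognized phrases with a dedicated GIF are shown directly, and anything else falls back to fingerspelling with per-letter sign images. The dictionary entries and file paths below are hypothetical placeholders, and the speech-recognition stage is assumed to have already produced a transcript.

```python
# Hypothetical lookup table: phrases/words that have a dedicated ISL GIF.
ISL_GIFS = {
    "hello": "gifs/hello.gif",
    "thank you": "gifs/thank_you.gif",
    "please": "gifs/please.gif",
}

def transcript_to_isl(transcript):
    """Map a recognized transcript to a sequence of ISL visuals.

    Whole phrases with a known GIF are shown as GIFs; any other word is
    fingerspelled letter by letter using per-letter sign images.
    """
    text = transcript.lower().strip()
    if text in ISL_GIFS:  # whole-phrase match first
        return [ISL_GIFS[text]]
    visuals = []
    for word in text.split():
        if word in ISL_GIFS:
            visuals.append(ISL_GIFS[word])
        else:
            # Fall back to fingerspelling with per-letter images.
            visuals.extend(f"letters/{ch}.jpg" for ch in word if ch.isalpha())
    return visuals

print(transcript_to_isl("hello world"))
```

A real system would display these images or GIFs in sequence; here the function simply returns their paths.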
The second phase enhances the system's ability to interpret visual information for the deaf. Collected images are used to train a Multilayer Perceptron (MLP) model, which achieves 90% accuracy in recognizing and interpreting them. The model converts recognized images into corresponding text or speech output, allowing deaf individuals to understand visual cues through textual or spoken descriptions. This dual-phase approach not only facilitates communication between deaf and hearing individuals but also enhances the deaf community's interaction with its environment.
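The phase-2 classifier can be illustrated with a minimal one-hidden-layer MLP trained by gradient descent on flattened feature vectors. Everything here is a sketch under stated assumptions: the data is synthetic, the 16-feature inputs stand in for flattened image pixels, and the architecture and hyperparameters are illustrative, not the paper's actual configuration (which reports ~90% accuracy on collected images).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": 100 samples of 16 features, two classes separated by the
# sign of the feature sum (stand-in for flattened image pixels).
X = rng.normal(size=(100, 16))
y = (X.sum(axis=1) > 0).astype(float).reshape(-1, 1)

# Initialize a 16 -> 8 -> 1 network.
W1 = rng.normal(scale=0.5, size=(16, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)               # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output probability
    return h, p

def loss(p, y):
    # Binary cross-entropy with a small epsilon for numerical safety.
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

lr = 0.1
_, p0 = forward(X)
initial_loss = loss(p0, y)

for _ in range(300):
    h, p = forward(X)
    # Backpropagation for sigmoid + binary cross-entropy.
    grad_out = (p - y) / len(X)
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)  # tanh derivative
    W2 -= lr * h.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

_, p = forward(X)
final_loss = loss(p, y)
accuracy = np.mean((p > 0.5) == y)
print(f"loss {initial_loss:.3f} -> {final_loss:.3f}, accuracy {accuracy:.2f}")
```

In the described system, the class predicted by such a model would then be rendered as text or passed to a text-to-speech engine.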
DOI: 10.17148/IJARCCE.2024.13848