Abstract: Communication barriers significantly hinder interaction between the deaf community and the wider world. This paper investigates an automatic system for Indian Sign Language (ISL) detection built on MobileNetV2, a lightweight convolutional neural network architecture known for its efficiency. We apply transfer learning from pre-trained MobileNetV2 weights to extract features from ISL images. To improve performance on ISL detection, the network incorporates linear bottleneck layers and squeeze-and-excitation blocks, while depthwise separable convolutions reduce computational complexity without sacrificing accuracy. The optimized architecture is then fine-tuned on a prepared ISL dataset for robust sign recognition. While limitations remain, this research paves the way for advances in communication accessibility for the deaf community.
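The two architectural ingredients named in the abstract can be illustrated concretely. Below is a hypothetical NumPy sketch (not the authors' code, and all function names are our own) of a squeeze-and-excitation block, together with the parameter-count arithmetic that motivates depthwise separable convolutions.

```python
import numpy as np

def squeeze_excite(x, w1, b1, w2, b2):
    """Reweight the channels of a (H, W, C) feature map.

    Illustrative only: weight shapes are (C, C//r) and (C//r, C)
    for some reduction ratio r, as in the original SE-Net design.
    """
    z = x.mean(axis=(0, 1))                   # squeeze: global average pool -> (C,)
    s = np.maximum(z @ w1 + b1, 0.0)          # excitation: FC bottleneck + ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2 + b2)))  # FC back to C channels + sigmoid
    return x * s                              # scale: per-channel reweighting

def conv_params(k, c_in, c_out):
    """Weight counts of a standard vs. depthwise-separable k x k convolution."""
    standard = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out   # depthwise pass + 1x1 pointwise pass
    return standard, separable
```

For a 3x3 convolution mapping 32 to 64 channels, `conv_params(3, 32, 64)` gives 18432 standard weights versus 2336 separable ones, roughly an 8x reduction, which is the efficiency gain the abstract alludes to.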

Keywords: Indian Sign Language (ISL), Sign Language Detection, Deep Learning, MobileNetV2, Transfer Learning, Linear Bottleneck Layers, Squeeze-and-Excitation Block, Communication Accessibility, Deaf Community.

Cite:
Ashlesh Shenoy, Shawn Castelino, Shetty Sushank Mohandas, Vaibhav Nayak, Ms. Suma K, "Realtime conversation system for people with hearing and speech impairments", IJARCCE International Journal of Advanced Research in Computer and Communication Engineering, vol. 13, no. 3, 2024, Crossref https://doi.org/10.17148/IJARCCE.2024.133110.

