Abstract: The goal of this research is to construct an accurate sign language recognition model by experimenting with several segmentation methodologies and unsupervised learning algorithms. To keep the problem tractable and produce reasonable results, we restricted our self-made dataset to 10 classes/letters rather than all 26 potential letters. Using a Microsoft Kinect, we acquired 12,000 RGB images and their corresponding depth data. An autoencoder was used to extract features from up to half of the data, while the other half was reserved for testing. With the trained model, we attained a classification accuracy of 98 percent on a randomly selected set of test data. In addition to the work on static images, we built a live demo version of the project. Colour and depth segmentation techniques proved the most reliable.
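The depth segmentation mentioned above can be sketched as a simple threshold on the Kinect depth map, keeping only pixels within an assumed hand-distance range. The function name and the range values below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def segment_hand(depth_map, near_mm=400, far_mm=900):
    """Binary mask of pixels whose Kinect depth (in millimetres)
    falls inside the assumed hand range [near_mm, far_mm]."""
    mask = (depth_map >= near_mm) & (depth_map <= far_mm)
    return mask.astype(np.uint8)

# Toy 2x3 depth map (mm): only values in [400, 900] survive.
depth = np.array([[350, 500, 1200],
                  [800, 900, 100]], dtype=np.uint16)
print(segment_hand(depth))  # -> [[0 1 0]
                            #     [1 1 0]]
```

In practice the mask would be applied to the registered RGB image so that only the hand region is passed to the feature extractor.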

Keywords: RGB, Kinect, sign language, accuracy, algorithm, reasonable, dataset.


DOI: 10.17148/IJARCCE.2022.11220
