Abstract: This research paper introduces a novel Android application designed to help visually impaired individuals recognize and sum Indian currency notes. The application leverages a Convolutional Neural Network (CNN) based on the MobileNet architecture, trained on a comprehensive dataset of images of 10, 20, 50, 100, 200, 500, and 2000 Indian Rupee notes. Its primary functionality is built around swipe-based interactions that allow users to perform currency summation, trigger text-to-speech conversion, and initiate the currency identification process. The development process involved meticulous data preprocessing, model training, and optimization to accommodate the constraints of mobile devices. The challenges of real-time currency recognition are addressed through features such as prompt audio feedback announcing the identified denomination. Additionally, the user interface incorporates accessibility features, including screen reader compatibility and voice command support, ensuring a seamless and intuitive experience for individuals with visual impairments.
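The abstract describes fine-tuning MobileNet on the seven denomination classes and optimizing the model for mobile devices. The following is a minimal sketch of how such a training and mobile-conversion pipeline could look in TensorFlow/Keras; the dataset path, image size, hyperparameters, and the use of TensorFlow Lite quantization are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: fine-tune MobileNet on seven rupee-note classes and
# convert the result to TensorFlow Lite for on-device (Android) inference.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
NUM_CLASSES = 7  # 10, 20, 50, 100, 200, 500, 2000 rupee notes

# Assumed directory layout: one sub-folder per denomination.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "currency_dataset/train", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained backbone

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects inputs in [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)

# Convert to TensorFlow Lite so the classifier can run inside the Android app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
open("currency_mobilenet.tflite", "wb").write(converter.convert())
```

On the device, the exported `.tflite` model would be loaded by the Android application, which would then pass camera frames to it and speak the predicted denomination via text-to-speech, as outlined in the abstract.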

Keywords: Currency Recognition; Visually Impaired; Convolutional Neural Network; MobileNet; Android Application; Text-to-Speech.


DOI: 10.17148/IJARCCE.2024.13854
