Abstract: This project introduces an intelligent system that integrates custom-trained large language models (LLMs), RFID-based mode switching, and cloud-based APIs to enable natural, context-aware human-machine interaction on resource-constrained devices. The system operates in three modes—Student Mode, General Mode, and Visitors Mode—each tailored to a specific class of user need: educational support, everyday tasks, and quick information retrieval, respectively. RFID tags allow seamless switching between modes, while cloud APIs offload resource-intensive tasks such as speech-to-text (STT) and text-to-speech (TTS), preserving real-time responsiveness on low-power, microprocessor-based hardware. Applications span education, IoT, healthcare, and customer support, including voice-activated smart devices, interactive kiosks, and accessibility tools. By combining affordability, scalability, and modern AI capabilities, the project bridges the gap between cutting-edge technology and practical, real-world deployments, making AI-driven systems more accessible across industries.
Keywords: LLM, RFID, cloud APIs, STT, TTS, IoT, education, healthcare, accessibility, low-power hardware.
DOI: 10.17148/IJARCCE.2025.14148
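The mode-switching idea summarized in the abstract can be sketched in a few lines: a scanned RFID tag UID selects one of the three modes, and each mode supplies a different prompt context before the utterance is sent to the LLM. This is a minimal illustrative sketch only; the UIDs, mode names as dictionary keys, and function names below are assumptions, not the paper's actual implementation.

```python
"""Sketch of RFID-tag-to-mode dispatch (hypothetical UIDs and handlers)."""

from typing import Dict

# Hypothetical mapping from RFID tag UIDs to the three modes from the abstract.
RFID_MODE_MAP: Dict[str, str] = {
    "04:A1:B2:C3": "Student",
    "04:D4:E5:F6": "General",
    "04:1A:2B:3C": "Visitors",
}


def resolve_mode(uid: str, default: str = "General") -> str:
    """Return the mode for a scanned tag UID; unknown tags fall back to General."""
    return RFID_MODE_MAP.get(uid.upper(), default)


def build_llm_request(mode: str, transcript: str) -> str:
    """Prepend a mode-specific context to the STT transcript (stub).

    In the full system this string would be sent to the cloud-hosted LLM,
    and the reply passed to a TTS API for playback on the device.
    """
    contexts = {
        "Student": "You are a tutoring assistant for students. ",
        "General": "You are a general-purpose voice assistant. ",
        "Visitors": "Give a brief answer suitable for a visitor. ",
    }
    return contexts[mode] + transcript
```

Keeping the mode table as plain data (rather than branching logic) makes it easy to enroll new tags or add modes without touching the dispatch code, which suits a low-power device that should stay simple.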