Abstract: Human–Machine Interface (HMI) design is moving increasingly towards intuitive, contactless, and efficient modes of interaction. This paper presents a gesture-based framework in which humans communicate with machines or online platforms through natural hand gestures. By integrating computer vision with machine learning, the described system can capture, recognize, and interpret gestures in real time without conventional input devices. Each identified gesture is assigned a pre-determined command, making the interface applicable in healthcare, industrial automation, and home automation, where physical contact may be restricted or undesirable. The system prioritizes accuracy and responsiveness to deliver a smooth user experience, and it also offers significant benefits for users with disabilities. Overall, the research presents gesture recognition as a reliable, hygienic, and user-friendly option for next-generation human–machine interaction.
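The abstract outlines a pipeline in which each recognized gesture is mapped to a pre-determined command. A minimal sketch of that dispatch stage is shown below; the gesture labels, command names, and confidence threshold are hypothetical illustrations, not details taken from the paper:

```python
# Illustrative sketch only (not the authors' implementation): map recognized
# gesture labels to pre-determined commands, as the abstract describes.
# All labels, commands, and the 0.8 threshold are hypothetical placeholders.

GESTURE_COMMANDS = {
    "open_palm": "STOP",
    "thumbs_up": "CONFIRM",
    "swipe_left": "PREVIOUS",
    "swipe_right": "NEXT",
}

def dispatch(gesture: str, confidence: float, threshold: float = 0.8):
    """Return the mapped command when the recognizer is confident enough;
    otherwise ignore the frame, trading a little recall for robustness."""
    if confidence < threshold:
        return None  # low-confidence detection: take no action
    return GESTURE_COMMANDS.get(gesture)  # None for unknown gestures

print(dispatch("thumbs_up", 0.93))   # high confidence -> "CONFIRM"
print(dispatch("swipe_left", 0.55))  # below threshold -> None
```

In a full system the `gesture` label and `confidence` would come from a vision-based classifier (e.g. a hand-landmark model); gating on confidence is one common way to keep a touchless interface responsive without triggering commands on noisy detections.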

Keywords: Human–Machine Interfaces (HMI), Intelligent Interface, Computer Vision, Hand Gesture Control, Machine Learning, Real-Time Interaction, Touchless Interface, Image Processing, User-Centered Design, Accessibility Technology, Natural User Interface (NUI), Human–Computer Interaction (HCI), Sensor-Based Interaction, Contactless Control.


DOI: 10.17148/IJARCCE.2025.14836

How to Cite:

[1] Gowthami A., Deepika S., Naveen J., "Advances in Gesture-Driven Human-Machine Interfaces: Recognition Strategies, Challenges and Future Outlook," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2025.14836.
