IJARCCE, Volume 15, Issue 3, March 2026
This work is licensed under a Creative Commons Attribution 4.0 International License.
Multimodal Emotion-Aware Conversational Chatbot Using Facial Expression and Text Sentiment Fusion
Mrs. K. Tejaswi, Ch. Pushpa Manasa, D. Hima Sravanthi, B. Pravallika, G. Girishma
DOI: 10.17148/IJARCCE.2026.153129
Abstract: Understanding human emotion is a difficult problem for smart, adaptive human-computer interaction systems. This project proposes a multimodal emotion-aware conversational chatbot that recognizes user emotions from facial expressions and from text-based sentiment analysis of both written input and speech converted to text. Facial emotions are extracted with computer vision techniques that capture cues such as eye movement, mouth shape, and eyebrow position to infer the emotional state, while textual emotions are derived from direct text input or speech transcripts using natural language processing (NLP) methods. The emotional information obtained from both modalities is combined by a weighted fusion mechanism to estimate the user's overall emotional state, and on that basis the chatbot generates emotionally appropriate, context-aware responses. The proposed system supports human-centered interaction and shows utility for mental health support. It is built from OpenCV facial expression recognition, a VGG16 convolutional neural network, automatic speech recognition (speech-to-text), VADER sentiment analysis, and weighted multimodal fusion. Experimental results show improved emotion recognition accuracy and response relevance compared to single-modality systems, indicating that multimodal emotion fusion greatly improves the effectiveness and empathy of conversational AI systems.
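The weighted fusion mechanism described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the fixed modality weights, and the emotion labels are all assumptions; the per-emotion probabilities would come from the VGG16 softmax output (facial branch) and from mapping VADER scores onto emotion classes (text branch).

```python
def fuse_emotions(face_probs, text_probs, w_face=0.6, w_text=0.4):
    """Weighted fusion of per-emotion probabilities from two modalities.

    face_probs, text_probs: dicts mapping emotion label -> probability.
    w_face, w_text: modality weights (illustrative values, not from the paper).
    Returns the top fused emotion label and the normalized fused distribution.
    """
    # Weighted sum per emotion; a label missing from the text branch counts as 0.
    fused = {
        emotion: w_face * p_face + w_text * text_probs.get(emotion, 0.0)
        for emotion, p_face in face_probs.items()
    }
    # Renormalize so the fused scores form a probability distribution.
    total = sum(fused.values())
    fused = {emotion: score / total for emotion, score in fused.items()}
    # The chatbot would condition its response on the top fused emotion.
    return max(fused, key=fused.get), fused


# Usage: face model leans "happy", text sentiment leans "sad";
# with a higher facial weight, the fused state stays "happy".
label, dist = fuse_emotions(
    {"happy": 0.7, "sad": 0.2, "angry": 0.1},
    {"happy": 0.2, "sad": 0.6, "angry": 0.2},
)
```

Giving the facial modality the larger weight reflects the common design choice of trusting the visual channel more when both signals are available; in practice the weights could also be tuned on validation data.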
Index Terms: Multimodal Emotion Analysis, Emotion-Aware Chatbot, Facial Expression Recognition, Sentiment Analysis, OpenCV, VGG16, Natural Language Processing, VADER
How to Cite:
[1] Mrs. K. Tejaswi, Ch. Pushpa Manasa, D. Hima Sravanthi, B. Pravallika, G. Girishma, "Multimodal Emotion-Aware Conversational Chatbot Using Facial Expression and Text Sentiment Fusion," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2026.153129
