Abstract: Humans share a universal and fundamental set of emotions that are exhibited through consistent facial expressions, speech, and writing or texting. An algorithm that detects, extracts, and evaluates these expressions enables automatic recognition of human emotion in images, voice, and text. Presented here is a hybrid feature extraction and facial expression recognition method that uses Viola-Jones cascade object detectors with Haar cascades to extract faces and facial features, pyttsx3, a text-to-speech conversion library in Python, and text2emotion with a support vector machine to determine emotion from text.
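The following is a minimal Python sketch of the kind of pipeline the abstract names, not the authors' implementation: it assumes OpenCV's bundled frontal-face Haar cascade stands in for the Viola-Jones detector, and the input file name and sample sentence are hypothetical placeholders.

# Minimal sketch of the hybrid pipeline described in the abstract (assumptions noted above).
import cv2
import pyttsx3
import text2emotion as te

# 1. Face detection with a Viola-Jones Haar cascade (OpenCV implementation).
image = cv2.imread("input.jpg")  # hypothetical input image path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("Detected face regions:", faces)

# 2. Emotion scores from text via the text2emotion library.
sample_text = "I am really excited about the results!"  # placeholder text
scores = te.get_emotion(sample_text)  # e.g. {'Happy': ..., 'Sad': ..., 'Angry': ...}
dominant = max(scores, key=scores.get)
print("Text emotion scores:", scores)

# 3. Speak the result aloud with pyttsx3 (text-to-speech).
engine = pyttsx3.init()
engine.say(f"The dominant emotion in the text appears to be {dominant}.")
engine.runAndWait()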
DOI: 10.17148/IJARCCE.2021.10557
[1] Prof. Nitin Dhawas, Sujata Junare, Dhanashree Kulkarni, Durgesh Patil, Utkarsh Bhangale, "Emotional and Mental Analyst-EMA," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), DOI: 10.17148/IJARCCE.2021.10557