International Journal of Advanced Research in Computer and Communication Engineering

A monthly peer-reviewed online and print journal

ISSN Online 2278-1021
ISSN Print 2319-5940

Abstract: Communication through voice is one of the main components of affective computing in human-computer interaction. In this type of interaction, correctly comprehending the meaning of the words, or the linguistic category, and recognizing the emotion carried in the speech are essential for enhancing performance [5]. To model the emotional state, the speech waveform is analysed, since it carries cues for emotions such as boredom, fear, joy and sadness; each subject therefore yields distinct speech signals. The most significant features that convey variations in tone fall into the pitch and intensity categories. Eleven features are extracted: pitch, intensity, the first four formants with their bandwidths, and the standard deviation. The proposed method first digitizes the signal in order to extract the required properties. According to studies of emotional prosody, the tone of a person's voice can be characterized by its pitch, loudness (intensity), timbre, speech rate and pauses, whose changes convey different information from the speaker to the listener [6].
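The two core prosodic features named in the abstract, pitch and intensity, can be sketched with standard signal-processing techniques. The snippet below is a minimal illustration, not the authors' actual method: it assumes an autocorrelation-based pitch estimator and an RMS intensity measure, demonstrated on a synthetic 440 Hz tone standing in for a voiced speech frame.

```python
import numpy as np

def estimate_pitch(signal, sr, fmin=50.0, fmax=500.0):
    """Estimate fundamental frequency (Hz) via autocorrelation.

    Searches for the autocorrelation peak within the lag range
    corresponding to [fmin, fmax], a typical span for human pitch.
    """
    signal = signal - np.mean(signal)
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(corr[lo:hi])
    return sr / lag

def rms_intensity_db(signal, ref=1.0):
    """Root-mean-square intensity in decibels relative to `ref`."""
    rms = np.sqrt(np.mean(signal ** 2))
    return 20.0 * np.log10(rms / ref)

# Synthetic voiced frame: 1 s of a 440 Hz tone sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)

print(estimate_pitch(tone, sr))   # close to 440 Hz (quantized to integer lags)
print(rms_intensity_db(tone))     # RMS level of the frame in dB
```

In practice the remaining features (formants and their bandwidths) are usually obtained from linear-predictive-coding analysis of short frames, and all eleven values are concatenated into a feature vector per utterance.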

Keywords: Speaker recognition, vocal emotion recognition, sentiment analysis, emotion prediction, text mining


PDF | DOI: 10.17148/IJARCCE.2018.71134