Abstract: Communication through voice is one of the main components of affective computing in human-computer interaction. In this type of interaction, correctly comprehending the meaning of the words, or their linguistic category, and recognizing the emotion conveyed in the speech are essential for enhancing performance. To model the emotional state, speech waveforms are used, as they carry signals corresponding to emotions such as boredom, fear, joy and sadness; different speech signals can therefore be obtained for each subject. The most significant features that convey variations in tone fall into the pitch and intensity categories. Eleven features are extracted: pitch, intensity, the first four formants with their bandwidths, and the standard deviation. The proposed method first digitizes the signal to extract the required properties. According to studies of emotional prosody, the tone of every person's voice can be characterized by its pitch, loudness or intensity, timbre, speech rate and pauses, whose changes convey different information from the speaker to the listener.
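The abstract does not specify the extraction toolchain, but two of the eleven features it names, pitch and intensity, can be sketched for a digitized frame using a simple autocorrelation pitch detector and RMS energy. The function names, sample rate, and search range below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=50.0, fmax=500.0):
    """Estimate pitch (Hz) of a voiced frame by autocorrelation peak picking.

    fmin/fmax bound the candidate lags (illustrative defaults for speech).
    """
    lags = np.arange(int(sr / fmax), int(sr / fmin) + 1)
    # Autocorrelation of the frame at each candidate lag
    ac = np.array([np.dot(frame[:-lag], frame[lag:]) for lag in lags])
    return sr / lags[np.argmax(ac)]

def intensity_rms(frame):
    """Root-mean-square amplitude, a simple proxy for intensity/loudness."""
    return float(np.sqrt(np.mean(frame ** 2)))

# Synthetic 440 Hz tone standing in for a digitized voiced speech frame
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
frame = 0.5 * np.sin(2 * np.pi * 440.0 * t)
```

On this synthetic frame the detector recovers a pitch within a few Hz of 440 and an RMS of about 0.354 (amplitude divided by the square root of 2); real speech would additionally require framing, windowing, and voicing detection.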
Keywords: Speaker recognition, vocal emotion recognition, sentiment analysis, emotion prediction, text mining
| DOI: 10.17148/IJARCCE.2018.71134