Abstract: A large amount of data is now available on social networking sites, product review sites, blogs, forums, etc., and this data carries expressed opinions and sentiments. Whether the data comes from the Internet or from an enterprise resource planning system, it is characterized by volume, variety, and velocity, and a sentiment analysis system must ingest and analyze it. Because of the large volume of opinion-rich web resources such as discussion forums, review sites, blogs, and news corpora available in digital form, much current research focuses on sentiment analysis and opinion mining. The expression of a sentiment is a mixture of text, prosody, facial expression, body posture, etc., so text input alone cannot fully represent it. A multimodal system therefore uses a combination of input modes, e.g. text and audio, text and video, or all three. This paper analyses the techniques used for sentiment data and demonstrates how each individual model works; sentiments are extracted from the different input modes by different classification techniques. The proposed method first digitizes the signal to extract the required properties. According to studies of emotional prosody, the tone of a person's voice can be characterized by its pitch, loudness or intensity, timbre, speech rate, and pauses, whose changes convey different information from the speaker to the listener.

Keywords: Sentiments, Emotion, Video, Audio
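
The prosodic cues named in the abstract (pitch, loudness, speech rate, pauses) are typically computed from the digitized waveform. The sketch below is a minimal illustration of such feature extraction using the librosa library, not the authors' implementation; the input file name, silence threshold, and exact feature set are assumptions.

```python
# Minimal sketch of prosodic feature extraction from a digitized speech signal.
# Assumptions: librosa is available and "utterance.wav" is a hypothetical input file;
# the paper does not specify its toolchain, so this is illustrative only.
import numpy as np
import librosa

def prosodic_features(path, top_db=30):
    y, sr = librosa.load(path, sr=None)  # load the digitized waveform at native rate

    # Pitch contour via probabilistic YIN; unvoiced frames come back as NaN
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    pitch_mean = float(np.nanmean(f0)) if np.any(voiced) else 0.0

    # Loudness / intensity approximated by frame-wise RMS energy
    rms = librosa.feature.rms(y=y)[0]

    # Pauses approximated from silent gaps between non-silent intervals
    intervals = librosa.effects.split(y, top_db=top_db)
    speech_dur = sum((end - start) for start, end in intervals) / sr
    total_dur = len(y) / sr
    pause_ratio = 1.0 - speech_dur / total_dur if total_dur > 0 else 0.0

    return {
        "pitch_mean_hz": pitch_mean,
        "intensity_mean": float(rms.mean()),
        "intensity_var": float(rms.var()),
        "pause_ratio": pause_ratio,
    }

if __name__ == "__main__":
    print(prosodic_features("utterance.wav"))  # hypothetical example file
```

Features of this kind can then be fed, alongside text and video features, to the classification techniques the paper surveys.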


DOI: 10.17148/IJARCCE.2019.8429
