International Journal of Advanced Research in Computer and Communication Engineering, a monthly peer-reviewed and refereed journal
ISSN Online: 2278-1021 | ISSN Print: 2319-5940 | Since 2012
Volume 13, Issue 6, June 2024

EMOSOUND: An Emotion-Based Music Recommendation System

Anaha K Madhu, Sana Vallippokkil, Sreelakshmi KS, Sonu Sojan, Prof. Arya TJ

DOI: 10.17148/IJARCCE.2024.13678

Abstract: Navigating extensive libraries is a significant challenge in digital music consumption. To address it, this project introduces EMOSOUND, an emotion-based music recommendation system that combines a Convolutional Neural Network (CNN) with the Haar cascade algorithm to tailor recommendations to the user's emotional state and preferences. The Haar cascade provides efficient face detection for seamless user interaction, while the CNN analyzes the detected facial expression, discerning subtle emotional cues and improving the precision of the recommendations. By capturing and interpreting the user's emotional state, the system alleviates the stress of choosing what to listen to and lets users effortlessly discover music that matches their mood. A feedback mechanism continuously refines the recommendation algorithm, further improving its accuracy and effectiveness. In summary, EMOSOUND combines CNN-based emotion recognition with Haar cascade face detection in a user-centric system aimed at personalizing and elevating the music listening experience.
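The pipeline the abstract describes (face detection, emotion classification, mood-matched recommendation) can be sketched in plain Python. This is a minimal illustration only: the detector and classifier below are stubs, and the `EMOTION_PLAYLISTS` mapping is a hypothetical example. A real implementation would replace them with OpenCV's `cv2.CascadeClassifier` (Haar cascade) for `detect_face` and a trained CNN for `classify_emotion`; the paper itself does not publish this code.

```python
# Sketch of the EMOSOUND recommendation flow (stubs, not the authors' code).
# In a real system: detect_face -> cv2.CascadeClassifier on a webcam frame,
# classify_emotion -> a CNN trained on labeled facial-expression images.

# Hypothetical emotion-to-playlist mapping for illustration.
EMOTION_PLAYLISTS = {
    "happy":   ["upbeat pop", "dance"],
    "sad":     ["acoustic ballads", "soft piano"],
    "angry":   ["calming instrumental", "classical"],
    "neutral": ["lo-fi", "ambient"],
}

def detect_face(frame):
    """Stub for Haar-cascade face detection: return the face crop or None.

    Here `frame` is a dict standing in for an image; a real version would
    run cascade.detectMultiScale on a grayscale frame.
    """
    return frame.get("face")

def classify_emotion(face):
    """Stub for CNN emotion classification over the detected face crop."""
    return face.get("emotion", "neutral")

def recommend(frame):
    """Full pipeline: no face detected -> no recommendation."""
    face = detect_face(frame)
    if face is None:
        return []
    emotion = classify_emotion(face)
    return EMOTION_PLAYLISTS.get(emotion, EMOTION_PLAYLISTS["neutral"])
```

The feedback mechanism mentioned in the abstract would then adjust the emotion-to-playlist mapping (or re-rank tracks within it) based on which recommendations the user accepts or skips.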

How to Cite:

[1] Anaha K Madhu, Sana Vallippokkil, Sreelakshmi KS, Sonu Sojan, and Prof. Arya TJ, "EMOSOUND: An Emotion-Based Music Recommendation System," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), vol. 13, issue 6, June 2024. DOI: 10.17148/IJARCCE.2024.13678