Abstract: This paper presents a Music Recommendation System built on the FER-2013 emotion dataset and a facial age-and-gender dataset. Three separate convolutional neural network (CNN) models are trained as classifiers for emotion, gender, and age, with additional layers incorporated during training to improve performance. To predict the user's mood, age, and gender, a snapshot of the user captured through the camera is passed to the trained models. Based on the outputs of these classifiers, playlists drawn from a database are suggested to the user, with the aim of providing a functional and user-friendly environment for music selection. The user can then select a desired playlist and begin listening to the recommended music.
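The final recommendation step described above can be sketched as a lookup from predicted labels to playlists. This is a minimal illustration, not the paper's implementation: in the real system the three labels would come from the trained CNN classifiers applied to the camera snapshot, and the playlist database would be external; here the labels are plain strings and `PLAYLIST_DB` is a hypothetical in-memory table.

```python
# Hypothetical playlist "database" keyed by (emotion, age group).
PLAYLIST_DB = {
    ("happy", "youth"): ["Upbeat Pop", "Dance Hits"],
    ("sad", "youth"): ["Acoustic Chill", "Lo-fi Comfort"],
    ("happy", "adult"): ["Classic Rock", "Feel-Good 80s"],
    ("sad", "adult"): ["Soft Jazz", "Blues Evenings"],
}

def recommend_playlists(emotion: str, age_group: str, gender: str) -> list:
    """Return playlists matching the predicted emotion and age group.

    The gender label could further filter results; this sketch ignores it
    and falls back to a generic mix when no matching entry exists.
    """
    return PLAYLIST_DB.get((emotion, age_group), ["General Mix"])

# Example: labels that the emotion, age, and gender classifiers might emit.
print(recommend_playlists("happy", "adult", "female"))
```

In the full system, each label would be the argmax of the corresponding CNN's softmax output before being used as a key.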

Keywords: Deep Learning, CNN, Emotion, Age, Gender, Music Recommendation System.

DOI: 10.17148/IJARCCE.2023.124144
