Abstract:

This project develops an emotion-based music recommender system using computer vision and machine learning techniques. The system captures real-time facial expressions from a webcam feed and analyzes them to infer the user's emotional state. Emotion detection combines facial landmark detection with deep learning-based classification. During data collection, facial landmark coordinates and hand gestures are recorded to build a dataset representing a range of emotions; this dataset is then used to train a deep learning model that accurately predicts emotions from facial features.
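The pipeline described above ends in a recommendation step: a predicted emotion label selects music for the user. A minimal sketch of that final step is shown below, assuming landmarks arrive as (x, y, z) tuples in a MediaPipe-style face mesh and using a hypothetical emotion-to-playlist mapping (the playlist names and function names are illustrative, not taken from the source).

```python
# Hypothetical emotion -> playlist mapping (an assumption for illustration;
# the actual system's music catalog is not specified in the abstract).
PLAYLISTS = {
    "happy": ["Upbeat Pop", "Dance Hits"],
    "sad": ["Acoustic Ballads", "Mellow Piano"],
    "angry": ["Hard Rock", "Metal Mix"],
    "neutral": ["Lo-fi Focus", "Ambient Chill"],
}

def flatten_landmarks(landmarks):
    """Flatten a list of (x, y, z) landmark tuples into one feature
    vector -- the flat input format a classifier would consume."""
    return [coord for point in landmarks for coord in point]

def recommend(emotion):
    """Return playlists for a detected emotion, falling back to neutral."""
    return PLAYLISTS.get(emotion, PLAYLISTS["neutral"])

# Example: a 468-point face mesh flattens to a 1404-dimensional vector.
features = flatten_landmarks([(0.1, 0.2, 0.0)] * 468)
print(len(features))       # 1404
print(recommend("happy"))  # ['Upbeat Pop', 'Dance Hits']
```

In the full system, the `emotion` argument would be the argmax of the trained model's output over the flattened landmark features rather than a hard-coded string.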