
A Novel Music-Based Emotion Profiling System
Synopsis
A music-based emotion profiling system creates personalised emotional profiles from music stimuli, electroencephalogram (EEG) recordings and self-assessments. It employs a dual-branch deep learning model to build accurate profiles that support emotion regulation and the potential diagnosis of mental disorders, enhancing therapy effectiveness.
Opportunity
Our music-based emotion profiling system is designed to deliver personalised responses to emotional stimuli based on non-invasive brain activity recordings. This technology has various applications, ranging from emotion regulation to the diagnosis of specific mental disorders. Essentially, it functions as a pre-intervention profiling tool, automatically generating a personalised response pattern for each subject. This innovation allows therapists to select stimuli tailored to each patient's unique emotion profile, thereby enhancing the therapy's effectiveness.
Furthermore, the emotion profiling system can differentiate between healthy individuals and those with mental disorders, offering potential diagnostic insights. In contrast to the traditional self-assessment method, which relies solely on questionnaires, our system leverages deep learning to model emotion profiles from both the continuous labels of the music stimuli and the subject's self-assessments. The process involves presenting the subject with music samples corresponding to different emotion types and intensities, then collecting their emotional responses. A dual-branch profiling model combines the continuous labels and self-assessments to construct a comprehensive emotion profile. Moreover, the system is user-friendly and runs on PCs or mobile devices, making it easily accessible to therapists and subjects alike.
Technology
Our music-based emotion profiling system constructs personalised emotion profiles through a three-step process. First, the subject sits in a comfortable, relaxing environment and listens to music clips of the desired emotional intensity while their EEG signals are recorded, as sketched below.
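The brief does not specify the acquisition pipeline; the following NumPy sketch only illustrates the idea of this first step, slicing a continuous EEG recording into one epoch per music clip. The sampling rate, channel count and clip timings are illustrative assumptions.

```python
# Minimal sketch of step 1: align a continuous EEG recording with the
# music clips by slicing one epoch per clip. FS, N_CHANNELS and the
# clip timings are illustrative assumptions, not system specifications.
import numpy as np

FS = 250            # assumed EEG sampling rate (Hz)
N_CHANNELS = 32     # assumed electrode count
CLIP_SECONDS = 60   # assumed duration of each music clip

def epoch_eeg(eeg: np.ndarray, clip_onsets_s: list) -> np.ndarray:
    """Slice a (channels, samples) recording into (clips, channels, samples)."""
    n = int(CLIP_SECONDS * FS)
    starts = [int(t * FS) for t in clip_onsets_s]
    return np.stack([eeg[:, s:s + n] for s in starts])

# Example: a 10-minute mock recording with a clip starting every 90 s.
recording = np.random.randn(N_CHANNELS, 10 * 60 * FS)
epochs = epoch_eeg(recording, clip_onsets_s=[0, 90, 180, 270, 360])
print(epochs.shape)  # (5, 32, 15000)
```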
Second, after each music clip, the subject reports their emotional experience in the form of self-ratings. The crux of the operation lies in the dual-branch emotion profiling block, which uses the EEG signals, the emotional-intensity variations of the music clips and the self-assessments. A deep learning model, previously trained on a substantial emotional dataset, serves as the base learner. It comprises two parts: a feature learner (FL), which extracts features from the input, and a class predictor (CP), which predicts emotions. To tailor the model to the subject, a personalised class predictor (PCP) is introduced; it decodes the self-ratings from the embeddings produced by the base learner's FL. The PCP is then trained, and the FL fine-tuned, on the freshly gathered EEG data, with n-fold cross-validation used to verify the model's accuracy. A sketch of this block is given below.
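As a concrete illustration, here is a hedged PyTorch sketch of the dual-branch block. The backbone, layer sizes, feature dimensionality, number of emotion classes and rating levels are all placeholder assumptions, since the brief does not specify the architecture; because only the PCP is trained and the FL fine-tuned during personalisation, the sketch freezes the base CP.

```python
# Hedged PyTorch sketch of the dual-branch emotion profiling block (DPB).
# Layer sizes, FEAT_DIM, N_EMOTIONS and N_RATING_LEVELS are illustrative
# assumptions; the brief does not specify the architecture.
import torch
import torch.nn as nn

FEAT_DIM, EMB_DIM = 512, 128     # assumed pre-extracted EEG feature size
N_EMOTIONS, N_RATING_LEVELS = 4, 9

class DualBranchModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Feature learner (FL): pre-trained backbone, fine-tuned per subject.
        self.fl = nn.Sequential(nn.Linear(FEAT_DIM, 256), nn.ReLU(),
                                nn.Linear(256, EMB_DIM))
        # Class predictor (CP): trained on the large emotional dataset.
        self.cp = nn.Linear(EMB_DIM, N_EMOTIONS)
        # Personalised class predictor (PCP): decodes the subject's
        # self-ratings from the FL embeddings.
        self.pcp = nn.Linear(EMB_DIM, N_RATING_LEVELS)

    def forward(self, x):
        z = self.fl(x)
        return self.cp(z), self.pcp(z)

model = DualBranchModel()
for p in model.cp.parameters():      # base CP stays frozen (assumption)
    p.requires_grad = False
optimiser = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

# One personalisation step on freshly gathered EEG and self-ratings.
x = torch.randn(8, FEAT_DIM)                       # mock EEG features
ratings = torch.randint(0, N_RATING_LEVELS, (8,))  # mock self-ratings
_, logits_pcp = model(x)
loss = nn.functional.cross_entropy(logits_pcp, ratings)
loss.backward()
optimiser.step()
```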
During evaluation, the CP and PCP work together to generate the final prediction of the emotional response to each music clip. Combining these predictions with the emotional intensity of the music clips yields the personalised emotion profile, as in the output sketch below.
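Continuing the sketch above, one plausible assembly of the profile is shown below. The exact fusion rule between the CP and PCP is not specified in this brief, so the sketch simply records both heads' predictions alongside each clip's stimulus intensity.

```python
# Sketch of the output step, reusing the DualBranchModel and FEAT_DIM from
# the previous sketch: record, for each clip, the CP's predicted emotion,
# the PCP's predicted self-rating and the clip's stimulus intensity.
# Listing both predictions side by side is an assumption, not the system's
# documented fusion rule.
import torch

@torch.no_grad()
def build_profile(model, features, clip_intensities):
    model.eval()
    logits_cp, logits_pcp = model(features)
    return [
        {"intensity": float(i),
         "predicted_emotion": int(c.argmax()),
         "predicted_rating": int(r.argmax())}
        for i, c, r in zip(clip_intensities, logits_cp, logits_pcp)
    ]

clip_features = torch.randn(5, FEAT_DIM)   # one feature vector per clip
profile = build_profile(model, clip_features,
                        clip_intensities=[0.2, 0.4, 0.6, 0.8, 1.0])
print(profile[0])
```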
Figure 1: System diagram of the music-based emotion profiling system. The proposed system comprises five functional components: the stimulus generation block (SGB), EEG acquisition block (EAB), self-assessment block (SAB), dual-branch emotion profiling block (DPB) and the output block for the personalised emotion profile. The system generates a personalised emotion profile with the help of music clips with desired changes in emotional intensity, deep learning, and self-assessments from the subject.
Applications & Advantages
- A novel music-based personalised emotion profiling system.
- A novel dual-branch emotion profiling block (DPB) that generates the personalised emotion profiles.
- A novel dynamic profiling refinement method in the DPB that produces fine-grained predictions of emotional responses.
- Can be applied to emotion regulation and the diagnosis of certain mental disorders.