---
license: mit
---
| # MusER: Musical Element-Based Regularization for Generating Symbolic Music with Emotion |
|
|
|
|
This repository hosts the training checkpoints of **[MusER (AAAI'24)](https://arxiv.org/abs/2312.10307)**.
|
|
| ## Overview |
|
|
| MusER employs musical element-based regularization in the latent space to disentangle distinct musical elements, investigate their roles in distinguishing emotions, and further manipulate elements to alter musical emotions. |
| <img src="MusER.png" width="770" height="300" alt="model"/> |
|
|
| ## Model Sources |
|
|
|
|
| - **Repository:** https://github.com/Tayjsl97/MusER |
| - **Demo:** [demo page](https://tayjsl97.github.io/demos/aaai) |
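
As a minimal sketch, the released checkpoints can be loaded with PyTorch's standard `torch.load` and passed to the MusER model defined in the repository above. The file name `muser_checkpoint.pt` and the state-dict contents below are illustrative assumptions, not the actual release artifacts:

```python
import torch

# Hypothetical checkpoint file name; the real name depends on the release.
CKPT_PATH = "muser_checkpoint.pt"

# Save a dummy state dict so this sketch runs end-to-end.
dummy_state = {"embedding.weight": torch.zeros(4, 8)}
torch.save(dummy_state, CKPT_PATH)

# Load on CPU; pass the resulting state dict to the model's load_state_dict.
state_dict = torch.load(CKPT_PATH, map_location="cpu")
print(sorted(state_dict.keys()))
```

In practice you would instantiate the MusER model from the repository's code and call `model.load_state_dict(state_dict)` instead of printing the keys.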
|
|
| ## Citation |
If you use our models in your research, please cite them as follows:
| ```bib |
| @inproceedings{ji2024muser, |
| title={MusER: Musical Element-Based Regularization for Generating Symbolic Music with Emotion}, |
| author={Ji, Shulei and Yang, Xinyu}, |
| booktitle={Proceedings of the AAAI Conference on Artificial Intelligence}, |
| volume={38}, |
| number={11}, |
| pages={12821--12829}, |
| year={2024} |
}
```