
Generating emotional music based on improved C-RNN-GAN

Shi, Xuneng; Vear, Craig

Authors

Xuneng Shi

Craig Vear

Abstract

This study introduces an emotion-based music generation model that builds on C-RNN-GAN, incorporates a conditional GAN, and uses emotion labels to generate music with diverse emotional content. Two evaluation methods were employed to assess the quality and emotional expression of the generated music. The objective evaluation compared the generated music against music from the EMOPIA database on computed metrics such as note range, chord count, and chord consistency. The subjective evaluation invited 20 listeners to hear a set of real and generated pieces, asked them to distinguish between the two, and had them rate emotional expression and harmony. The results indicate that the model-generated music successfully conveys a variety of emotions and approaches the quality of human-composed music.
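As a rough illustration of the label-conditioning idea described above, the sketch below shows a minimal recurrent generator in PyTorch whose input at every step is concatenated with an emotion-label embedding, in the spirit of C-RNN-GAN combined with a conditional GAN. All names, layer sizes, and the four-class emotion set are illustrative assumptions, not the paper's actual implementation.

# Minimal sketch of label-conditioned recurrent note generation.
# All names and sizes are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

NUM_EMOTIONS = 4    # e.g. the four valence-arousal quadrants (assumption)
NOISE_DIM = 100
HIDDEN_DIM = 256
NOTE_FEATURES = 4   # C-RNN-GAN models real-valued note events:
                    # (duration, pitch, intensity, time offset)

class ConditionalGenerator(nn.Module):
    """LSTM generator whose per-step input is noise concatenated with
    an embedding of the target emotion label (conditional-GAN style)."""
    def __init__(self):
        super().__init__()
        self.emotion_emb = nn.Embedding(NUM_EMOTIONS, 16)
        self.lstm = nn.LSTM(NOISE_DIM + 16, HIDDEN_DIM,
                            num_layers=2, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, NOTE_FEATURES)

    def forward(self, noise, emotion):
        # noise: (batch, seq_len, NOISE_DIM); emotion: (batch,) int labels
        seq_len = noise.size(1)
        cond = self.emotion_emb(emotion).unsqueeze(1).expand(-1, seq_len, -1)
        h, _ = self.lstm(torch.cat([noise, cond], dim=-1))
        return self.out(h)  # one note-event tuple per step

gen = ConditionalGenerator()
z = torch.randn(8, 64, NOISE_DIM)              # 8 sequences of 64 note events
labels = torch.randint(0, NUM_EMOTIONS, (8,))  # target emotions
fake_notes = gen(z, labels)                    # shape: (8, 64, NOTE_FEATURES)

Under the standard conditional-GAN recipe, the discriminator would receive the same emotion label alongside each (real or generated) sequence, so that the generator is pushed to produce music matching the requested emotion rather than any plausible music.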
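To make the objective metrics concrete, here is a small, self-contained Python sketch of how note range and chord count might be computed from a list of (onset, pitch) note events; the paper's exact metric definitions (and its chord-consistency measure) may differ. Comparing the distribution of such values for generated pieces against pieces from EMOPIA gives the kind of objective evaluation the abstract describes.

# Illustrative versions of two objective metrics; definitions are assumptions.
from collections import Counter

def note_range(notes):
    """Span in semitones between the highest and lowest MIDI pitch used."""
    pitches = [pitch for onset, pitch in notes]
    return max(pitches) - min(pitches)

def chord_count(notes):
    """Number of distinct onsets at which three or more notes start
    together -- one crude way to count chords."""
    per_onset = Counter(onset for onset, pitch in notes)
    return sum(1 for n in per_onset.values() if n >= 3)

# A C-major triad at beat 0 followed by two single notes:
events = [(0, 60), (0, 64), (0, 67), (1, 62), (2, 65)]
print(note_range(events))   # 67 - 60 = 7 semitones
print(chord_count(events))  # 1 chord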

Citation

Shi, X., & Vear, C. (2024, April). Generating emotional music based on improved C-RNN-GAN. Presented at the 13th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART) 2024, Aberystwyth University, Wales.

Presentation Conference Type: Edited Proceedings
Conference Name: 13th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART) 2024
Start Date: Apr 3, 2024
End Date: Apr 5, 2024
Acceptance Date: Jan 22, 2024
Online Publication Date: Mar 29, 2024
Publication Date: Mar 29, 2024
Deposit Date: Jan 22, 2024
Publicly Available Date: Mar 30, 2025
Publisher: Springer
Peer Reviewed: Peer Reviewed
Volume: 14633 LNCS
Pages: 357-372
Book Title: Lecture Notes in Computer Science
ISBN: 9783031569913
DOI: https://doi.org/10.1007/978-3-031-56992-0_23
Keywords: AI music composition; Deep learning; C-RNN-GAN; Music emotion; Controlled music generation
Public URL: https://nottingham-repository.worktribe.com/output/30109930
Publisher URL: https://www.evostar.org/2024/evomusart/
Additional Information:
First Online: 29 March 2024
Conference Acronym: EvoMUSART
Conference Name: International Conference on Computational Intelligence in Music, Sound, Art and Design (Part of EvoStar)
Conference City: Aberystwyth
Conference Country: United Kingdom
Conference Year: 2024
Conference Start Date: 3 April 2024
Conference End Date: 5 April 2024
Conference Number: 13
Conference ID: evomusart2024
Conference URL: https://www.evostar.org/2024/evomusart/
Type: Double-blind
Conference Management System: EasyChair
Number of Submissions Sent for Review: 55
Number of Full Papers Accepted: 17
Number of Short Papers Accepted: 8
Acceptance Rate of Full Papers: 31% (Number of Full Papers Accepted / Number of Submissions Sent for Review × 100, rounded to a whole number)
Average Number of Reviews per Paper: 3
Average Number of Papers per Reviewer: 3
External Reviewers Involved: No
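As a quick check, the quoted acceptance rate follows from the figures above: 17 / 55 × 100 ≈ 30.9, which rounds to 31%.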

Files

This file is under embargo until Mar 30, 2025 due to copyright restrictions.



