Chenyu Yang, Shuai Wang, Hangting Chen*, Jianwei Yu*, Wei Tan, Rongzhi Gu, Yaoxun Xu, Yizhi Zhou, Haina Zhu, Haizhou Li
Abstract The emergence of novel generative modeling paradigms, particularly audio language models, has significantly advanced the field of song generation. Although state-of-the-art models can concurrently synthesize vocals and accompaniment tracks up to several minutes long, research on partially adjusting or editing existing songs, which would enable more flexible and efficient production, remains underexplored. In this paper, we present SongEditor, the first song editing paradigm that introduces editing capabilities into language-model-based song generation approaches, facilitating both segment-wise and track-wise modifications. SongEditor offers the flexibility to adjust lyrics, vocals, and accompaniments, as well as to synthesize songs from scratch. The core components of SongEditor include a music tokenizer, an autoregressive language model, and a diffusion generator, which together enable the generation of an entire section, masked lyrics, or even separated vocals and background music. Extensive experiments demonstrate that the proposed SongEditor achieves exceptional performance in end-to-end song editing, as evidenced by both objective and subjective metrics.
Ethics Statement The examples presented on the demo page (including both audio prompts and some lyrics) are excerpted from public online platforms (such as YouTube). These examples are not included in our training set. We state that their use is for demonstration purposes only, and we will never use them for any commercial purposes. We respect the copyright and intellectual property rights of all creators. If you believe that any examples used on the demo page violate your rights, please contact us by raising an issue on our GitHub project. We will take your feedback seriously and remove them as soon as possible. Thank you for your understanding and support.