Abstract
We propose a system for mapping arbitrary percussive sound gestures to high-fidelity drum recordings. Our system, dubbed TRIA (The Rhythm In Anything), takes as input two audio prompts -- one specifying the desired drum timbre, and one specifying the desired rhythm -- and generates audio satisfying both prompts (i.e. playing the desired rhythm with the desired timbre). TRIA can synthesize realistic drum audio given rhythm prompts from a variety of non-drum sound sources (e.g. beatboxing, environmental sound) in a zero-shot manner, enabling novel creative interactions.
TRIA is trained as a masked language model to generate neural codec tokens given contextual tokens extracted from the timbre prompt and rhythm features extracted from the rhythm prompt. To compute rhythm features, we adaptively split the rhythm prompt's spectrogram into two equal-energy frequency bands, then normalize and quantize the result. During training, we augment audio with pitch shift, noise, and other distortions prior to rhythm feature extraction, allowing our system to process a wide variety of rhythm prompts under realistic recording conditions.
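The band split and quantization described above could be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the per-band max normalization, and the quantization level count are all assumptions made for the example.

```python
import numpy as np

def rhythm_features(spec, n_levels=8):
    """Hypothetical sketch of equal-energy-band rhythm features.

    spec: non-negative magnitude spectrogram, shape (n_freq, n_frames).
    Returns a (2, n_frames) array of quantized per-band energy envelopes.
    """
    energy = spec ** 2
    # Adaptively pick the frequency bin where cumulative energy reaches
    # half the total, so the two bands carry equal overall energy.
    cum = np.cumsum(energy.sum(axis=1))
    split = int(np.searchsorted(cum, cum[-1] / 2))
    envelopes = [
        energy[: split + 1].sum(axis=0),   # low band, per-frame energy
        energy[split + 1 :].sum(axis=0),   # high band, per-frame energy
    ]
    feats = []
    for env in envelopes:
        env = env / (env.max() + 1e-8)     # normalize to [0, 1] per band
        # Quantize to n_levels integer levels (assumed granularity).
        feats.append(np.round(env * (n_levels - 1)).astype(int))
    return np.stack(feats)
```

A coarse, quantized representation like this conveys when and how hard onsets occur in each band while discarding fine timbral detail, which is what lets non-drum sources (beatboxing, environmental sound) serve as rhythm prompts.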
Below, we provide examples of TRIA combining selected timbre and rhythm prompts into new output audio.
Audio Examples
| # | Timbre Prompt | Rhythm Prompt | Output |
|---|---|---|---|
| 1 | *(audio)* | *(audio)* | *(audio)* |
| 2 | *(audio)* | *(audio)* | *(audio)* |
| 3 | *(audio)* | *(audio)* | *(audio)* |
| 4 | *(audio)* | *(audio)* | *(audio)* |
| 5 | *(audio)* | *(audio)* | *(audio)* |
| 6 | *(audio)* | *(audio)* | *(audio)* |