A general-purpose deep learning approach to model time-varying audio effects

View the Project on GitHub: [mchijmma/modeling-time-varying](https://github.com/mchijmma/modeling-time-varying)

Audio examples for the paper:

Martínez Ramírez, M. A., Benetos, E. and Reiss, J. D., “A general-purpose deep learning approach to model time-varying audio effects,” in the 22nd International Conference on Digital Audio Effects (DAFx-19), Birmingham, UK, September 2019.

View the source code: CRAFx -> Models.py
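
For orientation before reading Models.py, here is a minimal, simplified Keras sketch of a CRAFx-style model as described in the paper: a convolutional front-end, a bidirectional LSTM latent space that tracks the effect's low-frequency modulation, and a synthesis back-end. All layer sizes, the frame length, the pooling factor, and the MAE loss are illustrative assumptions rather than the settings used in Models.py, and the paper's SAAF activations are replaced here with plain tanh.

```python
# Minimal CRAFx-style sketch (NOT the authors' exact Models.py).
# Hyperparameters below are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, models

FRAME = 4096  # input frame length in samples (assumed)

inp = layers.Input(shape=(FRAME, 1))

# Front-end: learned analysis filters plus temporal pooling.
x = layers.Conv1D(32, 64, padding="same", activation="tanh")(inp)
x = layers.MaxPooling1D(16)(x)

# Latent space: Bi-LSTMs model the time-varying (LFO-like) behaviour.
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(x)

# Back-end: upsample to audio rate, waveshape (tanh stands in for the
# paper's SAAF activations), and mix down to a single output channel.
x = layers.UpSampling1D(16)(x)
x = layers.Dense(32, activation="tanh")(x)
out = layers.Conv1D(1, 64, padding="same")(x)

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="mae")  # time-domain loss (assumed)
model.summary()
```

The model is trained on input/target frame pairs like the audio examples below, learning to map the dry input to the processed target.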

chorus

- input
- target
- output

flanger

- input
- target
- output

phaser

- input
- target
- output

tremolo

- input
- target
- output

vibrato

- input
- target
- output

auto-wah

- input
- target
- output

ring-modulator

- input
- target
- output

Leslie speaker (left channel)

- input
- target
- output

Leslie speaker (right channel)

- input
- target
- output

chorus+overdrive

- input
- target
- output

flanger+overdrive

- input
- target
- output

phaser+overdrive

- input
- target
- output

tremolo+overdrive

- input
- target
- output

vibrato+overdrive

- input
- target
- output

auto-wah+overdrive

- input
- target
- output

auto-wah (envelope follower)

- input
- target
- output

compressor

- input
- target
- output

multiband compressor

- input
- target
- output

Citation

@inproceedings{martinez2019general,
  title={A general-purpose deep learning approach to model time-varying audio effects},
  author={Mart\'{i}nez Ram\'{i}rez, Marco A. and Benetos, Emmanouil and Reiss, Joshua D.},
  booktitle={22nd International Conference on Digital Audio Effects (DAFx-19)},
  month={September},
  year={2019},
  location={Birmingham, UK}
}