Modeling of nonlinear audio effects with end-to-end deep neural networks

View the project on GitHub: mchijmma/modeling-nonlinear

Audio examples for the paper:

Martínez Ramírez, M. A. and Reiss, J. D., “Modeling nonlinear audio effects with end-to-end deep neural networks,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, May 2019.

View the source code: CAFx -> Models.py
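
Each example below is a triplet: input is the unprocessed (dry) recording, target is the same recording processed by the reference effect, and output is the network's reconstruction of the target from the input. As a rough illustration of how such a triplet can be compared numerically (this is not the paper's evaluation code, and the file names are hypothetical placeholders), a simple waveform error can be computed:

```python
# Minimal sketch, not the paper's code; file names are hypothetical.
import numpy as np
import soundfile as sf

def mse_db(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Mean squared error between two mono waveforms, in dB."""
    n = min(len(reference), len(estimate))       # align lengths
    err = np.mean((reference[:n] - estimate[:n]) ** 2)
    return 10.0 * np.log10(err + 1e-12)          # small floor avoids log(0)

target, sr = sf.read("distortion_1st_target.wav")  # hypothetical path
output, _ = sf.read("distortion_1st_output.wav")   # hypothetical path
print(f"output vs. target error: {mse_db(target, output):.1f} dB")
```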

[Figure: distortion example]

Distortion

1st setting

- input
- target
- output

2nd setting

- input
- target
- output

3rd setting

- input
- target
- output
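
Distortion and overdrive (next section) are nonlinear, largely memoryless effects: each sample of the waveform is reshaped by a static curve, which is exactly what a linear system cannot reproduce and what the end-to-end network has to learn. As a minimal stand-in for this kind of nonlinearity (the paper's targets are real and virtual-analog devices, not this curve), a tanh soft clipper looks like this:

```python
# Illustrative tanh soft clipper; a stand-in, not the device modeled here.
import numpy as np

def soft_clip(x: np.ndarray, drive: float = 5.0) -> np.ndarray:
    """Memoryless waveshaper: higher drive gives harder clipping
    and more added harmonics."""
    return np.tanh(drive * x) / np.tanh(drive)  # keep peaks near +/-1

t = np.linspace(0.0, 1.0, 44100, endpoint=False)
dry = 0.8 * np.sin(2.0 * np.pi * 220.0 * t)  # dry "input"-style signal
wet = soft_clip(dry, drive=8.0)              # clipped "target"-style signal
```

Overdrive corresponds to gentler settings of the same idea (lower drive), which is why it is presented with the same listening setup below.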

Overdrive

1st setting

- input
- target
- output

2nd setting

- input
- target
- output

3rd setting

- input
- target
- output

EQ

1st setting

- input
- target
- output

2nd setting

- input
- target
- output
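
Unlike distortion and overdrive, an equalizer is (ideally) a linear filter: it reshapes the spectrum rather than clipping the waveform. A generic building block, not the specific EQ behind these examples, is the peaking biquad from the Audio EQ Cookbook:

```python
# Generic RBJ Audio-EQ-Cookbook peaking filter; not the EQ modeled here.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x: np.ndarray, fs: float, f0: float = 1000.0,
               gain_db: float = 6.0, q: float = 1.0) -> np.ndarray:
    """Boost (or cut, with negative gain_db) a band centred on f0 Hz."""
    a = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1.0 + alpha * a, -2.0 * np.cos(w0), 1.0 - alpha * a])
    den = np.array([1.0 + alpha / a, -2.0 * np.cos(w0), 1.0 - alpha / a])
    return lfilter(b / den[0], den / den[0], x)
```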

FxChain

1st setting

- input
- target
- output

1st setting (NSynth dataset)

- input
- target
- output

2nd setting

- input
- target
- output

2nd setting (NSynth dataset)

- input
- target
- output

3rd setting

- input
- target
- output

3rd setting (NSynth dataset)

- input
- target
- output
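
An effects chain (Fx chain) applies several processors in series, so the model has to capture their composition rather than a single device. Reusing the two sketches above purely for illustration (the chain behind these examples may combine different effects, in a different order):

```python
# Hypothetical serial chain built from the sketches above; the actual
# chain behind these examples may differ.
fs = 44100
chained = peaking_eq(soft_clip(dry, drive=6.0), fs, f0=800.0, gain_db=4.0)
```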

Citation

@inproceedings{martinez2019modeling,
  title = {Modeling nonlinear audio effects with end-to-end deep neural networks},
  author = {Mart\'{i}nez Ram\'{i}rez, Marco A. and Reiss, Joshua D.},
  booktitle = {IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  month = {May},
  year = {2019},
  address = {Brighton, UK}
}