
tools.ai.magenta

Articles

Toolsets

NSynth

ToneTransfer

j_u_s_t_i_n

The model we use for pitch detection (SPICE: https://blog.tensorflow.org/2020/06/estimating-pitch-with-spice-and-tensorflow-hub.html) extracts the dominant pitch. Its known limitation at this point is that it doesn't work well with polyphonic audio: if you try to extract pitch from audio with multiple concurrent pitches (including microtones), you'll find that the estimate bounces around between them. There are configuration options in the colab (https://colab.sandbox.google.com/github/magenta/ddsp/blob/master/ddsp/colab/demos/timbre_transfer.ipynb#scrollTo=SLwg1WkHCXQO) that you can use to filter out pitches by amplitude.
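The amplitude filtering mentioned above can be sketched in a few lines. This is a hypothetical post-processing helper, not code from the Magenta colab (the function name and threshold value are illustrative): it masks out per-frame pitch estimates whose frame amplitude falls below a threshold, which is the basic idea behind suppressing unstable estimates on quiet or noisy frames.

```python
import numpy as np

def filter_pitches_by_amplitude(pitches_hz, amplitudes, threshold=0.1):
    """Replace pitch estimates on low-amplitude frames with NaN.

    Hypothetical sketch: frames whose amplitude is below `threshold`
    are treated as unreliable and masked out, so downstream code can
    ignore them instead of tracking a bouncing estimate.
    """
    pitches = np.asarray(pitches_hz, dtype=float)
    amps = np.asarray(amplitudes, dtype=float)
    out = pitches.copy()
    out[amps < threshold] = np.nan
    return out

# Four frames: two confident (loud) estimates, two quiet outliers.
frames = filter_pitches_by_amplitude(
    [440.0, 452.0, 448.0, 880.0],
    [0.9, 0.02, 0.8, 0.05])
print(frames)  # [440.  nan 448.  nan]
```

The same masking idea applies whether the confidence signal is raw frame amplitude or the model's own uncertainty output.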

Training

This notebook demonstrates how to install the DDSP library and train a synthesis model on your own data using its command-line scripts. If run inside Colab, it will automatically use a free Google Cloud GPU. At the end, you'll have a custom-trained checkpoint that you can download and use with the DDSP Timbre Transfer Colab.
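What that training actually does can be illustrated with a toy analogue. The real notebook trains a neural autoencoder with DDSP's scripts; the sketch below (entirely illustrative, no DDSP code) shows the underlying principle of differentiable synthesis: a synthesizer parameter is fitted to a target recording by gradient descent on an audio reconstruction loss.

```python
import numpy as np

SR = 16000
t = np.arange(SR // 4) / SR                      # 0.25 s of audio
carrier = np.sin(2 * np.pi * 440.0 * t)          # fixed 440 Hz sine
target = 0.8 * carrier                           # "dataset": one note

def synth(amp):
    """A one-parameter 'synthesizer': scale the carrier by amp."""
    return amp * carrier

amp = 0.1   # trainable parameter, deliberately wrong at the start
lr = 0.5
losses = []
for step in range(50):
    err = synth(amp) - target
    losses.append(np.mean(err ** 2))             # reconstruction loss
    # Analytic gradient of the MSE with respect to amp.
    grad = np.mean(2 * err * carrier)
    amp -= lr * grad

print(round(amp, 3))  # converges toward the target amplitude 0.8
```

DDSP scales this idea up: instead of one amplitude, a neural network predicts per-frame synthesizer controls (harmonic amplitudes, filtered noise), and the whole pipeline is trained end to end on your recordings.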

Drums Transcription

Making an Album with Music Transformer

TPU


#incubation

https://github.com/Catsvilles/lofigen-magenta