mirror of
https://gitlab.com/sdbs_cz/digital-garden-anabasis.git
synced 2025-01-22 22:47:05 +01:00
40 lines
2 KiB
Markdown
[[incubation.audio.sample.managment]]
[[tools.tts]]
- https://en.wikipedia.org/wiki/Festival_Speech_Synthesis_System
- https://en.wikipedia.org/wiki/ESpeak
- https://www.isi.edu/~carte/e-speech/synth/index.html
- https://en.wikipedia.org/wiki/Sinsy
- > **Sinsy** (**Sin**ging Voice **Sy**nthesis System) (しぃんしぃ) is an online [Hidden Markov model](https://en.wikipedia.org/wiki/Hidden_Markov_model "Hidden Markov model") (HMM)-based singing voice synthesis system by the [Nagoya Institute of Technology](https://en.wikipedia.org/wiki/Nagoya_Institute_of_Technology "Nagoya Institute of Technology") that was created under the [Modified BSD license](https://en.wikipedia.org/wiki/BSD_licenses "BSD licenses").
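The quote above says Sinsy is HMM-based. As a toy illustration of the statistical machinery behind that phrase (all numbers made up; Sinsy's real models map score context to spectral and F0 parameters, which is far richer), here is the forward algorithm for a two-state HMM:

```python
import numpy as np

# Hypothetical two-state HMM, for illustration only.
pi = np.array([0.8, 0.2])            # initial state distribution
A  = np.array([[0.9, 0.1],           # state transition matrix
               [0.2, 0.8]])
B  = np.array([[0.7, 0.3],           # emission probabilities: P(obs | state)
               [0.1, 0.9]])

def forward(obs):
    """Return P(observation sequence) under the toy HMM."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

p = forward([0, 0, 1])
print(round(p, 4))
```

In a singing-voice synthesizer the "observations" are acoustic feature vectors rather than discrete symbols, and synthesis runs the model generatively instead of scoring a sequence.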
- http://www.sinsy.jp/
- https://www.audiolabs-erlangen.de/resources/MIR/2015-ISMIR-LetItBee
- http://labrosa.ee.columbia.edu/hamr_ismir2014/proceedings/doku.php?id=audio_mosaicing
- https://www.danieleghisi.com/phd/PHDThesis_20180118.pdf
- https://musicinformationretrieval.com/
- https://colab.research.google.com/github/stevetjoa/musicinformationretrieval.com/blob/gh-pages/nmf_audio_mosaic.ipynb
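The notebook linked above mosaics a target sound out of source material via non-negative matrix factorization: fix the dictionary W to spectral frames of the source, learn only the activations H, and reconstruct W·H. A minimal numpy sketch of that idea, using synthetic magnitude spectrograms rather than real audio:

```python
import numpy as np

rng = np.random.default_rng(0)

# W: spectral "dictionary" taken from a source recording (here random),
# V: magnitude spectrogram of the target we want to rebuild from W.
n_bins, n_atoms, n_frames = 64, 16, 40
W = rng.random((n_bins, n_atoms)) + 1e-3
V = rng.random((n_bins, n_frames)) + 1e-3

# Lee-Seung multiplicative updates for H only; W stays fixed, which is
# the mosaicing constraint: every output frame is a mix of source frames.
H = rng.random((n_atoms, n_frames)) + 1e-3
for _ in range(200):
    H *= (W.T @ V) / (W.T @ (W @ H) + 1e-9)

V_hat = W @ H
err = np.linalg.norm(V - V_hat) / np.linalg.norm(V)
print(f"relative reconstruction error: {err:.3f}")
```

In the real pipeline V and W come from an STFT, and the output is resynthesized by inverting W·H with the target's phase (or by overlap-adding the source frames weighted by H).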
- https://soundlab.cs.princeton.edu/research/mosievius/
- http://imtr.ircam.fr/imtr/Corpus_Based_Synthesis
- Max/MSP
- http://imtr.ircam.fr/imtr/Diemo_Schwarz
- https://github.com/benhackbarth/audioguide
- macOS-only, but promising
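The corpus-based tools above (CataRT, AudioGuide) share one core move: describe every corpus grain with a feature vector, then at synthesis time select the grain nearest to each target frame and concatenate. A stripped-down sketch of that unit-selection step, with made-up 2-D descriptors standing in for real audio features:

```python
import numpy as np

rng = np.random.default_rng(1)

# Corpus: each grain described by a feature vector
# (e.g. spectral centroid and loudness -- hypothetical here).
corpus_feats = rng.random((500, 2))     # 500 grains, 2 descriptors
corpus_ids = np.arange(500)

# Target: the sequence of desired feature frames to imitate.
target_feats = rng.random((20, 2))

# Unit selection: nearest corpus grain (Euclidean) per target frame.
dists = np.linalg.norm(
    target_feats[:, None, :] - corpus_feats[None, :, :], axis=2)
selection = corpus_ids[dists.argmin(axis=1)]
print(selection)    # grain indices to concatenate, in playback order
```

Real systems add a transition cost between consecutive grains (so the path is smooth, not just frame-wise greedy) and crossfade the selected grains on playback.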
#### Audiostellar
- https://audiostellar.xyz/
- https://www.arj.no/tag/sox/
- CataRT
#### timbreIDLib (Pure Data)
https://github.com/wbrent/timbreIDLib
>timbreIDLib is a library of audio analysis externals for Pure Data. The classification external [timbreID] accepts arbitrary lists of audio features and attempts to find the best match between an input feature and previously stored training instances. The library can be used for a variety of real-time and non-real-time applications, including sound classification, sound searching, sound visualization, automatic segmenting, ordering of sounds by timbre, key and tempo estimation, and concatenative synthesis.
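Per the description, [timbreID]'s classification step is essentially nearest-neighbor matching between an incoming feature list and stored training instances. A rough Python analogue of that matching (made-up feature vectors and class names, not Pd and not timbreID's actual database format):

```python
import numpy as np

# Stored training instances: feature vectors with class labels,
# standing in for timbreID's trained database.
train = np.array([[0.1, 0.9], [0.2, 0.8],   # class 0 (e.g. "snare")
                  [0.9, 0.1], [0.8, 0.2]])  # class 1 (e.g. "kick")
labels = np.array([0, 0, 1, 1])

def classify(features):
    """Return the label of the closest stored training instance."""
    d = np.linalg.norm(train - features, axis=1)
    return int(labels[d.argmin()])

print(classify(np.array([0.85, 0.15])))  # nearest instance is class 1
```

The same distance machinery, run without labels, gives the library's other uses: ordering sounds by timbre is a sort on those distances, and concatenative synthesis is repeated nearest-instance lookup per analysis frame.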