Since the early 1970s, synthesizers have tried to recreate musical instruments, but they often end up sounding unrealistic. Modern computers make it easy to sample high-quality recordings of every note of an instrument, yet this approach can require millions of recordings for a single instrument. Tone Transfer instead uses machine learning to learn the tone of an instrument directly. The technology extracts the relationships between pitch, tone, and volume from very small amounts of data and forms a high-quality, expressive audio synthesis model of the target instrument. The model is flexible and can be controlled by inputs such as pitch and volume.
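To give a sense of what "controlled by pitch and volume" means in practice, here is a minimal, hypothetical sketch of harmonic synthesis driven only by a pitch contour and a volume contour. The real DDSP model additionally predicts per-harmonic amplitudes and a filtered-noise component from these same controls; the function and parameter names below are illustrative and do not come from the actual codebase.

```typescript
// Illustrative only: render audio from a bank of harmonic oscillators whose
// fundamental frequency and overall amplitude are set per sample.
function renderHarmonics(
  f0Hz: number[],        // per-sample fundamental frequency (pitch contour)
  amplitude: number[],   // per-sample overall loudness, 0..1 (volume contour)
  sampleRate = 16000,
  nHarmonics = 8,
): Float32Array {
  const out = new Float32Array(f0Hz.length);
  const phases = new Float64Array(nHarmonics);
  for (let t = 0; t < f0Hz.length; t++) {
    let sample = 0;
    for (let k = 0; k < nHarmonics; k++) {
      // Advance each harmonic's phase by its instantaneous frequency.
      phases[k] += (2 * Math.PI * f0Hz[t] * (k + 1)) / sampleRate;
      sample += Math.sin(phases[k]) / nHarmonics;
    }
    // The volume contour scales the summed harmonics.
    out[t] = amplitude[t] * sample;
  }
  return out;
}
```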
HOW DOES IT WORK?
What makes Tone Transfer special is that we compressed the model to the point that it can run entirely in the browser using TensorFlow.js. This means Tone Transfer scales in a serverless manner, putting state-of-the-art audio synthesis on everyone's laptops and phones. Earlier implementations of DDSP required a Colab notebook and a GPU to run inference, and lacked the intuitive UI our team developed for this product.
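The sketch below shows how in-browser inference with TensorFlow.js might look, assuming the compressed decoder is exported as a graph model conditioned on per-frame pitch and loudness. The model URL, input names, and tensor shapes are assumptions for illustration; the actual Tone Transfer checkpoints and preprocessing are not described in this post.

```typescript
import * as tf from '@tensorflow/tfjs';

// Hypothetical model URL; the real hosted checkpoints are not listed here.
const MODEL_URL = 'https://example.com/tone-transfer/violin/model.json';

// Synthesize audio from per-frame pitch (f0, in Hz) and loudness (in dB)
// contours extracted from the user's recording.
async function synthesize(
  f0Hz: Float32Array,
  loudnessDb: Float32Array,
): Promise<Float32Array> {
  // Load the compressed decoder as a TF.js graph model; inference runs
  // fully client-side, so no server round-trip is needed.
  const model = await tf.loadGraphModel(MODEL_URL);

  // Batch the conditioning features as [1, nFrames, 1] tensors.
  const pitch = tf.tensor3d(f0Hz, [1, f0Hz.length, 1]);
  const loudness = tf.tensor3d(loudnessDb, [1, loudnessDb.length, 1]);

  // Run the decoder; the output is assumed to be raw audio samples.
  // The input names "f0_hz" and "loudness_db" are assumptions.
  const audio = model.execute({f0_hz: pitch, loudness_db: loudness}) as tf.Tensor;
  const samples = (await audio.data()) as Float32Array;

  tf.dispose([pitch, loudness, audio]);
  return samples;
}
```

In a real app the model would be loaded once and cached rather than reloaded per call, but the key point stands: everything above runs in the user's browser tab.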
PRESS COVERAGE
Google Keyword:
What if you could turn your voice into any instrument?