Dear list,

It is our great pleasure to announce today the official release of Open-unmix, 
an MIT-licensed Python implementation of DNN-based music separation.

In recent years, deep learning-based systems broke a long-standing glass 
ceiling and finally allowed high-quality music separation. This sparked rising 
interest from both industry and the machine learning community.

However, until now, no open-source implementation was available that matches 
the performance of the best systems, which were proposed more than four years 
ago. This led to wasted effort, both in sheer performance optimization and in 
scientific comparison with the state of the art. Indeed, while the 
best-performing systems (proprietary software) were reported by the latest 
SiSEC to reach 6.0 dB SDR on vocals separation, much worse performance is 
regularly reported today. Not being able to reproduce state-of-the-art 
performance makes it difficult to identify the sources of discrepancies and 
the room for improvement.

In this context, we release Open-Unmix (UMX) to close this gap by providing a 
reference implementation for DNN-based music separation.
It serves two main purposes. First, it is intended as a baseline for academic 
researchers that is easy to compare to and build upon. Second, the 
availability of a pre-trained model brings music separation to enthusiastic 
end users and artists.


## Paper

Open-unmix is presented in a paper that has just been published in the Journal 
of Open Source Software.
You may download the paper PDF here: 
https://joss.theoj.org/papers/571753bc54c5d6dd36382c3d801de41d

## Code

Open-unmix comes in several DNN frameworks:
* PyTorch https://github.com/sigsep/open-unmix-pytorch
* NNabla https://github.com/sigsep/open-unmix-nnabla
* A TensorFlow version will be released as soon as TensorFlow 2.0 is out.

## Website

We provide extended documentation and further demos on the sigsep website:

https://sigsep.github.io/open-unmix/

## Datasets

Open-unmix has been especially designed to combine well with the following 
datasets:
* _MUSDB18_ has become one of the most popular datasets in source separation 
and MIR. We provide full-length music tracks (~10 h total duration) of 
different genres along with their isolated drums, bass, vocals and other 
stems.
* _MUSDB18-HQ_: together with Open-Unmix, we also released an additional 
flavor of the dataset for models that aim to predict a high bandwidth of up to 
22 kHz. Other than that, MUSDB18-HQ is identical to MUSDB18.

=> Both datasets are available at https://sigsep.github.io/datasets/musdb.html

* Open-unmix also offers a variety of template dataset structures that should 
fit many other use cases.

https://github.com/sigsep/open-unmix-pytorch/blob/master/docs/training.md#other-datasets
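As an illustration, a "track folder" style layout, where each track directory 
holds one audio file per target, can be scanned with a few lines of standard 
Python. The folder and file names here are purely illustrative; see the linked 
page for the exact templates Open-unmix supports:

```python
from pathlib import Path

def list_tracks(root, targets=("vocals", "drums", "bass", "other")):
    """Scan a track-folder layout such as train/track_01/vocals.wav.

    Returns one dict per track, mapping each target name to its file
    path. Tracks missing any target file are skipped. (Layout and file
    names are illustrative, not the canonical Open-unmix templates.)
    """
    tracks = []
    for track_dir in sorted(Path(root).iterdir()):
        if not track_dir.is_dir():
            continue
        stems = {t: track_dir / f"{t}.wav" for t in targets}
        if all(p.exists() for p in stems.values()):
            tracks.append(stems)
    return tracks
```

Such a list of stem paths can then feed any data loader; the mixture is simply 
the sum of the stems.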

__Note__:

If you want to compare separation models to the existing source separation 
literature, or if you want to compare to SiSEC 2018 participants, please use 
the standard MUSDB18 dataset instead.



## Pre-trained models

We provide pre-trained models trained on both `MUSDB18` and `MUSDB18-HQ` that 
reach state-of-the-art performance of 6.32 dB SDR (median of medians) on vocals 
on MUSDB18 test data. This significantly outperforms any model we are aware of 
that was trained on MUSDB18 only.
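For reference, "median of medians" denotes the aggregation convention used in 
SiSEC/museval reports: take the median of the frame-wise SDR values within 
each track, then the median across tracks. A minimal sketch of that 
aggregation (the frame-wise SDR computation itself is handled by museval; the 
numbers below are made up):

```python
from statistics import median

def median_of_medians(frame_sdrs_per_track):
    """Aggregate frame-wise SDR values (in dB) the way SiSEC/museval
    reports scores: median over frames within each track, then median
    over tracks. Input: one list of per-frame SDRs per track."""
    return median(median(frames) for frames in frame_sdrs_per_track)

# Made-up frame-wise SDRs for three tracks:
scores = [[5.0, 6.0, 7.0], [4.0, 8.0, 6.5], [3.0, 5.5, 9.0]]
```

The per-track medians here are 6.0, 6.5 and 5.5 dB, so the reported score 
would be their median, 6.0 dB.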

The pre-trained models are automatically bundled/downloaded when using the 
PyTorch implementation.

Further information about both models, such as evaluation scores, can be 
downloaded from Zenodo:

* umx: https://doi.org/10.5281/zenodo.3370486
* umxhq: https://doi.org/10.5281/zenodo.3370489

## Tutorial

Open-unmix was recently presented during a tutorial held at EUSIPCO 2019. This 
features:
* A recent overview of current source separation methods with a focus on deep 
learning
* A lecture on spectrogram models and Wiener filtering
* Visualizations and results of Open-Unmix compared to the state of the art

The slides of the tutorial as well as self-contained colab notebooks can be 
found on the tutorial site: 
https://sigsep.github.io/tutorials/#eusipco-2019-deep-learning-for-music-separation


# Related tools
Open-unmix is part of a whole ecosystem enabling easy research on source 
separation for Python users. Several distinct and independent projects have 
been released in recent years in an effort to make it possible for researchers 
to reproduce state-of-the-art performance in this domain.

## norbert

A reliable Python package that implements the multichannel Wiener filter and 
related filtering methods.
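The core idea can be illustrated in its simplest, single-channel form: each 
source receives a share of the mixture proportional to its estimated power. 
norbert implements the full multichannel, iteratively refined version; this 
sketch only shows the basic ratio-mask principle for a single time-frequency 
bin:

```python
def wiener_estimates(power_estimates, mix_tf_bin, eps=1e-10):
    """Single-channel Wiener-style soft masking for one
    time-frequency bin: each source estimate is its fraction of the
    total estimated power, applied to the mixture value. (A didactic
    sketch, not norbert's actual multichannel API.)"""
    total = sum(power_estimates) + eps  # eps avoids division by zero
    return [v / total * mix_tf_bin for v in power_estimates]
```

By construction the source estimates sum back to the mixture, which is what 
makes this kind of filtering a conservative post-processing step for 
spectrogram-based separators.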

https://github.com/sigsep/norbert

## musdb

We released the new version 0.3.0 of our popular musdb tools. This release 
makes it simpler to use musdb inside your data loading framework.

https://github.com/sigsep/sigsep-mus-db

## museval

museval makes it easy to compare the performance of any new method under 
investigation to both Open-unmix and the participants of SiSEC18.

https://github.com/sigsep/sigsep-mus-eval

## UMX-Pro

Please note that we are also working on a version of _open-unmix_ that has 
been trained on a significantly larger dataset and achieves unprecedented 
separation performance (~7.5 dB vocals SDR today).
Please feel free to contact us for demonstrations / industrial collaborations / 
licensing on this matter.

We look forward to your feedback and we hope that you will find Open-unmix 
useful!

Fabian-Robert Stöter & Antoine Liutkus

## Reference

If you use Open-unmix in your research, please cite it through the following 
reference:

https://doi.org/10.21105/joss.01667

@article{stoter19, 
  author        = {F.-R. St\"oter and
                   S. Uhlich and
                   A. Liutkus and
                   Y. Mitsufuji}, 
  title         = {Open-Unmix - A Reference Implementation
                   for Music Source Separation},
  journal       = {Journal of Open Source Software}, 
  year          = 2019,
  doi           = {10.21105/joss.01667},
  url           = {https://doi.org/10.21105/joss.01667}
}

_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
