I thought I would join in a little. I was the PI for the short time we were
funded to research this area. Richard was the initial idea man and did much of
the work. Russell Bradford did the complex mathematics. In total there were 5
papers on this topic(BibTeX entries below).
The basic sliding DFT was implemented in C and then adapted for use in Csound,
where it still exists. Russell produced the GPU version in the last paper,
again as a standalone program. Since our work was not funded beyond the
initial 12 months, my colleagues at the National University of Ireland,
Maynooth later took an interest and implemented a GPU version as an
experimental module in Csound, where it continues to be distributed.
When our funding dried up we had plans to implement the Constant Q DFT on a
GPU. Our calculations indicated that the commodity GPU we used would allow
that version in real time with some headroom, and I am still disappointed that
the project never happened. I never had a GPU myself, so I was relying on
Richard, who had a budget for this.
@InProceedings{JPF84,
author = {Russell Bradford and Richard Dobson and John ffitch},
title = {{Sliding is Smoother than Jumping}},
booktitle = {ICMC 2005 free sound},
pages = {287--290},
year = {2005},
editor = {{SuviSoft Oy Ltd, Tampere, Finland}},
organization = {Escola Superior de M\'usica de Catalunya},
note =
{\url{http://www.cs.bath.ac.uk/~jpff/PAPERS/BradfordDobsonffitch05.pdf}},
pure = {yes}
}
@InProceedings{JPF92,
author = {Russell Bradford and Richard Dobson and John ffitch},
title = {The Sliding Phase Vocoder},
booktitle = {Proceedings of the 2007 International Computer Music
Conference},
pages = {449--452},
year = {2007},
editor = {Suvisoft~Oy~Ltd},
volume = {II},
month = {August},
publisher = {ICMA and Re:New},
note = {ISBN 0-9713192-5-1},
annote = {\url{http://cs.bath.ac.uk/jpff/PAPERS/spv-icmc2007.pdf}},
pure = {yes}
}
@InProceedings{JPF95,
author = {John ffitch and Richard Dobson and Russell Bradford},
title = {{Sliding DFT for Fun and Musical Profit}},
booktitle = {6th International Linux Audio Conference},
pages = {118--124},
year = {2008},
editor = {Frank Barknecht and Martin Rumori},
address = {Kunsthochschule f\"ur Medien K\"oln},
month = {March},
organization = {LAC2008},
publisher = {Tribun EU, Gorkeho 41, Brno 602 00},
note = {ISBN 978-80-7399-362-7},
annote = {\url{http://lac.linuxaudio.org/2008/download/papers/10.pdf}},
pure = {yes}
}
@InProceedings{JPF98,
author = {Russell Bradford and Richard Dobson and John ffitch},
title = {{Sliding with a Constant $Q$}},
booktitle = {Proc. of the Int. Conf. on Digital Audio Effects (DAFx-08)},
pages = {363--369},
year = {2008},
address = {Espoo, Finland},
month = {Sep 1-4},
organization = {DAFx08},
note = {ISBN 978-951-22-9517-3},
pure = {yes}
}
@InProceedings{JPF109,
author = {Russell Bradford and John ffitch and Richard Dobson},
title = {{Real-time Sliding Phase Vocoder using a Commodity GPU}},
booktitle = {Proceedings of ICMC2011},
pages = {587--590},
year = {2011},
series = {ICMC},
month = {August},
organization = {University of Huddersfield and ICMA},
note = {ISBN 978-0-9845274-0-3},
pure = {yes}
}
As well as in the Pure repository at the University of Bath, I have copies of
all these papers, and in some cases the presentation slides and audio
examples, if anyone wants a copy.
On Thu, 19 Mar 2020, Richard Dobson wrote:
In my original C programs it was all implemented in double-precision floating
point, and the results were pretty clean (though we never assessed them
formally at the time); but as the computational burden was substantial on a
standard PC, there was no way to run them in real time to perform a soak test.
However, we received some advanced (at the time) highly parallel accelerator
cards from a Bristol company, "Clearspeed", which did offer the opportunity to
perform real-time oscillator-bank synthesis (by making a rudimentary VST
synth), for example to generate band-limited square and sawtooth waves. With
single precision and real-time generation, it did not take long at all (I ran
it one time for 20 mins, monitoring on an oscilloscope) for the phases, and
thus the waveform shape, to degrade. Conversely, with double precision (which
those cards fully supported, most unusually for the time), I was able to leave
it running for some hours with no visible degradation of the waveform or
audible increase in noise.
It doesn't fully answer your question, but I hope it offers some indication
of the potential of the process.
Later on, colleagues at Bath University got the SPV fully running in real
time on Nvidia GPU cards programmed using CUDA, fed with real-time audio
input, and this was presented (I think) at either ICMC or DAFx. If John ffitch
is following this, he will be able to give more details. GPUs are definitely
the way to go for SPV in real time. I estimated (back-of-an-envelope style)
demands of the order of 50GFlops. Of course there remain many unanswered
questions!
Richard Dobson
On 19/03/2020 16:18, Ethan Duni wrote:
On Tue, Mar 10, 2020 at 1:05 PM Richard Dobson <rich...@rwdobson.com> wrote:
Our ICMC paper can be found here, along with a few beguiling sound
examples:
http://dream.cs.bath.ac.uk/SDFT/
So this is pretty cool stuff. I can't say I've digested the whole idea yet,
but I had a couple of obvious questions.
In particular, the analyzer is defined by a recursive formula, and I gather
that the synthesizer effectively becomes an oscillator bank. So, are special
numerical techniques required to implement this, in order to avoid the
build-up of round-off noise over time?
Ethan
_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp