Thanks for your offer, but I can't really read MATLAB code and always
have a hard time even figuring out the essentials of such code.
My phase vocoder already works reasonably well now as a demo in
Native Instruments Reaktor.
I do the forward FFT offline and the iFFT "just in time", that is, 12
"butterflies" per sample, so you could bring down latency by speeding
up the iFFT, though I am not sure what a reasonable latency is.
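As a sanity check on that figure (a sketch; the 50% frame overlap is my assumption, not stated above): a radix-2 iFFT of size 4096 has log2(4096) = 12 stages of N/2 butterflies each, and amortizing the whole transform over one hop of output samples gives:

```python
import math

n_fft = 4096
stages = int(math.log2(n_fft))        # 12 butterfly stages in a radix-2 FFT
butterflies = (n_fft // 2) * stages   # 2048 * 12 = 24576 butterflies per transform
hop = n_fft // 2                      # assumed 50% frame overlap
per_sample = butterflies / hop
print(per_sample)                     # 12.0 butterflies per output sample
```

With a different hop size the per-sample budget scales accordingly, which is why speeding up the iFFT (or shortening it) trades directly against latency.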
I ran a poll on an electronic musicians' board and most people voted
for 10 ms as just tolerable.
I am halfway content with the way it works now. For analysis I have
twelve FFTs in parallel, one for each octave, with window sizes based
on the ERB scale per octave, so it's not totally bad on transients,
but not good either.
I assume there is still some room for improvement in the windows, but
not very much.
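For illustration, here is one way such per-octave window sizes could be derived from the ERB scale. This is only a sketch: the band centre frequencies and the Hann main-lobe rule of thumb are my assumptions, not necessarily what the Reaktor patch does. The ERB formula is Glasberg & Moore's approximation, ERB(f) = 24.7 * (4.37 * f/1000 + 1) Hz.

```python
def erb_hz(f_hz):
    # Glasberg & Moore ERB approximation; f in Hz, result in Hz
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

sr = 48000.0
for octave in range(12):
    # geometric centre of each octave band below Nyquist (assumed layout)
    fc = (sr / 2.0) / 2.0 ** (octave + 0.5)
    # Hann window main-lobe width is roughly 4*sr/N; match it to the ERB
    n = int(4.0 * sr / erb_hz(fc))
    print(f"octave {octave:2d}: fc = {fc:8.1f} Hz  ERB = {erb_hz(fc):6.1f} Hz  window = {n:5d} samples")
```

The low octaves come out with windows of several thousand samples, which is consistent with the pre-ringing problem described below: the low-band windows simply cannot be made short.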
FFT size is 4096, and now I am searching for ways to improve it,
mostly regarding transients.
But I am not sure that's possible with an FFT, because I still have
pre-ringing, and I can't see how to avoid that completely, since you
can only shorten the windows on the low octaves so much.
Maybe with an asymmetric window?
If you do the analysis with an IIR filter bank (or wavelets) you kind
of have asymmetric windows; that is, the filters integrate in a causal
way, with a decaying "window" on what they see, but I am not sure if
this can be adapted somehow to an FFT.
Another way that would reduce reverberation and shorten transient
times somewhat would be using shorter FFTs for the resynthesis; this
would also bring down CPU load a bit, and latency.
So this is where I am at the moment.
On 09.11.2018 at 23:29, robert bristow-johnson wrote:
i don't wanna lead you astray. i would recommend staying with the
phase vocoder as a framework for doing time-frequency manipulation.
it **can** be used real-time for pitch shift, but when i have used the
phase vocoder, it was for time-scaling and then we would simply
resample the time-scaled output of the phase vocoder to bring the
tempo back to the original and shift the pitch. that was easier to
get right than it was to *move* frequency components around in the
phase vocoder. but i remember in the 90s, Jean Laroche doing that
real time with a single PC. also a real-time phase vocoder (or any
frequency-domain process, like sinusoidal modeling) is going to have
delay in a real-time process. even if your processor is infinitely
fast, you still have to fill up your FFT buffer with samples before
invoking the FFT. if your buffer is 4096 samples and your sample rate
is 48 kHz, that's almost 1/10 second. and that doesn't count
processing time, just the buffering time. and, in reality, you will
have to double buffer this process (buffer both input and output) and
that will make the delay twice as much. so with 1/5 second delay,
that might be an issue.
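The buffering numbers above check out; a quick back-of-envelope calculation with the figures from the paragraph (4096-sample buffer, 48 kHz):

```python
n_fft, sr = 4096, 48000
single = n_fft / sr        # time to fill one analysis buffer
double = 2 * single        # input and output both buffered
print(f"{single * 1000:.1f} ms")   # 85.3 ms, "almost 1/10 second"
print(f"{double * 1000:.1f} ms")   # 170.7 ms, roughly 1/5 second
```

Both figures are well past the ~10 ms tolerance from the poll mentioned earlier, which is the core tension with large-FFT frequency-domain processing in real time.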
i offered this before (and someone sent me a request and i believe i
replied, but i don't remember who), but if you want my 2001 MATLAB
code that demonstrates a simple phase vocoder doing time scaling, i am
happy to send it to you or anyone. it's old. you have to turn
wavread() and wavwrite() into audioread() and audiowrite(), but
otherwise, i think it will work. it has an additional function that
time-scales each sinusoid *within* every frame, but i think that can
be turned off and you can even delete that modification and what you
have left is, in my opinion, the most basic phase vocoder implemented
to do time scaling. lemme know if that might be helpful.
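For anyone who can't run the MATLAB code, here is a minimal Python/NumPy sketch of the recipe described above: time-scale with a phase vocoder, then resample the output to restore the original tempo, which shifts the pitch. The parameter choices (Hann window, 4x overlap, linear-interpolation resampling) are mine, not rbj's; this is a bare-bones illustration, not production code.

```python
import numpy as np

def stft(x, n_fft, hop):
    # windowed analysis frames -> one complex half-spectrum per hop
    win = np.hanning(n_fft)
    frames = 1 + (len(x) - n_fft) // hop
    return np.array([np.fft.rfft(win * x[i * hop:i * hop + n_fft])
                     for i in range(frames)])

def pv_time_stretch(x, step, n_fft=1024, hop=256):
    # step < 1 stretches (longer output), step > 1 compresses
    S = stft(x, n_fft, hop)
    win = np.hanning(n_fft)
    # nominal per-hop phase advance of each bin's centre frequency
    omega = 2 * np.pi * hop * np.arange(S.shape[1]) / n_fft
    t = np.arange(0.0, S.shape[0] - 1, step)  # fractional analysis positions
    phase = np.angle(S[0])
    y = np.zeros(len(t) * hop + n_fft)
    for k, ti in enumerate(t):
        i = int(ti)
        frac = ti - i
        mag = (1 - frac) * np.abs(S[i]) + frac * np.abs(S[i + 1])
        # deviation of the actual phase advance from the nominal one
        dphi = np.angle(S[i + 1]) - np.angle(S[i]) - omega
        dphi -= 2 * np.pi * np.round(dphi / (2 * np.pi))   # wrap to [-pi, pi]
        y[k * hop:k * hop + n_fft] += win * np.fft.irfft(mag * np.exp(1j * phase), n_fft)
        phase += omega + dphi   # accumulate instantaneous phase
    return y

def pitch_shift(x, semitones, n_fft=1024, hop=256):
    ratio = 2.0 ** (semitones / 12.0)
    y = pv_time_stretch(x, 1.0 / ratio, n_fft, hop)  # stretch by `ratio`
    idx = np.arange(0.0, len(y) - 1, ratio)          # resample back to length
    return np.interp(idx, np.arange(len(y)), y)
```

Shifting a 440 Hz sine up an octave should move its spectral peak to roughly 880 Hz while keeping the output close to the input's length.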
L8r,
r b-j
---------------------------- Original Message ----------------------------
Subject: Re: [music-dsp] 2-point DFT Matrix for subbands Re: FFT for
realtime synthesis?
From: "gm" <g...@voxangelica.net>
Date: Fri, November 9, 2018 5:02 pm
To: music-dsp@music.columbia.edu
--------------------------------------------------------------------------
> You get me intrigued with this
>
> I actually believe that wavelets are the way to go for such things,
> but, besides that anything beyond a Haar wavelet is too complicated
for me
> (and I just grasp that Haar very superficially of course),
>
> I think one problem is the problem you mentioned - don't do anything
> with the bands,
> only then you have perfect reconstruction
>
> And what do you do with the bands to make a pitch shift or to
> preserve formants/do some vocoding?
>
> It's not so obvious (to me), my naive idea I mentioned earlier in this
> thread was to
> do short FFTs on the bands and manipulate the FFTs only
>
> But how? If you time stretch them, I believe the pitch goes down
> (that's my intuition only, I am not sure)
> and also, these bands alias, since the filters are not brickwall,
> and the aliasing is only canceled on reconstruction I believe?
>
> So, yes, very interesting topic, that could lead me astray for another
> couple of weeks but without any results I guess
>
> I think as long as I don't fully grasp all the properties of the FFT and
> phase vocoder I shouldn't start anything new...
>
--
r b-j r...@audioimagination.com
"Imagination is more important than knowledge."
_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp