Alessandro Saccoia <alessandro.sacc...@gmail.com> wrote:
> http://web.archive.org/web/20060513150136/http://archive.chipcenter.com/dsp/DSP000315F1.html
> The images haven't been archived, but you could still find it a useful
> reference.

This link includes the images:
http://web.archive.org/web/20010210052902/http://www.chipcenter.com/dsp/DSP000315F1.html

This method is also discussed in Crochiere & Rabiner's Multirate Digital
Signal Processing book, but it didn't make sense to me there either - I'm
assuming this is my problem, not theirs. Apparently this method windows
the input with a window of size N = 4K, then intentionally time-aliases
the signal by stacking and adding it in blocks of K samples, then takes
the FFT of the time-aliased sequence. On the synthesis side, it takes the
inverse FFT, periodically extends the result, applies a synthesis window
and overlap adds. The periodic extension is the transpose of the windowing
and aliasing in the analysis process, which fixes everything somehow...?
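
As best I can tell, the analysis/synthesis chain boils down to something like
the sketch below. This is just my own reading of the description, not the
article's code; the window length, hop size, and plain Hann windows are
placeholders, and the scaling is whatever falls out of numpy's FFT
conventions.

import numpy as np

def wola_analysis(frame, h, K):
    """Window an N-sample frame, time-alias it down to K samples, FFT."""
    N = len(h)                                   # e.g. N = 4*K
    xw = frame * h                               # apply the analysis window
    folded = xw.reshape(N // K, K).sum(axis=0)   # stack-and-add in blocks of K
    return np.fft.fft(folded)                    # K-point FFT of the aliased block

def wola_synthesis(spectrum, f, K):
    """Inverse FFT, periodically extend to N samples, apply synthesis window."""
    N = len(f)
    y = np.real(np.fft.ifft(spectrum))           # K-sample block
    y_ext = np.tile(y, N // K)                   # periodic extension (transpose of the fold)
    return y_ext * f                             # ready to overlap-add at the hop

# Pass-through: analyze, resynthesize, overlap-add at hop R.
K, R = 256, 64
h = np.hanning(4 * K)                            # analysis window, N = 4*K (placeholder)
f = h.copy()                                     # synthesis window (placeholder)
x = np.random.randn(8 * K)
y = np.zeros(len(x))
for start in range(0, len(x) - 4 * K + 1, R):
    X = wola_analysis(x[start:start + 4 * K], h, K)
    y[start:start + 4 * K] += wola_synthesis(X, f, K)
# With N == K the fold and the periodic extension are both no-ops, which is
# presumably the sense in which this collapses to the usual WOLA method.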

I'm afraid to try this, because it doesn't make any damn sense, and if it
works it might make my brain explode. Supposedly, if the lengths of the
analysis and synthesis windows are <= the size of the transform, this
simplifies to the basic Crochiere and Griffin/Lim WOLA method we all know and
love.

I'm curious what they're on about with this, but not quite curious enough
to try it, since it can't possibly work unless it does. Maybe it yields
perfect reconstruction so long as you don't listen to the output. Anyway,
I'm hoping someone will tell me it sounds great and makes everything all
better in the time, frequency, and efficiency domains.

