(Caveat: it has been 13 years since I worked on this.)

This is a real single-sample update Sliding DFT, not a block-based method. Each incoming sample is used to perform a complex rotation of every bin, followed by the frequency-domain convolution. There are no twiddle factors as such, so the rectangular window is at best implicit - I'm not sure it even has any meaning in this situation. The goal from the outset was real-time processing - i.e. potentially running for hours non-stop. We found (in the ClearSpeed project) that single-precision floats would not support that; I don't know whether anything less than double precision would suffice - single and double were the only choices available.
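[For readers unfamiliar with the per-sample update, here is a minimal sketch of the textbook sliding DFT (in the Lyons/Jacobsen form, i.e. with the explicit comb-filter rectangular window), not the SPV code discussed above. Double precision is used because the recursion accumulates rounding error over long runs; the class name and layout are illustrative only.]

#include <complex>
#include <vector>
#include <cmath>

// Textbook sliding DFT: one comb term per sample, then an independent
// complex rotation of every bin - which is why the algorithm is
// "embarrassingly parallel".
class SlidingDFT {
public:
    explicit SlidingDFT(std::size_t N)
        : N_(N), delay_(N, 0.0), bins_(N), rot_(N), pos_(0)
    {
        const double w = 2.0 * M_PI / static_cast<double>(N);
        for (std::size_t k = 0; k < N; ++k)
            rot_[k] = std::polar(1.0, w * static_cast<double>(k));
    }

    // Push one sample and update all N bins.
    void push(double x)
    {
        const double comb = x - delay_[pos_];   // rectangular window (comb filter)
        delay_[pos_] = x;
        pos_ = (pos_ + 1) % N_;
        for (std::size_t k = 0; k < N_; ++k)
            bins_[k] = (bins_[k] + comb) * rot_[k];
    }

    const std::vector<std::complex<double>>& spectrum() const { return bins_; }

private:
    std::size_t N_;
    std::vector<double> delay_;                 // last N input samples
    std::vector<std::complex<double>> bins_;    // running spectrum
    std::vector<std::complex<double>> rot_;     // per-bin rotation e^{j2πk/N}
    std::size_t pos_;
};

Since every bin's update depends only on that bin and the shared comb term, the inner loop maps directly onto per-bin hardware or GPU threads.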

It's "embarrassingly parallel" as an algorithm, so very suited to dedicated massively parallel hardware. I know FPGAs are pretty powerful these days so might well do the job (but some transformations are pretty cpu-intensive too!). The Bath Uni team said they were using a "mid-range" graphic card (on a Linux workstation).

Richard Dobson

On 19/03/2020 17:45, Eric Brombaugh wrote:
Wow - interesting discussion.

I've implemented a real-time SDFT on an FPGA for use in carrier acquisition of communications signals. It was surprisingly easy to do and didn't require particularly massive resources, although FPGAs naturally facilitate a degree of low-level parallelism that you can't easily achieve in CPU-based systems.

Based on this, it might be feasible to build the SPV on a modest FPGA rather than resorting to GPUs or specialized parallel CPU systems. The main stumbling block I see is your use of double-precision floating point. If that level of accuracy is really necessary then a higher-end FPGA would be needed, as most mid-range devices are geared more towards fixed-point or single-precision floating-point arithmetic.

I was a bit confused by the ICMC paper when it came to windowing. The SDFT structure I'm used to seeing (as discussed in the Lyons/Jacobsen article you referenced) involves a rectangular window applied prior to the twiddle calculations using a comb-filter structure. Is that window replaced by your frequency-domain convolutions, or are the cosine-based windows applied in addition to the rectangular one?
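[As background to the question, and without presuming how the SPV code itself does it: in the textbook formulation a Hann window can be applied on top of the rectangular-window SDFT output by circularly convolving the bins with the three-point kernel {-0.25, 0.5, -0.25}, since the Hann window is a sum of three complex exponentials. A hedged sketch, with an illustrative function name:]

#include <complex>
#include <vector>

// Apply a Hann window in the frequency domain: circular convolution of
// the SDFT bins with {-0.25, 0.5, -0.25}.
std::vector<std::complex<double>>
hannInFreqDomain(const std::vector<std::complex<double>>& bins)
{
    const std::size_t N = bins.size();
    std::vector<std::complex<double>> out(N);
    for (std::size_t k = 0; k < N; ++k) {
        const std::complex<double>& left  = bins[(k + N - 1) % N];
        const std::complex<double>& right = bins[(k + 1) % N];
        out[k] = 0.5 * bins[k] - 0.25 * (left + right);
    }
    return out;
}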

Eric

_______________________________________________
dupswapdrop: music-dsp mailing list
music-dsp@music.columbia.edu
https://lists.columbia.edu/mailman/listinfo/music-dsp
