Everyone is so helpful! Thank you.

I guess the answer to my first question is "probably." That is good news.

Gerry

On 1/20/2014 9:51 PM, Ryan Monroe wrote:

I've done a 256k-point biplex FFT in a Virtex-7; you could probably do 128k points in a Virtex-6. By combining the biplex channels, that probably means a 512k-point fully on-chip FFT, with no PFB and a little IP development to make the direct-form part work.
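
(If I'm reading the direct-form stage right, it's essentially the standard radix-2 decimation-in-time combine that stitches the two half-length spectra into one. A small numpy sketch of that math only -- not the CASPER library blocks:)

import numpy as np

def combine_half_ffts(X_even, X_odd):
    # Radix-2 DIT combine: build a 2N-point spectrum from the N-point FFTs
    # of the even- and odd-indexed samples (the final "direct form" stage).
    N = len(X_even)
    tw = np.exp(-2j * np.pi * np.arange(N) / (2 * N))   # twiddle factors
    return np.concatenate([X_even + tw * X_odd, X_even - tw * X_odd])

# tiny check against a full-length FFT
x = np.random.randn(16) + 1j * np.random.randn(16)
assert np.allclose(combine_half_ffts(np.fft.fft(x[0::2]), np.fft.fft(x[1::2])),
                   np.fft.fft(x))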

You can get an idea of how much a PFB will cost from pfb_len * (ntaps - 1) ~= effective_fft_len, which applies for memory utilization (which should be your limiting factor). That estimate breaks down for high-bandwidth applications, and you'll need to do a few things to your FFT to be that efficient. I can talk more if you're interested.
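
(In code form, that rule of thumb is just the following -- pfb_len and ntaps being whatever you plan to build, and the equivalence only approximate:)

def pfb_equivalent_fft_len(pfb_len, ntaps):
    # Rule of thumb above: a pfb_len-point PFB with ntaps taps costs roughly
    # as much memory as an FFT of length pfb_len * (ntaps - 1).
    return pfb_len * (ntaps - 1)

# e.g. a 128k-point, 4-tap PFB is roughly as memory-hungry as a 384k-point FFT
print(pfb_equivalent_fft_len(128 * 1024, 4))   # 393216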

On Jan 20, 2014 9:43 PM, "Jason Manley" <[email protected]> wrote:

    There was a bug preventing FFTs larger than 2^16 points from being
    compiled. I haven't retried this since Andrew's mods, but hopefully
    it is fixed. You will run out of BRAM trying to compile very large
    PFBs. It's easier to use the 2-D approach that Dan describes, if you
    can accommodate the weird spectral/channel artefacts that it
    introduces...
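
    (For reference, that 2-D factorisation is the standard "four-step" FFT:
    a first bank of short FFTs, a twiddle-factor multiply, a corner turn,
    then a second bank of short FFTs. A small numpy sketch of the idea,
    scaled down so it runs quickly -- N1 and N2 stand in for the two 16K
    stages Dan mentions:)

    import numpy as np

    def two_stage_fft(x, N1, N2):
        # N1*N2-point FFT built from two banks of short FFTs, a twiddle
        # multiply and a corner turn (the DRAM transpose in hardware).
        A = x.reshape(N1, N2)                     # A[n1, n2] = x[N2*n1 + n2]
        B = np.fft.fft(A, axis=0)                 # first-stage length-N1 FFTs
        k1 = np.arange(N1)[:, None]
        n2 = np.arange(N2)[None, :]
        B = B * np.exp(-2j * np.pi * k1 * n2 / (N1 * N2))   # twiddle factors
        C = B.T                                   # corner turn
        D = np.fft.fft(C, axis=0)                 # second-stage length-N2 FFTs
        return D.reshape(-1)                      # D[k2, k1] -> X[k1 + N1*k2]

    # tiny check against a direct FFT (16K x 16K in the real thing)
    x = np.random.randn(64) + 1j * np.random.randn(64)
    assert np.allclose(two_stage_fft(x, 8, 8), np.fft.fft(x))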

    I know of at least one spectrometer built as Dan describes, using
    a 256x4096 channel PFB (i.e. 1M channels) over ~1GHz BW and there
    were plenty of FPGA resources left for building larger. This
    design was limited by external memory, which was being used for
    other things too (you'll need a big VACC!). Note also that the
    required readout rates become rather high at these resolutions.

    Jason Manley
    CBF Manager
    SKA-SA

    Cell: +27 82 662 7726
    Work: +27 21 506 7300

    On 21 Jan 2014, at 7:13, Dan Werthimer <[email protected]> wrote:

    >
    >
    > hi gerry,
    >
    >
    > we haven't tried this, but i think the largest spectrometer you
    > could fit on a roach2 is 256M points, implemented by a 16K point FFT,
    > followed by DRAM based corner turn and twiddle factors,
    > followed by another 16K point FFT.
    >
    > if you have this many channels in your correlator,
    > you'll also be running up against the correlator X engine memory limits:
    >
    > for instance, if you cross correlate in a Titan GPU, then you only have
    > 5 or 6 GB of memory on each GPU card.
    >
    > let's assume you have a max of 32 GPUs for your X engine.
    >
    > then max frequency channels =
    >
    > 32 GPUs  x  6 GB/GPU  x  42^2 baselinepols  x  4 B/baseline
    >
    > = 435M channels max for 32 GPUs  (round down to 256M max channels)
    >
    >
    > if you cross correlate in a CPU (eg: DiFX) then you can have more memory,
    > but you'll need a lot more CPUs to keep up with the data rate, so CPUs
    > won't help.
    >
    > be wary of the readout rate too - that's a lot of data to read out:
    >
    > 256M channels  x  42^2 baselinepols  x  4 B  =  1 TB  every integration time
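    >
    > (as a quick back-of-envelope in code, using the same assumed 4-byte
    > visibilities and the full 42^2 baseline-pol matrix as above:)
    >
    > n_chan = 256e6            # channels
    > n_blpol = 42 ** 2         # baseline-pol products, as above
    > bytes_per_vis = 4         # bytes per visibility, as above
    > print(n_chan * n_blpol * bytes_per_vis)   # ~1.8e12 bytes, i.e. terabyte-scale per dump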
    >
    >
    >
    > best wishes,
    >
    > dan
    >
    >
    >
    >
    > On Mon, Jan 20, 2014 at 7:53 PM, Gerry Harp <[email protected]> wrote:
    > Hi
    >
    > Just for fun, how large of an FFT (filter bank) can fit into one
    > of the Roach# boards? Has anyone ever successfully compiled a
    > filter bank with length 2^17? We're interested in building a
    > relatively narrow-band correlator so we need lots of channels. Any
    > experience at large lengths or educated guesses are welcome. Also,
    > how fast did it go? Possible to keep up with 100 MSPS?
    >
    > It is proposal time, once more...
    >
    > Thanks
    >
    > Gerry Harp
    >
    >
    > On 1/17/2014 11:56 AM, Dan Werthimer wrote:
    >
    >
    >
    >
    >                   Dear Casper Collaborators,
    >
    >
    > We hope you can attend this year's Casper Workshop
    >
    >                       in Berkeley, California
    >
    >                 June 9 through June 13, 2014
    >
    >
    >
    >
    > We'll have more information later about registration,
    > travel, abstracts, etc, but for now, please reserve these dates.
    >
    >
    > Hoping you can participate,
    >
    >
    > Dan and the Scientific and Local Organizing Committees
    >
    >
    >
    >
    >
    >
    >
    > --
    > ----------------------
    > Gerald R. Harp, Ph.D.
    > Director, Center for SETI Research
    > SETI Institute
    >
    >
    >



--
----------------------
Gerald R. Harp, Ph.D.
Director, Center for SETI Research
SETI Institute
