On 2022-02-11 13:53, Kuester, Dan (Fed) wrote:

Ah, I see - thanks for the insights, Marcus.

Unfortunately, we’re playing a somewhat precarious game near the edge of the STFT time-bandwidth product limit. On the one hand, we need a long enough FFT to minimize bleeding between channels. On the other, we also need to keep most of the FFT packets in any given ~1ms time window in order to maintain sufficient statistical strength in the downstream processing, which filters out short pulses that may sometimes be present in the band.

As a result, “Keep 1 in N” in the FPGA probably won’t be very useful to us. However, we might be able to get away with buffering a large number of FFT windows into 1 ms blocks and then doing the vector sum on the host. Is there a way to “trigger” bursts like this? I suppose we could just let buffer overflow do the rate limiting for us :-)
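
In case it helps to make the idea concrete, here is roughly the host-side processing I have in mind (a sketch only -- the 80 MS/s composite rate is assumed so each 10 MHz channel maps onto exactly 64 of the 512 bins, and the names are placeholders):

    import numpy as np

    FS = 80e6                 # assumed composite sample rate (8 x 10 MHz)
    NFFT = 512                # FFT length in the FPGA
    N_CH = 8                  # 10 MHz channels
    FRAMES_PER_MS = int(1e-3 * FS / NFFT)   # ~156 mag^2 frames per 1 ms block

    def one_ms_block_to_channel_power(mag2_frames):
        # mag2_frames: (FRAMES_PER_MS, NFFT) array of |X[k]|^2 frames
        # buffered from the FPGA over ~1 ms.
        spectrum = mag2_frames.sum(axis=0)       # vector sum across time
        spectrum = np.fft.fftshift(spectrum)     # order bins by frequency
        return spectrum.reshape(N_CH, NFFT // N_CH).sum(axis=1)  # 8 powers

    # Fake data standing in for one buffered 1 ms burst:
    frames = np.abs(np.fft.fft(
        np.random.randn(FRAMES_PER_MS, NFFT)
        + 1j * np.random.randn(FRAMES_PER_MS, NFFT), axis=1)) ** 2
    print(one_ms_block_to_channel_power(frames))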

Dan

This sounds like you might want to write your own custom RFNoC block to do some of the things you want to do.

You might also look into the PFB channelizer RFNoC work that others on this list have done--they generally have better side-band performance than a straight FFT.
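
To give a feel for why, here is a tiny pure-NumPy sketch of the polyphase / weighted-overlap-add idea (the prototype filter, tap count, and tone placement are arbitrary illustrative choices, not taken from any RFNoC block):

    import numpy as np

    NFFT, NTAPS = 512, 4
    # Prototype low-pass: windowed sinc with cutoff at half a channel width.
    t = np.arange(NFFT * NTAPS) - (NFFT * NTAPS - 1) / 2
    proto = np.sinc(t / NFFT) * np.hamming(NFFT * NTAPS)

    def pfb_frame(x_seg):
        # One critically-sampled analysis frame: weight by the prototype,
        # fold to length NFFT, then FFT (the WOLA form of a polyphase bank).
        return np.fft.fft((x_seg * proto).reshape(NTAPS, NFFT).sum(axis=0))

    # Tone half-way between bins 100 and 101 -- worst case for leakage.
    n = np.arange(NFFT * NTAPS)
    x = np.exp(2j * np.pi * 100.5 / NFFT * n)

    plain = np.fft.fft(x[:NFFT])   # straight FFT of one NFFT-sample window
    pfb = pfb_frame(x)
    for name, X in (("plain FFT", plain), ("PFB", pfb)):
        leak_db = 10 * np.log10(abs(X[300]) ** 2 / abs(X[100]) ** 2)
        print(name, "leakage into a far bin: %.1f dB" % leak_db)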


*From:* Marcus D. Leech <[email protected]>
*Sent:* Friday, February 11, 2022 9:33 AM
*To:* [email protected]
*Subject:* [USRP-users] Re: RFNoC and time vs frequency averaging

On 2022-02-11 10:38, Kuester, Dan (Fed) via USRP-users wrote:

    Hi everyone,

    I’m hoping for some advice on using RFNoC for a spectrum analysis
    application (I have another hardware clocking question that I’m
    going to ask separately).

    Context: we need to continuously stream channel power in a bank of
    8 contiguous 10 MHz bands on short (a few microseconds) time
    scales. To manage the initial deluge of IQ, I’d like to use an
    FPGA to perform a 512-point FFT and then reduce the volume of data
    by summing up mag^2 across frequency to give channel power. The
    resulting stream of 8 channel power readings every few
    microseconds is then pretty manageable for transport and
    processing on the host.
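
    For scale, here is a rough data-rate comparison (the 80 MS/s composite
    rate and the sc16/float32 sample sizes are assumptions, just to put
    numbers on it):

        fs, nfft, n_ch = 80e6, 512, 8
        iq_rate = fs * 4                       # sc16 IQ from the radio: 320 MB/s
        frame_period = nfft / fs               # one 512-point FFT every 6.4 us
        power_rate = n_ch * 4 / frame_period   # float32 channel powers: ~5 MB/s
        print(iq_rate / 1e6, power_rate / 1e6)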

    After looking at the RFNoC block list, the (wishful thinking? :-))
    implementation in my head looks like this:

    (Radio) → Window → FFT (mag^2 output) → Vector IIR to sum across
    frequency bins → Keep 1 in N → (Stream to host)

    Some questions have come up on this:

     1. Does the Vector IIR at the output of an FFT operate across
        time or frequency? For “Keep 1 in N,” there’s a clear flag to
        determine whether the operation is applied by sample or by
        packet, but I don’t see anything that says which of these the
        “Moving Average” or “Vector IIR” blocks operate on.
     2. Are there any obvious fixed-point traps in doing this?
     3. Are there any other pitfalls that I’m missing here?

    Thanks in advance for any ideas!

The vector IIR operates across time, so it cannot be used to reduce the effective frequency resolution of the FFT, as you suggest.

Why not simply use the smallest FFT possible in the FPGA, with mag**2 outputs, then vector IIR that, then keep-one-in-N, then do the resolution reduction on the host at a now much-more-modest rate?

For example, a 64-point FFT with a 100 MHz input rate gives you 1.56e6 FFT outputs/second. IIR that and drop it to perhaps 1/8 of that rate, and even an rPi4 should be able to do the vector sum operation to reduce the effective resolution.
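
Back-of-the-envelope, the rates work out like this (with a trivial reshape-and-sum standing in for the host-side resolution reduction; the bin-to-channel alignment is an assumption, not something any block guarantees):

    import numpy as np

    fs, nfft, keep = 100e6, 64, 8
    frame_rate = fs / nfft            # 1.5625e6 FFT frames/second
    host_rate = frame_rate / keep     # ~195k frames/second after keep-1-in-8
    print(frame_rate, host_rate)

    # Host-side resolution reduction: if the sample rate and FFT size are
    # chosen so that channel edges land on bin boundaries (e.g. 80 MS/s and
    # 64 bins -> 8 bins per 10 MHz channel), it is just a reshape-and-sum:
    def reduce_resolution(mag2_frame, n_ch=8):
        return np.fft.fftshift(mag2_frame).reshape(n_ch, -1).sum(axis=1)

    print(reduce_resolution(np.ones(nfft)))   # sanity check: eight 8.0s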

