Hi Sebastian,

> Do you think that it would suffice to change the packet size at my last
RFNoC block before the host? I will try out the already available
packet_resizer block tomorrow.

Yes, that's probably the easiest solution. But if you're not opposed to
custom HDL, an alternative would be to create a modified FFT block that
simply outputs an integer number of FFTs within a single packet.

> So the question would be if RFNoC can handle passing packets with spp=64
at 200 MSps between RFNoC blocks

That's a good question... RFNoC blocks all share a crossbar, which runs at
a particular bus_clk rate, so there is a max throughput that the bus can
handle... Each sample on the crossbar is 8 bytes, so you get a total
throughput of bus_clk*8 bytes/second. There's also a header overhead of 16
bytes per packet (or 8 bytes if there's no timestamp).
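For a rough sense of what that header cost does to the wire rate, here's a back-of-the-envelope sketch (plain Python, using the 16-byte header figure above and the 4 bytes per sc16 sample from your goodput math):

```python
# Rough wire rate at 200 Msps, sc16 (4 bytes/sample), assuming one
# 16-byte header per packet (figures taken from this thread).
def wire_rate_gbps(spp, rate_sps=200e6, bytes_per_samp=4, hdr_bytes=16):
    """Payload plus per-packet header overhead, in Gbps."""
    pkts_per_sec = rate_sps / spp
    bytes_per_sec = rate_sps * bytes_per_samp + pkts_per_sec * hdr_bytes
    return bytes_per_sec * 8 / 1e9

for spp in (1024, 512, 256, 64):
    print(f"spp={spp:4d}: {wire_rate_gbps(spp):.2f} Gbps")
```

By that arithmetic, header overhead only takes the 6.40 Gbps of goodput up to roughly 6.8 Gbps at spp=64, so overhead alone wouldn't explain a drop to 0.70 Gbps; that looks more like a packets-per-second limit than raw bandwidth.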

I'm actually not sure what the current X310 bus_clk rate is set to... I
just noticed a recent commit that supposedly changes bus_clk to 187.5 MHz
(https://github.com/EttusResearch/fpga/commit/d08203f60d3460a170ad8b3550b478113b7c5968).
So I'm not exactly clear what the bus_clk was set to before that, or on
the rfnoc-devel branch...

But unless I'm misunderstanding, having multiple RFNoC blocks running at
the full 200 Msps might saturate the bus? Is that correct?
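To put numbers on that question (a sketch only, using the figures above: 8 bytes per sample on the crossbar, and assuming the 187.5 MHz bus_clk from that commit applies):

```python
# Crossbar capacity vs. one full-rate stream, per this thread's figures.
BUS_CLK_HZ = 187.5e6   # from the linked commit; may differ on rfnoc-devel
BYTES_PER_SAMPLE = 8   # per-sample cost on the crossbar, per the note above
SAMPLE_RATE = 200e6    # one radio chain at 200 Msps

bus_capacity_bps = BUS_CLK_HZ * BYTES_PER_SAMPLE * 8   # 12.0 Gbps
one_stream_bps = SAMPLE_RATE * BYTES_PER_SAMPLE * 8    # 12.8 Gbps

print(f"bus capacity: {bus_capacity_bps / 1e9:.1f} Gbps")
print(f"one stream:   {one_stream_bps / 1e9:.1f} Gbps")
```

Under those assumptions even a single 200 Msps stream would exceed the bus, so the answer really hinges on the actual bus_clk and on how many bytes a sample occupies on the crossbar.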

EJ

On Thu, Mar 22, 2018 at 3:33 PM, Sebastian Leutner via USRP-users <
usrp-users@lists.ettus.com> wrote:

>>>>> Hi all,
>>>>>
>>>>> when working with RFNoC at 200 MSps on the X310 using 10GbE I
>>>>> experience overruns when using less than 512 samples per packet (spp).
>>>>> A simple flow graph [RFNoC Radio] -> [RFNoC FIFO] -> [Null sink] with
>>>>> the spp stream arg set at the RFNoC Radio block shows the following
>>>>> network utilization:
>>>>>
>>>>>  spp | throughput [Gbps]
>>>>> ------------------------
>>>>> 1024 | 6.49
>>>>>  512 | 6.58
>>>>>  256 | 3.60
>>>>>   64 | 0.70
>>>>>
>>>>> Although I understand that the total load will increase a little bit
>>>>> for smaller packets due to increased overhead (headers) as seen from
>>>>> spp=1024 to spp=512, I find it confusing that so many packets are
>>>>> dropped for spp <= 256.
>>>>>
>>>>> Total goodput should be 200 MSps * 4 byte per sample (sc16) = 800 MBps
>>>>> = 6.40 Gbps.
>>>>>
>>>>> Is RFNoC somehow limited to a certain number of packets per second
>>>>> (regardless of their size)?
>>>>> Could this be resolved by increasing the STR_SINK_FIFOSIZE noc_shell
>>>>> parameter of any blocks connected to the RFNoC Radio?
>>>>>
>>>>> I would like to use spp=64 because that is the size of the RFNoC FFT I
>>>>> want to use. I am using UHD 4.0.0.rfnoc-devel-409-gec9138eb.
>>>>>
>>>>> Any help or ideas appreciated!
>>>>>
>>>>> Best,
>>>>> Sebastian
>>>>>
>>>>>
>>>> This is almost certainly an interrupt-rate issue having to do with your
>>>> ethernet controller, and nothing to do with RFNoC, per se.
>>>>
>>>> If you're on Linux, try:
>>>>
>>>> ethtool --coalesce <device-name-here> adaptive-rx on
>>>> ethtool --coalesce <device-name-here> adaptive-tx on
>>>>
>>>
>>> Thanks Marcus for your quick response. Unfortunately, that did not help.
>>> Also, `ethtool -c enp1s0f0` still reports "Adaptive RX: off TX: off"
>>> afterwards. I also tried changing `rx-usecs` which reported correctly but
>>> did not help either. I am using Intel 82599ES 10-Gigabit SFI/SFP+
>>> controller with the driver ixgbe (version: 5.1.0-k) on Ubuntu 16.04.
>>>
>>> Do you know anything else I could try?
>>>
>>> Thanks,
>>> Sebastian
>>>
>> The basic problem is that in order to achieve good performance at
>> very-high sample-rates, jumbo frames are required, and using a very
>> small SPP implies very small frames, which necessarily leads to poor
>> ethernet performance.
>>
>> Do you actually need the FFT results to appear at the host at
>> "real-time" rates, or can you do an integrate-and-dump within RFNoC, to
>> reduce host-side traffic?
>>
>
> Yes, I need all the samples. Since it will be a full receiver
> implementation in RFNoC the output to the host will be much less than 6.40
> Gbps but still a decent amount and definitely more than the 0.7 Gbps I was
> able to achieve with spp=64.
>
> Do you think that it would suffice to change the packet size at my last
> RFNoC block before the host? I will try out the already available
> packet_resizer block tomorrow.
>
> So the question would be if RFNoC can handle passing packets with spp=64
> at 200 MSps between RFNoC blocks. If this is likely to be a problem, I
> could try wrapping all my HDL code into one RFNoC block and handle the
> packet resizing at input and output of this block. However, I would like to
> avoid this step if possible.
>
> Thanks for your help!
>
>
> _______________________________________________
> USRP-users mailing list
> USRP-users@lists.ettus.com
> http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com
>