Thanks Michael,
This info was very helpful.

Regarding "recv_buff_size", I tried setting to 100M and received a warning
that it could not do so because rmem_max was only 33M.  Given that my
rmem_max was set all along to 33M, would the recv_buff_size default to 33M
or does it default to something lower such that I still need to set this
device arg?
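
For reference, a minimal sketch of raising the kernel caps so that a 100M
request can actually take effect (values are in bytes):

  sudo sysctl -w net.core.rmem_max=104857600   # ~100 MiB cap for SO_RCVBUF
  sudo sysctl -w net.core.wmem_max=104857600   # same cap on the send side
  # to persist across reboots, add to /etc/sysctl.conf:
  #   net.core.rmem_max=104857600
  #   net.core.wmem_max=104857600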

Regarding cpufrequtils, I have done everything I can find to get the CPUs
to stay at 3.5GHz.  On Ubuntu 14.04 this worked well.  I have also tried to
disable the intel_pstate driver with the appropriate grub setting, but on
Ubuntu 18.04 I have not been successful at keeping the CPU freqs maxed out.
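
For reference, the sort of sequence involved (a sketch, assuming the 8
logical cores of the E5-1620 v4; the C-state note at the end is optional):

  # in /etc/default/grub, append to GRUB_CMDLINE_LINUX_DEFAULT:
  #   intel_pstate=disable
  sudo update-grub && sudo reboot
  cpufreq-info | grep driver            # should now report acpi-cpufreq
  for c in $(seq 0 7); do sudo cpufreq-set -c $c -g performance; done
  grep MHz /proc/cpuinfo                # spot-check that all cores sit near 3.5 GHz
  # if idle cores still drop, limiting C-states (e.g. intel_idle.max_cstate=1
  # on the kernel command line) is sometimes suggested as well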

Finally, regarding DPDK, this seems like the way to go, but with the limited
documentation available it is difficult to get it properly configured.
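
As a sanity check before retrying the vfio-pci bind (a sketch; the interface
name and PCI address are taken from the dpdk-devbind output quoted below, and
VT-d must also be enabled in the BIOS):

  cat /proc/cmdline                    # confirm intel_iommu=on iommu=pt took effect
  dmesg | grep -i -e DMAR -e IOMMU     # look for "DMAR: IOMMU enabled"
  ls /sys/kernel/iommu_groups          # should not be empty once the IOMMU is active
  sudo modprobe vfio-pci
  sudo ip link set ens4f0 down         # take the port down before rebinding
  sudo /usr/share/dpdk/usertools/dpdk-devbind.py --bind=vfio-pci 0000:01:00.0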

Rob


On Mon, Sep 9, 2019 at 5:43 PM Michael West <michael.w...@ettus.com> wrote:

> Hi Rob,
>
> I would recommend not using the DMA FIFO block.  Although the DMA FIFO
> block should work, setting a larger socket buffer on the host or using DPDK
> are much better options.  To use a larger socket buffer, just use the
> device argument "recv_buff_size=<size>" and set the <size> to something
> reasonably large.
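>
> For example, with the stock rx_samples_to_file example (the address below is
> just a placeholder; the size is in bytes):
>
>   rx_samples_to_file --args "addr=192.168.40.2,recv_buff_size=104857600" ...
>
> The same key can go in the device_addr_t passed to multi_usrp::make.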
>
> As far as the Ds go, there is flow control between the device and host, but
> drops are still possible between the NIC and system memory if the host is
> not releasing descriptors to the NIC fast enough.  For some network cards,
> this can be seen by looking at the "rx_missed_errors" value in the output of
> 'ethtool -S <interface>'.  Increasing the number of RX descriptors helps,
> but only up to a point.  Use 'sudo ethtool -G <interface> rx 4096' to set
> the descriptors to the maximum value.
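>
> A quick check/adjust sequence (the interface name ens4f0 is an assumption;
> substitute yours):
>
>   ethtool -g ens4f0                            # show current vs. maximum RX ring size
>   sudo ethtool -G ens4f0 rx 4096
>   ethtool -S ens4f0 | grep rx_missed_errors    # should stop climbing during a run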
>
> For the cpufreq utils, you may have to set the governor on each core (i.e.,
> cpufreq-set -g performance -c <core>).  Also, if you have the intel_pstate
> driver, it may still vary the CPU frequency even with the performance
> governor.
>
> Regards,
> Michael
>
> On Mon, Sep 9, 2019 at 1:41 PM Rob Kossler via USRP-users <
> usrp-users@lists.ettus.com> wrote:
>
>> Hi Nate,
>> I looked at the link you sent (performance tuning tips) and your email.
>> Here are a few comments / questions:
>>
>>    - Regarding my initial question, what could be the cause of WORSE
>>    performance when I inserted the DmaFIFO into the receive chain of my RFNoC
>>    graph?  Recall that "Radio->DDC->host" produces no errors, but
>>    "Radio->DDC->DmaFIFO->host" produces errors (timeouts).
>>    - Regarding "cpufrequtils" (from the performance tuning tips), I have
>>    run the suggestions on my 18.04 Ubuntu system (Xeon E5-1620v4 3.5GHz,
>>    4-core/8-thread), but when I run cpufreq-info, there is often 1 or more
>>    CPUs that show up at 1.6 GHz or so (while the majority report ~3.6 GHz).
>>    It is not clear to me whether this utility is doing its job or not.
>>    - Regarding DPDK, I have tried to install it, but have had no
>>    success.  The instructions say that after updating grub with "iommu=pt
>>    intel_iommu=on hugepages=2048", then "After you reboot, you should see
>>    /sys/kernel/iommu_groups populated".  I do have such a folder, but it is
>>    empty, so I'm not sure whether this step was successful or not.
>>    Furthermore, I am unable to run the dpdk-devbind python script to bind the
>>    vfio-pci driver to my Intel X520-DA2 NIC (see error messages below).
>>    - Regarding XFS vs EXT4, this is something I haven't tried yet but
>>    plan to; I am completely unfamiliar with XFS.  My SSD is actually 4
>>    Samsung EVO 850 SATA SSDs in a software RAID-0 (using mdadm).  If I copy a
>>    huge file from my RAM disk to the SSD, I am able to verify transfer rates
>>    greater than 1 GB/s (I believe closer to 1.5 GB/s); see the disk-test
>>    sketch after this list.
>>    - Finally, regarding "D" (sequence errors), what is the possible
>>    cause?  These are the most frustrating errors because their cause is a
>>    mystery to me.  I fully expect that when my host PC is too slow to keep up
>>    with the torrent of data coming from the USRP, the flow should eventually
>>    backpressure all the way to the Radio, which will then generate overflows
>>    because it has no place to send the A/D data.  So, if I were only seeing
>>    "O", it would make sense to me.  But the "D" makes no sense to me in my
>>    point-to-point direct connection between host and USRP.  Do you know of
>>    any root cause for "D"?
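>>
>> A minimal sketch of the disk test and of trying XFS on the array (the md
>> device name, RAM disk path, and mount point are assumptions, and mkfs
>> destroys the array's current contents):
>>
>>   sudo umount /mnt/capture     # if the array is currently mounted there
>>   sudo mkfs.xfs -f /dev/md0    # reformat the mdadm RAID-0 array as XFS
>>   sudo mount /dev/md0 /mnt/capture
>>   dd if=/ramdisk/test.dat of=/mnt/capture/test.dat bs=1M oflag=direct status=progress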
>>
>> Thanks.
>> Rob
>>
>> *DPDK error messages during dpdk-devbind.py*
>> irisheyes0@irisheyes0-HP-Z440-Workstation:~$
>> /usr/share/dpdk/usertools/dpdk-devbind.py --status
>>
>> Network devices using DPDK-compatible driver
>> ============================================
>> <none>
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:00:19.0 'Ethernet Connection (2) I218-LM 15a0' if=eno1 drv=e1000e
>> unused= *Active*
>> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=ens4f0 drv=ixgbe unused=
>> 0000:01:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=ens4f1 drv=ixgbe unused=
>> 0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=ens2f0 drv=ixgbe unused=
>> 0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=ens2f1 drv=ixgbe unused=
>>
>> Other Network devices
>> =====================
>> <none>
>>
>> Crypto devices using DPDK-compatible driver
>> ===========================================
>> <none>
>>
>> Crypto devices using kernel driver
>> ==================================
>> <none>
>>
>> Other Crypto devices
>> ====================
>> <none>
>>
>> Eventdev devices using DPDK-compatible driver
>> =============================================
>> <none>
>>
>> Eventdev devices using kernel driver
>> ====================================
>> <none>
>>
>> Other Eventdev devices
>> ======================
>> <none>
>>
>> Mempool devices using DPDK-compatible driver
>> ============================================
>> <none>
>>
>> Mempool devices using kernel driver
>> ===================================
>> <none>
>>
>> Other Mempool devices
>> =====================
>> <none>
>> irisheyes0@irisheyes0-HP-Z440-Workstation:~$ sudo
>> /usr/share/dpdk/usertools/dpdk-devbind.py --bind=vfio-pci 01:00.0
>> [sudo] password for irisheyes0:
>> Error - no supported modules(DPDK driver) are loaded
>> irisheyes0@irisheyes0-HP-Z440-Workstation:~$
>> /usr/share/dpdk/usertools/dpdk-devbind.py --status
>>
>> Network devices using DPDK-compatible driver
>> ============================================
>> <none>
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:00:19.0 'Ethernet Connection (2) I218-LM 15a0' if=eno1 drv=e1000e
>> unused= *Active*
>> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=ens4f0 drv=ixgbe unused=
>> 0000:01:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=ens4f1 drv=ixgbe unused=
>> 0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=ens2f0 drv=ixgbe unused=
>> 0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=ens2f1 drv=ixgbe unused=
>>
>> Other Network devices
>> =====================
>> <none>
>>
>> Crypto devices using DPDK-compatible driver
>> ===========================================
>> <none>
>>
>> Crypto devices using kernel driver
>> ==================================
>> <none>
>>
>> Other Crypto devices
>> ====================
>> <none>
>>
>> Eventdev devices using DPDK-compatible driver
>> =============================================
>> <none>
>>
>> Eventdev devices using kernel driver
>> ====================================
>> <none>
>>
>> Other Eventdev devices
>> ======================
>> <none>
>>
>> Mempool devices using DPDK-compatible driver
>> ============================================
>> <none>
>>
>> Mempool devices using kernel driver
>> ===================================
>> <none>
>>
>> Other Mempool devices
>> =====================
>> <none>
>> irisheyes0@irisheyes0-HP-Z440-Workstation:~$ sudo modprobe vfio-pci
>> irisheyes0@irisheyes0-HP-Z440-Workstation:~$
>> /usr/share/dpdk/usertools/dpdk-devbind.py --status
>>
>> Network devices using DPDK-compatible driver
>> ============================================
>> <none>
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:00:19.0 'Ethernet Connection (2) I218-LM 15a0' if=eno1 drv=e1000e
>> unused=vfio-pci *Active*
>> 0000:01:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=ens4f0 drv=ixgbe unused=vfio-pci
>> 0000:01:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=ens4f1 drv=ixgbe unused=vfio-pci
>> 0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=ens2f0 drv=ixgbe unused=vfio-pci
>> 0000:04:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=ens2f1 drv=ixgbe unused=vfio-pci
>>
>> Other Network devices
>> =====================
>> <none>
>>
>> Crypto devices using DPDK-compatible driver
>> ===========================================
>> <none>
>>
>> Crypto devices using kernel driver
>> ==================================
>> <none>
>>
>> Other Crypto devices
>> ====================
>> <none>
>>
>> Eventdev devices using DPDK-compatible driver
>> =============================================
>> <none>
>>
>> Eventdev devices using kernel driver
>> ====================================
>> <none>
>>
>> Other Eventdev devices
>> ======================
>> <none>
>>
>> Mempool devices using DPDK-compatible driver
>> ============================================
>> <none>
>>
>> Mempool devices using kernel driver
>> ===================================
>> <none>
>>
>> Other Mempool devices
>> =====================
>> <none>
>> irisheyes0@irisheyes0-HP-Z440-Workstation:~$ sudo
>> /usr/share/dpdk/usertools/dpdk-devbind.py --bind=vfio-pci 01:00.0
>> Error: bind failed for 0000:01:00.0 - Cannot bind to driver vfio-pci
>> Error: unbind failed for 0000:01:00.0 - Cannot open /sys/bus/pci/drivers
>> //unbind
>> irisheyes0@irisheyes0-HP-Z440-Workstation:~$
>>
>>
>>
>> On Fri, Sep 6, 2019 at 6:02 PM Rob Kossler <rkoss...@nd.edu> wrote:
>>
>>> Hi Nate,
>>> I'm using UHD 3.14.0.1.  I am not using DPDK.
>>>
>>> Regarding the tuning, I think I was not clear in my email.  I have no
>>> trouble streaming to RAM disk using the standard Radio->DDC->host graph.  I
>>> mentioned that I was running 2x50MS/s, but I can go up to 2x200MS/s with
>>> success.  My issue is that after adding the DmaFIFO to the Rx chain, I got
>>> timeouts (i.e., I suppose that the flow stopped for some reason) when
>>> running the graph Radio->DDC->DmaFIFO->host, even at 2x50MS/s.
>>>
>>> So, my question is: why is this happening?  What is wrong with my plan
>>> to insert the DmaFIFO in the Rx chain?  What would possibly cause the
>>> streaming to terminate such that my recv() loop times out (even with a 5s
>>> timeout)?
>>>
>>> Rob
>>>
>>>
>>>
>>> On Fri, Sep 6, 2019 at 12:56 PM Ettus Research Support <
>>> supp...@ettus.com> wrote:
>>>
>>>> Hi Rob,
>>>>
>>>> What version of UHD are you using?
>>>>
>>>> 2x RX 50 MS/s streams should work without much issue with a fast enough
>>>> host, especially to a ram disk.
>>>>
>>>> Are you using DPDK? DPDK support for X3xx was recently added to UHD and
>>>> will reduce the overhead on the host side, which can help quite a bit. In
>>>> some anecdotal testing I did with an N310 using the native UHD driver,
>>>> streaming 2 channels full duplex, the minimum CPU frequency at which I
>>>> could run without any flow control errors was 3.8 GHz. Using DPDK, I was
>>>> able to run 2x2 @ 125 MS/s with my CPU cores locked at 1.5 GHz with no
>>>> flow control errors. Using DPDK, it's also possible to stream 2x2 @ 200
>>>> MS/s on the X3xx with an SRAM FPGA image (it's not possible to TX at full
>>>> rate using the native driver and the DRAM-based FPGA).
>>>>
>>>> You could try the few things listed here
>>>> https://kb.ettus.com/USRP_Host_Performance_Tuning_Tips_and_Tricks
>>>>
>>>> One other bit to add: I've been able to stream 1 RX channel @ 200 MS/s
>>>> straight to disk using an Intel 750 Series PCIe SSD until it was full (circa
>>>> UHD 3.10.x). To do that, I had to use an sc16 host-side data format and also
>>>> use an XFS file system instead of EXT4. The host was an i7-4790K @ 4.4 GHz. I
>>>> would recommend NVMe SSDs now, as they support faster rates than that
>>>> PCIe SSD.
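>>>>
>>>> For reference, a capture along those lines with the packaged
>>>> rx_samples_to_file example might look roughly like the following (address,
>>>> frequency, sample count, and file path are placeholders; "--type short"
>>>> selects the sc16 host-side format):
>>>>
>>>>   rx_samples_to_file --args "addr=192.168.40.2" --rate 200e6 --freq 1e9 \
>>>>       --type short --nsamps 2000000000 --file /mnt/xfs/capture.sc16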
>>>>
>>>>
>>>> Regards,
>>>> Nate Temple
>>>>
>>>> On Fri, Sep 6, 2019 at 8:37 AM Rob Kossler via USRP-users <
>>>> usrp-users@lists.ettus.com> wrote:
>>>>
>>>>> Hi,
>>>>> As part of an effort to improve my ability to store incoming
>>>>> receive-chain samples to files on my SSD without errors ('O' or 'D'), I
>>>>> decided to wire an X310 RFNoC graph to include the DmaFIFO. My thought was
>>>>> that the DmaFIFO could better tolerate varying rates of sample consumption
>>>>> by the OS.
>>>>>
>>>>> Before trying this by streaming to a file on my SSD, I first ran a
>>>>> test that streamed to a RAM-based file (60 GB RAM filesystem).  I used an
>>>>> X310/UBX160 with the default XG FPGA image and initiated a 2-channel
>>>>> receive at 50MS/s (using my C++ app & UHD).  To my surprise, I got
>>>>> frequent "timeouts" on receive, but not always at the same time.  In one
>>>>> case, the receive worked successfully for 28 secs (2 ch, 50 MS/s).  In
>>>>> other cases, it timed out immediately or after several seconds.  Note that
>>>>> I can reliably run this same test without error if I remove the DmaFIFO.
>>>>>
>>>>> The following works fine:
>>>>>   RxRadio -> DDC -> host file (in RAM file system)
>>>>>
>>>>> The following times out at random times:
>>>>>   RxRadio -> DDC -> DmaFIFO -> host file (in RAM file system)
>>>>>
>>>>> What could be the cause?  Is there any reason the DmaFIFO shouldn't
>>>>> work in the receive chain?
>>>>>
>>>>> Rob
_______________________________________________
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com
