Hello,

I have a custom C++ app that takes in an input stream, modifies it, and outputs 
the modified version.  I am trying to minimize the latency.  To achieve a fixed 
latency, I set the start times on the RX and TX streamers (tx_time = rx_time + 
latency).  I then reduce the latency until underflows start to appear.
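For reference, the timed-start scheme above looks roughly like this.  This is only a sketch against UHD's multi_usrp API; the 0.1 s arming offset and the variable names are illustrative, not from my actual code:

```cpp
#include <uhd/usrp/multi_usrp.hpp>

void start_fixed_latency(uhd::usrp::multi_usrp::sptr usrp,
                         uhd::rx_streamer::sptr rx_stream,
                         uhd::tx_streamer::sptr tx_stream,
                         double latency_s)
{
    // Pick an RX start time comfortably in the future (offset is illustrative).
    const uhd::time_spec_t rx_time =
        usrp->get_time_now() + uhd::time_spec_t(0.1);

    // Arm RX to start at rx_time rather than immediately.
    uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
    cmd.stream_now = false;
    cmd.time_spec  = rx_time;
    rx_stream->issue_stream_cmd(cmd);

    // Timestamp the first TX burst at rx_time + latency; subsequent send()
    // calls just follow on with has_time_spec = false.
    uhd::tx_metadata_t md;
    md.start_of_burst = true;
    md.has_time_spec  = true;
    md.time_spec      = rx_time + uhd::time_spec_t(latency_s);
    // tx_stream->send(buffs, nsamps_per_buff, md, timeout) goes here ...
}
```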

The system is an i9-13900K running at 5.4 GHz.  I am using DPDK with all 
relevant kernel parameters (iommu=pt intel_iommu=on hugepages=1024 
isolcpus=1-7 nohz_full=1-7 rcu_nocbs=1-7 intel_idle.max_cstate=0).  DPDK gets 
two dedicated cores.  I also dedicate one core to pulling data from the 
multi_usrp, another to processing, and another to pushing data back to the 
multi_usrp.


I can consistently get no underflows operating at 100 Msps with 1ms latency.  
However, going below 1ms causes underflows (a few at 500 us, more at 100 us).

Are there any ideas I have not considered?

Thanks.



________________________

Eugene Grayver, Ph.D.
Aerospace Corp., Principal Engineer
Tel: 310.336.1274
________________________
_______________________________________________
USRP-users mailing list -- [email protected]