Brian,

I've seen this using UHD-3.14 and UHD-3.15.LTS.

I have performed some follow-on testing that raises more questions,
particularly about the usage of end_of_burst and start_of_burst.  I talk
through my tests and observations below; the questions that these generated
are at the end ...

I thought it would be interesting to modify benchmark_rate.cpp to place a
timestamp on each buffer that was sent out and see if I could observe the
same behavior.  Beyond this posting --
http://lists.ettus.com/pipermail/usrp-users_lists.ettus.com/2016-November/050555.html
-- and a note about start_of_burst resetting the CORDICs, I haven't seen a
thorough explanation of what exactly the start_of_burst and end_of_burst
metadata fields do at a low level (I'd appreciate being pointed in the
right direction if I've missed it, thank you!), so I wanted to test their
effect on timing when has_time_spec is true and the SOB and EOB fields are
set to true or false.  To make observations easy, I set my test up in the
following way (I hope the pseudocode makes sense) and watched for the LO on
a spectrum analyzer.  Per the code below, I would expect a burst every 2
seconds if the time_spec were followed ...

======
max_samps_per_packet = 50e5;  // 5e6 samples = 100 ms at 50 MSPS
start_of_burst = <true,false>;
end_of_burst   = <true,false>;
has_time_spec  = true;
while (not burst_timer_elapsed)
{
    tx_stream->send();
    start_of_burst = <true,false>;
    end_of_burst   = <true,false>;
    time_spec      = time_spec + 2.0;
}
======
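
For concreteness, here is a stripped-down, standalone version of that test
loop (a sketch, not the exact code I ran; the device address, buffer
contents, and loop termination are placeholders):

======
#include <uhd/usrp/multi_usrp.hpp>

#include <complex>
#include <vector>

int main()
{
    // Placeholder device address -- substitute the X310's actual address.
    auto usrp = uhd::usrp::multi_usrp::make(std::string("addr=192.168.40.2"));
    usrp->set_tx_rate(50e6);
    usrp->set_time_now(uhd::time_spec_t(0.0));

    uhd::stream_args_t stream_args("fc32", "sc16");
    uhd::tx_streamer::sptr tx_stream = usrp->get_tx_stream(stream_args);

    // 100 ms of samples at 50 MSPS; UHD splits this one send() call into
    // get_max_num_samps()-sized packets internally.
    std::vector<std::complex<float>> buff(5000000,
                                          std::complex<float>(0.3f, 0.0f));

    uhd::tx_metadata_t md;
    md.start_of_burst = true;  // <true,false> depending on the test case
    md.end_of_burst   = true;  // <true,false> depending on the test case
    md.has_time_spec  = true;
    md.time_spec      = uhd::time_spec_t(2.0);  // first burst 2 s from now

    for (int i = 0; i < 10; i++) {  // stand-in for "not burst_timer_elapsed"
        tx_stream->send(&buff.front(), buff.size(), md, 1.0);
        md.start_of_burst = true;  // <true,false> depending on the test case
        md.end_of_burst   = true;  // <true,false> depending on the test case
        md.time_spec      = md.time_spec + 2.0;
    }
    return 0;
}
======

Only the two *_of_burst flags change between the test cases described
below; everything else stays the same.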

My observations were as follows: if end_of_burst for the prior burst was
set to true, my code adhered to the time_spec.  The value of start_of_burst
had no effect on whether or not the expected timing was followed.  If
end_of_burst was set to false, the time_spec for the following burst was
ignored and the packet was transmitted as soon as possible.

I then followed this up with another test: I replaced
      time_spec = time_spec + 2.0;
with the equivalent of
      time_spec = time_spec + 0.100;
and set both end_of_burst and start_of_burst to true.

I figured that since I can run continuously by setting has_time_spec to
'false' after the first burst, and since I can easily keep the FIFO buffer
fed, doing this should not be a problem ... but instead I'm presented with
a stream of lates and no actual transmission.

I understand that 100 ms is not an integer multiple of the packet size
returned by get_max_num_samps(), so I also tried an integer multiple of the
packet size with an appropriately updated time_spec.  That also resulted in
lates throughout the entire transmission.
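
For reference, by "lates" I mean the TIME_ERROR events coming back on the
async message path.  The sketch below (along the lines of what
benchmark_rate's async helper thread does -- the function name and counters
are just placeholders) is roughly how they can be tallied:

======
#include <uhd/stream.hpp>
#include <uhd/types/metadata.hpp>

#include <atomic>
#include <iostream>

// Poll the TX streamer's async message queue and tally late packets.
void poll_tx_async(uhd::tx_streamer::sptr tx_stream, std::atomic<bool>& stop)
{
    uhd::async_metadata_t async_md;
    size_t num_late = 0, num_underflow = 0;

    while (!stop.load()) {
        if (!tx_stream->recv_async_msg(async_md, 0.1))
            continue;  // timed out; nothing to report

        switch (async_md.event_code) {
        case uhd::async_metadata_t::EVENT_CODE_TIME_ERROR:
            num_late++;       // packet arrived after its time_spec ("L")
            break;
        case uhd::async_metadata_t::EVENT_CODE_UNDERFLOW:
        case uhd::async_metadata_t::EVENT_CODE_UNDERFLOW_IN_PACKET:
            num_underflow++;  // FIFO ran dry mid-stream ("U")
            break;
        default:
            break;
        }
    }
    std::cout << "late: " << num_late
              << "  underflow: " << num_underflow << std::endl;
}
======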

So .... here are my additional questions:

1) Is the only effect of "start_of_burst = true" to cause the CORDICs to
   reset?
2) What is end_of_burst doing that enables a following time_spec to be
   honored?
3) What additional work is being performed when I set end_of_burst and
   has_time_spec to 'true' such that I get lates throughout the entire
   attempted transmission?

Best Regards,
Doug






On Tue, Mar 9, 2021 at 11:51 PM Brian Padalino <bpadal...@gmail.com> wrote:

> On Tue, Mar 9, 2021 at 10:03 PM Doug Blackburn via USRP-users <
> usrp-users@lists.ettus.com> wrote:
>
>> Hello --
>>
>> I've got some questions re: latency with the x300 over the 10GigE
>> interface.
>>
>> If I use the latency_test example operating at a rate of 50 MSPS, I have
>> no issues with a latency of 1ms.  The latency test receives data, examines
>> the time stamp, and transmits a single packet.
>>
>> I have an application where I'd like to run the transmitter continuously,
>> and I got curious about the latency involved in that operation.  My
>> application is similar to the benchmark_rate example.  I added the
>> following lines to the benchmark_rate example at line 256 after the line.
>>
>> md.has_time_spec = false;
>>
>> ====
>> if ( (num_tx_samps % 50000000) < 4*max_samps_per_packet )
>> {
>>     uhd::time_spec_t expectedTime = startTime + (double) ( num_tx_samps  )
>>                       / (double)usrp->get_tx_rate();
>>     uhd::time_spec_t timeAtLog = usrp->get_time_now();
>>     timeAtLog = usrp->get_time_now();
>>     std::cerr << "==== Actual time ====" << std::endl;
>>     std::cerr << "     " << timeAtLog.get_full_secs() << " / "
>>                           << timeAtLog.get_frac_secs() << std::endl;
>>     std::cerr << "==== Expected time ====" << std::endl;
>>     std::cerr << "     " << expectedTime.get_full_secs() << " / "
>>                           << expectedTime.get_frac_secs() << std::endl;
>> }
>> ====
>>
>> The intent of this insertion is to log the time at which we return from
>> tx_stream->send() and the time at which the first sample of that sent data
>> should be transmitted -- at approximately once per second when running at
>> 50 MSPS.
>>
>> After the first second, I consistently saw the following results:
>>
>> ==== Actual time ====
>>      1 / 0.10517
>> ==== Expected time ====
>>      1 / 0.27253
>>
>> ==== Actual time ====
>>      1 / 0.105419
>> ==== Expected time ====
>>      1 / 0.27255
>>
>> Which indicates to me that there is a latency of approximately 167ms when
>> transmitting data.  That is, the send() function is returning 167ms earlier
>> than I expect the data to actually be transmitted.   If I halve the sample
>> rate to 25 MSPS, the latency doubles.
>>
>> What is the source of the latency when running in a continuous mode?
>> Initially, I had thought that this might be related to the
>> send_buffer_size, but making changes to send_buffer_size seem to not have
>> an effect.   FWIW, 167ms at 50 MSPS is suspiciously close to the value for
>> wmem_max (33554432) suggested in the x300 system configuration ... but
>> neither changing that value or send_buffer_size seems to make a difference.
>>
>> Is this latency tunable?
>>
>
> I suspect it's the DMA FIFO which uses the DRAM in the X310 as a buffer.
> The default buffer size is 32MB.
>
> Which version of UHD are you using?
>
> Brian
>