On Wed, Mar 10, 2021 at 12:39 PM Doug Blackburn <[email protected]> wrote:

> Brian,
>
> I've seen this using UHD-3.14 and UHD-3.15.LTS.
>

The DMA FIFO block default size is set here in the source code for
UHD-3.15.LTS:


https://github.com/EttusResearch/uhd/blob/UHD-3.15.LTS/host/lib/rfnoc/dma_fifo_block_ctrl_impl.cpp#L25

And the interface in the header file provides a way to resize it:


https://github.com/EttusResearch/uhd/blob/UHD-3.15.LTS/host/include/uhd/rfnoc/dma_fifo_block_ctrl.hpp#L33

I'd probably resize it before sending any data to it.

That should help with your latency question, I think.
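
Something like the following is the shape of what I have in mind -- an
untested sketch, assuming the block shows up as "0/DmaFIFO_0".  Double-check
the resize() argument order against the header linked above, and keep the
per-channel base addresses from overlapping:

======
#include <cstdint>
#include <string>

#include <uhd/device3.hpp>
#include <uhd/rfnoc/block_id.hpp>
#include <uhd/rfnoc/dma_fifo_block_ctrl.hpp>
#include <uhd/usrp/multi_usrp.hpp>

int main()
{
    // Device args are a placeholder -- fill in your own.
    auto usrp = uhd::usrp::multi_usrp::make(std::string(""));
    auto dev3 = usrp->get_device3();

    // Look up the DMA FIFO block and shrink each channel's FIFO before
    // sending it any samples.  The depth and base addresses below are
    // made-up examples.
    auto fifo = dev3->get_block_ctrl<uhd::rfnoc::dma_fifo_block_ctrl>(
        uhd::rfnoc::block_id_t(0, "DmaFIFO", 0));

    const uint32_t depth = 1 << 20;    // 1 MiB per channel instead of the default
    fifo->resize(0 * depth, depth, 0); // base addr (bytes), depth (bytes), channel
    fifo->resize(1 * depth, depth, 1);

    return 0;
}
======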


>
> I have performed some follow-on testing that raises more questions,
> particularly about the usage of end_of_burst and start_of_burst.  I talk
> through my tests and observations below; the questions that these generated
> are at the end ...
>
> I thought it would be interesting to modify benchmark_rate.cpp to attempt
> to place a timestamp on each buffer that was sent out to see if I could
> observe the same behavior.  I haven't seen thorough explanations of what
> exactly the start_of_burst and end_of_burst metadata fields do at a low
> level beyond this posting --
> http://lists.ettus.com/pipermail/usrp-users_lists.ettus.com/2016-November/050555.html
> and a note about start_of_burst resetting the CORDICs (I'd appreciate being
> pointed in the right direction if I've missed it, thank you!) --  so I
> wanted to test the effect on timing when has_time_spec is true and the SOB
> and EOB fields are either false or true.  I initially set my test up in the
> following way (I hope the pseudocode makes sense) to make observations
> easy.  I watched for the LO on a spectrum analyzer.  Per the code below, I
> would expect a burst every 2 seconds if the time_spec was followed ...
>
> ======
> max_samps_per_packet = 50e5; // 100ms at 50 MSPS
> start_of_burst = <true,false>
> end_of_burst = <true,false>
> has_time_spec = true;
> while( not burst_timer_elapsed)
> {
>     tx_stream->send();
>     start_of_burst = <true,false>
>     end_of_burst = <true, false>
>     time_spec = time_spec + 2.0;
>  }
>

A few things.  I'd expect a burst every 2 seconds if you set sob = true and
eob = true once, outside the loop, never change them, and only update the
time_spec for each send.  Does that not work for you?
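
In rough C++, the pattern I mean looks like this (just a sketch -- the
device args, rates, and buffer contents are placeholders):

======
#include <complex>
#include <string>
#include <vector>

#include <uhd/types/metadata.hpp>
#include <uhd/usrp/multi_usrp.hpp>

int main()
{
    // Device args, rate/freq setup, etc. omitted -- placeholders only.
    auto usrp      = uhd::usrp::multi_usrp::make(std::string(""));
    auto tx_stream = usrp->get_tx_stream(uhd::stream_args_t("fc32", "sc16"));

    const size_t samps_per_burst = 5000000; // 100 ms at 50 MSPS
    std::vector<std::complex<float>> buff(samps_per_burst);

    uhd::tx_metadata_t md;
    md.start_of_burst = true; // set once, outside the loop ...
    md.end_of_burst   = true; // ... and never touched again
    md.has_time_spec  = true;
    md.time_spec      = usrp->get_time_now() + uhd::time_spec_t(1.0);

    while (true /* not burst_timer_elapsed */) {
        tx_stream->send(&buff.front(), buff.size(), md, 4.0);
        md.time_spec = md.time_spec + uhd::time_spec_t(2.0); // only the time changes
    }
    return 0;
}
======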

Next, the sizing of packets can be really important here.  The way the DUC
works is a little unintuitive: it creates N output packets from every input
packet.  As a result, if your burst leaves a stray 1-sample packet at the
end, the DUC will replicate that tiny packet N times, which is very
inefficient to process.
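
For example, I'd size bursts as a whole number of maximum-size packets,
something like this hypothetical helper (not from benchmark_rate):

======
#include <cstddef>

#include <uhd/stream.hpp>

// Round a requested burst length down to a whole number of full packets so
// the last packet into the DUC isn't a tiny remainder.
static size_t round_to_full_packets(uhd::tx_streamer::sptr tx_stream, size_t requested)
{
    const size_t spp = tx_stream->get_max_num_samps(); // samples per packet
    return (requested / spp) * spp;
}
======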

Furthermore, when I tried doing this I ran into weird edge cases with the
DMA FIFO where the send() call would block indefinitely.  My workaround was
to manually zero-stuff and keep the transmit FIFO constantly fed, not using
any eob flags at all.  My system actually used a software FIFO for bursts
that wanted to go out, and I had a software thread in a tight loop that
checked whether the FIFO had anything in it.  If it didn't, it zero-stuffed
a small number of transmit samples (1 packet, I believe).  If it did, it
sent the burst.  You may want to do something similar even with a
synchronized system and counting outgoing samples.
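
Heavily simplified, the worker was shaped roughly like this (the queue
type, locking, and timeouts here are placeholders, not my actual code):

======
#include <complex>
#include <cstddef>
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

#include <uhd/stream.hpp>
#include <uhd/types/metadata.hpp>

// Transmit worker: drains a software FIFO of bursts, and zero-stuffs one
// packet's worth of samples whenever the FIFO is empty so the transmit
// chain never starves.  eob is never set.
void tx_worker(uhd::tx_streamer::sptr tx_stream,
               std::queue<std::vector<std::complex<float>>>& burst_fifo,
               std::mutex& fifo_mutex)
{
    const size_t spp = tx_stream->get_max_num_samps();
    std::vector<std::complex<float>> zeros(spp); // one packet of zeros

    uhd::tx_metadata_t md;
    md.start_of_burst = false;
    md.end_of_burst   = false; // never terminate the burst
    md.has_time_spec  = false; // free-running after the stream starts

    while (true) {
        std::vector<std::complex<float>> burst;
        {
            std::lock_guard<std::mutex> lock(fifo_mutex);
            if (!burst_fifo.empty()) {
                burst = std::move(burst_fifo.front());
                burst_fifo.pop();
            }
        }
        if (burst.empty()) {
            // Nothing pending: keep the transmit FIFO fed with zeros.
            tx_stream->send(&zeros.front(), zeros.size(), md, 0.1);
        } else {
            tx_stream->send(&burst.front(), burst.size(), md, 1.0);
        }
    }
}
======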


>
> My observations were as follows: if end_of_burst for the prior burst was
> set to true, my code adhered to the time_spec.  The value of start_of_burst
> had no effect on whether or not the expected timing was followed.  If
> end_of_burst was set to false, the time_spec for the following burst is
> ignored and the packet is transmitted as soon as possible.
>
> I then followed this up with another test -- I replaced
>       time_spec = time_spec + 2.0;
> with the equivalent of
>       time_spec = time_spec + 0.100;
>
> And set end_of_burst and start_of_burst to true.
>
> I figured that if I can run this continuously by setting has_time_spec to
> 'false' after the first burst and easily push data into the FIFO buffer,
> then doing this should not be a problem ... but I'm presented with a stream
> of lates and no actual transmission.
>
> I understand that 100ms is not an integer multiple of the packet size
> returned by get_max_num_samps() -- so I tried an integer multiple of the
> packet size, too, with an appropriately updated time_spec.  This also
> resulted in lates through the entire transmit.
>
> So .... here are my additional questions:
>
> Is the only effect of "start_of_burst = true" to cause the CORDICs to
> reset?
> What is end_of_burst doing to enable a following time_spec to be used?
> What additional work is being performed when I set end_of_burst and
> has_time_spec to 'true' such that I get lates throughout the entire
> attempted transmission?
>

I don't know the answer to these questions.  Try the suggestions above and
see if they help you out or not.

Good luck!

Brian

>
