Richard,

I believe what you're looking for is the depth of the command queue. For
the X310, this FIFO has a depth of 16. Any command issued to the X310 with
a command time set is held in this FIFO until the radio's time reaches the
command time, at which point the command is executed.
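
If it helps, a timed receive looks roughly like the sketch below. The
device args ("type=x300"), sample count, timeout, and the 100 ms offset are
placeholders I picked for illustration, not values from your setup:

#include <uhd/usrp/multi_usrp.hpp>
#include <complex>
#include <vector>

int main()
{
    auto usrp      = uhd::usrp::multi_usrp::make(uhd::device_addr_t("type=x300"));
    auto rx_stream = usrp->get_rx_stream(uhd::stream_args_t("fc32"));

    uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_NUM_SAMPS_AND_DONE);
    cmd.num_samps  = 4096;
    cmd.stream_now = false; // setting a command time makes this a timed command
    cmd.time_spec  = usrp->get_time_now() + uhd::time_spec_t(0.1);

    // The command waits in the radio's command FIFO until the device time
    // reaches cmd.time_spec, at which point the samples are produced.
    rx_stream->issue_stream_cmd(cmd);

    std::vector<std::complex<float>> buff(cmd.num_samps);
    uhd::rx_metadata_t md;
    rx_stream->recv(buff.data(), buff.size(), md, 0.5); // timeout covers the 100 ms offset
    return 0;
}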

You should definitely keep track of the state of the command queue:
overflowing this FIFO will put the radio in a bad state, usually requiring
a restart.
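
Your two-thread semaphore scheme is a reasonable way to do that
bookkeeping. One variation, sketched below, is to size a counting
semaphore to the FIFO depth so that no more than 16 commands are ever
outstanding. This is just an adaptation of the design you described (it
assumes C++20 for std::counting_semaphore and uses made-up device args,
sample counts, and timing), not anything built into UHD:

#include <uhd/usrp/multi_usrp.hpp>
#include <complex>
#include <semaphore> // C++20
#include <thread>
#include <vector>

// Allow at most 16 outstanding timed commands (the X310 command FIFO depth).
static std::counting_semaphore<16> g_slots(16);

void scheduler(uhd::usrp::multi_usrp::sptr usrp, uhd::rx_streamer::sptr rx_stream)
{
    // Schedule all receives relative to one base time in the near future.
    const uhd::time_spec_t t0 = usrp->get_time_now() + uhd::time_spec_t(0.5);

    for (int i = 0; i < 100; ++i) {
        g_slots.acquire(); // block if 16 commands are already queued on the radio

        uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_NUM_SAMPS_AND_DONE);
        cmd.num_samps  = 4096;
        cmd.stream_now = false;
        cmd.time_spec  = t0 + uhd::time_spec_t(0.01 * i);
        rx_stream->issue_stream_cmd(cmd);
    }
}

void receiver(uhd::rx_streamer::sptr rx_stream)
{
    std::vector<std::complex<float>> buff(4096);
    uhd::rx_metadata_t md;
    for (int i = 0; i < 100; ++i) {
        rx_stream->recv(buff.data(), buff.size(), md, 2.0);
        g_slots.release(); // conservative: this command has certainly executed by now
    }
}

int main()
{
    auto usrp      = uhd::usrp::multi_usrp::make(uhd::device_addr_t("type=x300"));
    auto rx_stream = usrp->get_rx_stream(uhd::stream_args_t("fc32"));

    std::thread rx(receiver, rx_stream);
    scheduler(usrp, rx_stream);
    rx.join();
    return 0;
}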

https://files.ettus.com/manual/classuhd_1_1usrp_1_1multi__usrp.html#a191b78b00d051d3d51c2f719361c1fb5

https://files.ettus.com/manual/classuhd_1_1time__spec__t.html

Sam Reiter

On Tue, Jan 14, 2020 at 2:01 PM Richard Joseph Muri via USRP-users <usrp-users@lists.ettus.com> wrote:

> Hello,
>
>
> I'm using an X310 with a number of scheduled receives. I suspect there is
> a FIFO on the USRP that holds the stream_cmd_t until it is time to collect
> the requested samples. I have not been able to find documentation about the
> size of this FIFO. Could anybody point me in the proper direction?
>
>
> Do I need to keep track of the number of stream_cmds in the FIFO? Is there
> some kind of acknowledgment I can use to know the stream_cmd has left the
> FIFO? At the moment I am running a C++ application with two threads, one
> to issue_stream_cmd() and one to recv(). I loop each operation, posting a
> semaphore after each recv() and waiting on a semaphore before every
> issue_stream_cmd().
>
>
> I found this thread about using set_start_time():
>
>
> http://lists.ettus.com/pipermail/usrp-users_lists.ettus.com/2016-July/049022.html
>
>
> Are these commands placed on the same FIFO that issue_stream_cmd() uses?
>
>
> Thank you!
> Richard Muri
>
>
_______________________________________________
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com
