Re: [USRP-users] X310 with dual TwinRX set up

2021-03-10 Thread Marcus D. Leech via USRP-users

On 03/10/2021 06:15 PM, Rob Kossler via USRP-users wrote:

Hi Oliver,
I don't have any example code to provide (and I don't use gnuradio), 
but I can address a couple of things:
- the first step is to get all four channels recognized (as you 
indicated); perhaps using subdev spec "A:0 A:1 B:0 B:1"
- synchronizing in time is definitely possible. From gnuradio, I 
thought that it was the default for multi-channel operation.  You 
might have to look up a set_start_time or similar command; check the 
uhd gnuradio documentation for the usrp source.
- four channels at 100 MS/s is also achievable.  To use dual 10GbE, 
you need to specify the "second_addr" device arg.

Rob

Indeed, sample-level time synchronization among channels happens 
automatically for multi_usrp streams.


The suggested sub-dev string should work just fine as well.

In the UHD: USRP Source block, under "RF Options", you can set the 
tuned frequency for each channel.


Specify a "Num Channels" of 4 and a "num_mboards" of 1.


___
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com


Re: [USRP-users] X310 with dual TwinRX set up

2021-03-10 Thread Rob Kossler via USRP-users
Hi Oliver,
I don't have any example code to provide (and I don't use gnuradio), but I
can address a couple of things:
- the first step is to get all four channels recognized (as you indicated);
perhaps using subdev spec "A:0 A:1 B:0 B:1"
- synchronizing in time is definitely possible. From gnuradio, I thought
that it was the default for multi-channel operation.  You might have to
look up a set_start_time or similar command; check the uhd gnuradio
documentation for the usrp source.
- four channels at 100 MS/s is also achievable.  To use dual 10GbE, you
need to specify the "second_addr" device arg.
Rob
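Rob's checklist can be sketched with the UHD Python API. This is a rough outline, not something tested against hardware: the IP addresses, sample rate, and GNSS centre frequencies below are placeholder examples, and configure_4ch assumes uhd.usrp.MultiUSRP from the UHD Python bindings.

```python
# Sketch of Rob's suggestions for a 4-channel dual-TwinRX X310.
# Addresses and frequencies are placeholders - substitute your own.

def make_device_args(addr, second_addr):
    """Device-args string enabling both 10 GbE ports (dual interfaces)."""
    return "type=x300,addr=%s,second_addr=%s" % (addr, second_addr)

# Subdev spec exposing both channels of the TwinRX in slot A and slot B.
SUBDEV_SPEC = "A:0 A:1 B:0 B:1"

def configure_4ch(addr="192.168.10.2", second_addr="192.168.20.2",
                  rate=100e6,
                  freqs=(1575.42e6, 1602.0e6, 1227.6e6, 1176.45e6)):
    """Open the X310 and tune each channel independently (needs hardware)."""
    import uhd  # deferred so the helpers above are usable without a device
    usrp = uhd.usrp.MultiUSRP(make_device_args(addr, second_addr))
    usrp.set_rx_subdev_spec(uhd.usrp.SubdevSpec(SUBDEV_SPEC))
    for chan, freq in enumerate(freqs):
        usrp.set_rx_rate(rate, chan)
        usrp.set_rx_freq(uhd.types.TuneRequest(freq), chan)
    return usrp
```

In GNU Radio the equivalent is the device-address and subdev-spec fields of the UHD: USRP Source block, with "Num Channels" set to 4.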

On Wed, Mar 10, 2021 at 1:10 PM Oliver Towlson via USRP-users <
usrp-users@lists.ettus.com> wrote:

> Hi
>
>
>
> I am trying to set up an X310 with 2 TwinRX boards such that:
>
>
>
> - each RF channel may be tuned to a different GNSS L-band frequency
>
> - all four RF channels may be synchronised in time
>
> - data streaming on all four channels at 100 MS/s (we are using dual 10G
> Ethernet for this)
>
>
>
> I’m pretty much a beginner when it comes to USRPs. I am using GNU radio to
> configure the USRP but so far it only recognizes two input channels. We
> found the code posted here -
> http://ettus.80997.x6.nabble.com/USRP-users-Example-code-for-a-pair-of-TwinRXs-td2673.html
> - useful but on closer inspection all four channels were set to the same
> frequency and it looks to be doing something different to what we want (it
> looks like it was written specifically to synchronise four channels
> receiving the same signal so that you can calibrate the internal phase
> offset of the USRP)
>
>
>
> Does anyone have any example code they might be willing to share, if only
> to get us started, to get our desired set-up?
>
>
>
> Thanks
>
>
>
> Oliver T
>
> Please consider the environment before printing this e-mail.


[USRP-users] RFNoC 4 metadata

2021-03-10 Thread Rob Kossler via USRP-users
Hi,
I just modified one of my RFNoC blocks to use metadata for the first time.
The following remarks identify issues I encountered along the way and some
suggestions that would make things a bit easier.

   - You can't use axis_data (with sideband signals) if you want access to
   metadata. While I realize that Ettus has indicated that metadata is an
   "advanced" use case and that the axis_data interface is a "simple"
   interface model, it still seems like it wouldn't be that difficult to
   expand the axis_data model to accommodate some metadata capability
   - Along the same lines, any block that uses axis_data will discard any
   metadata from an upstream block.  This is probably the bigger issue.  For
   example, if the rx radio were to insert a metadata word, it would be
   discarded by the DDC since the DDC uses the axis_data model
   - There is no structure to the metadata.  I fully understand that this
   is by intent.  However, I did start to wonder if some skeleton structure
   would make sense. For example, maybe some bits of the metadata should be
   designated for the NOC_ID of the block that inserts it.  Or, instead, maybe
   some bits could hold a unique WORD_ID that identifies the type of metadata
   word.  Ettus could reserve some IDs for itself to allow for future use of
   metadata by Ettus blocks.
   - It would be nice if it was easier for a block to just "insert" a
   metadata word.  With my own limited FPGA skills, I just decided to ignore
   any upstream metadata words and create 1 metadata word that gets sent to
   downstream blocks.  But, that's not very friendly to the upstream blocks.
   If it were easier to do, I would have preferred to just add my block's
   metadata word to the incoming metadata words.
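The "skeleton structure" suggested in the third bullet could be as simple as a fixed bit layout. The field widths below are made up purely for illustration - nothing here is an Ettus convention:

```python
# Hypothetical layout for a 64-bit metadata word: 32 bits of NOC_ID
# identifying the inserting block, 8 bits of WORD_ID naming the payload
# type, and 24 bits of payload. Field widths are illustrative only.

NOC_ID_SHIFT = 32
WORD_ID_SHIFT = 24
NOC_ID_MASK = 0xFFFFFFFF
WORD_ID_MASK = 0xFF
PAYLOAD_MASK = 0xFFFFFF

def pack_meta(noc_id, word_id, payload):
    """Pack the three fields into one 64-bit metadata word."""
    return (((noc_id & NOC_ID_MASK) << NOC_ID_SHIFT)
            | ((word_id & WORD_ID_MASK) << WORD_ID_SHIFT)
            | (payload & PAYLOAD_MASK))

def unpack_meta(word):
    """Split a packed word back into (noc_id, word_id, payload)."""
    return ((word >> NOC_ID_SHIFT) & NOC_ID_MASK,
            (word >> WORD_ID_SHIFT) & WORD_ID_MASK,
            word & PAYLOAD_MASK)
```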


Re: [USRP-users] x300 latency over 10GigE

2021-03-10 Thread Doug Blackburn via USRP-users
A quick update ...

I added
#include <uhd/rfnoc/dma_fifo_block_ctrl.hpp>

to my includes and the following code to UHD_SAFE_MAIN:

=
uhd::device3::sptr usrp3 = usrp->get_device3();
uhd::rfnoc::dma_fifo_block_ctrl::sptr dmafifo_block_ctrl =
    usrp3->get_block_ctrl<uhd::rfnoc::dma_fifo_block_ctrl>(
        uhd::rfnoc::block_id_t(0, "DmaFIFO"));

int fifoSize = 4*1024*1024;
int numChannels = usrp->get_tx_num_channels();
for (int chanIdx = 0; chanIdx < numChannels; chanIdx++) {
    // uint32_t currDepth = dmafifo_block_ctrl->get_depth(0);
    // uint32_t currBaseAddr = dmafifo_block_ctrl->get_base_addr(0);
    // std::cerr << "DMA chan " << chanIdx << ": base / depth : "
    //           << currBaseAddr << " / " << currDepth << std::endl;
    std::cerr << "Attempting to resize channel " << chanIdx << std::endl;
    dmafifo_block_ctrl->resize(chanIdx * fifoSize, /* base address */
                               fifoSize,           /* depth */
                               chanIdx);
}
=

I started with 16MB, then 8MB, etc ...

At 4MB, latency is 1/8 of what I see at 32MB as expected ... about 21.33
ms.  I'm sure I'll need to tune this a little more once I apply it to my
application, but I can now control it.
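The scaling follows directly from the FIFO depth: at sc16 (4 bytes per sample) a full DMA FIFO holds depth/4 samples, which drain at the TX rate. A small sanity check of that arithmetic (the 4 bytes/sample is an assumption about the wire format):

```python
def fifo_latency_ms(depth_bytes, rate_sps, bytes_per_samp=4):
    """Worst-case buffering latency of a full DMA FIFO, in milliseconds."""
    return 1e3 * depth_bytes / bytes_per_samp / rate_sps

# 32 MiB at 50 MSPS is ~167.8 ms of buffering; 4 MiB is 1/8 of that,
# ~21 ms - consistent with the measurements reported in this thread.
```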

I greatly appreciate the help, Brian!

Best,
Doug


On Wed, Mar 10, 2021 at 2:46 PM Doug Blackburn  wrote:

> Brian --
>
> Thanks so much!   I sprinkled my comments in below :
>
> On Wed, Mar 10, 2021 at 1:42 PM Brian Padalino 
> wrote:
>
>> On Wed, Mar 10, 2021 at 12:39 PM Doug Blackburn 
>> wrote:
>>
>>> Brian,
>>>
>>> I've seen this using UHD-3.14 and UHD-3.15.LTS.
>>>
>>
>> The DMA FIFO block default size is set here in the source code for
>> UHD-3.15.LTS:
>>
>>
>> https://github.com/EttusResearch/uhd/blob/UHD-3.15.LTS/host/lib/rfnoc/dma_fifo_block_ctrl_impl.cpp#L25
>>
>> And the interface in the header file provides a way to resize it:
>>
>>
>> https://github.com/EttusResearch/uhd/blob/UHD-3.15.LTS/host/include/uhd/rfnoc/dma_fifo_block_ctrl.hpp#L33
>>
>> I'd probably resize it before sending any data to it.
>>
>> That should help with your latency question I think.
>>
>
> This is super helpful.  I'll give it a shot and see what happens!
>
>
>>
>>
>>>
>>> I have performed some follow-on testing that raises more questions,
>>> particularly about the usage of end_of_burst and start_of_burst.  I talk
>>> through my tests and observations below; the questions that these generated
>>> are at the end ...
>>>
>>> I thought it would be interesting to modify benchmark_rate.cpp to
>>> attempt to place a timestamp on each buffer that was sent out to see if I
>>> could observe the same behavior.  I haven't seen thorough explanations of
>>> what exactly the start_of_burst and end_of_burst metadata fields do at a
>>> low level beyond this posting --
>>> http://lists.ettus.com/pipermail/usrp-users_lists.ettus.com/2016-November/050555.html
>>> and a note about start_of_burst resetting the CORDICs (I'd appreciate being
>>> pointed in the right direction if I've missed it, thank you!) --  so I
>>> wanted to test the effect on timing when has_time_spec is true and the SOB
>>> and EOB fields are either false or true.  I initially set my test up in the
>>> following way (I hope the pseudocode makes sense) to make observations
>>> easy.  I watched for the LO on a spectrum analyzer.  Per the code below, I
>>> would expect a burst every 2 seconds if the time_spec was followed ...
>>>
>>> ==
>>> max_samps_per_packet = 50e5; // 100ms at 50 MSPS
>>> start_of_burst = <true|false>;
>>> end_of_burst = <true|false>;
>>> has_time_spec = true;
>>> while( not burst_timer_elapsed)
>>> {
>>> tx_stream->send();
>>> start_of_burst = <true|false>;
>>> end_of_burst = <true|false>;
>>> time_spec = time_spec + 2.0;
>>>  }
>>>
>>
>> A few things.  I'd expect a burst every 2 seconds if you set sob = true,
>> eob = true outside the loop, and never change it and only change the
>> time_spec for every send.  Does that not work for you?
>>
>>
> Yes -- that does work, too.  I tried all the different combinations ... So
> for example, if sob/eob were true/true outside the loop and false/false
> inside the loop, I'd see a two second pause after the first burst and then
> we'd roll through the rest of them contiguously.
>
>
>> Next, The sizing of packets can be really important here.  The way the
>> DUC works is a little unintuitive.  The DUC works by creating N packets
>> from 1 input packet.  To this end, if you have an extra 1 sample, it will
>> repeat that small 1 sample packet N times - very processing inefficient.
>>
>> Furthermore, when I tried doing this I would run into weird edge cases
>> with the DMA FIFO where the send() call would block indefinitely.  My
>> workaround was to manually zero stuff and keep the transmit FIFO constantly
>> going - not using any eob flags at all.  My system would actually use a
>> software FIFO for bursts that wanted to go out, and I had a software thread
>> in a tight loop that would check if the FIFO had anything in it.  If it
>> didn't, it would zero stuff some small amount of transmit samples (1 packet

Re: [USRP-users] X300/X310: how to control an external TX/RX switch for 60GHz mm-wave transceiver?

2021-03-10 Thread Marcus D Leech via USRP-users
You’ll need to familiarize yourself with this

https://files.ettus.com/manual/page_gpio_api.html

You should be able to tie a GPIO pin to the ATR state machine in the FPGA to 
drive a GPIO output signal. 
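A sketch of what that looks like through multi_usrp's set_gpio_attr() (see the GPIO API page above). The pin number, bank name "FP0", and switch polarity are assumptions - check them against your wiring:

```python
SWITCH_PIN = 0        # hypothetical front-panel GPIO pin wired to the switch
MASK = 1 << SWITCH_PIN

def setup_atr_switch(usrp, bank="FP0"):
    """Make the pin an ATR-controlled output: high while transmitting."""
    usrp.set_gpio_attr(bank, "DDR", MASK, MASK)     # direction: output
    usrp.set_gpio_attr(bank, "CTRL", MASK, MASK)    # ATR, not manual, control
    usrp.set_gpio_attr(bank, "ATR_0X", 0, MASK)     # idle: RX position
    usrp.set_gpio_attr(bank, "ATR_RX", 0, MASK)     # RX only: RX position
    usrp.set_gpio_attr(bank, "ATR_TX", MASK, MASK)  # TX only: TX position
    usrp.set_gpio_attr(bank, "ATR_XX", MASK, MASK)  # full duplex: TX position
```

Once configured, the FPGA toggles the pin automatically as the radio enters and leaves the transmit state; no per-burst host intervention is needed.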



Sent from my iPhone

> On Mar 10, 2021, at 2:23 PM, SungWon Chung via USRP-users 
>  wrote:
> 
> 
> Hello,
> 
> I'm working to use X300/X310 as a front-end of a custom built 60GHz mm-wave 
> transceiver, which needs a digital signal for its TX/RX switch to share a 
> horn antenna.
> 
> What do you think is the best solution?  
> 
> Any methods are welcome as long as it's a robust solution. Your thoughts will 
> be much appreciated.
> 
> Thanks,
> sungwon


Re: [USRP-users] x300 latency over 10GigE

2021-03-10 Thread Doug Blackburn via USRP-users
Brian --

Thanks so much!   I sprinkled my comments in below :

On Wed, Mar 10, 2021 at 1:42 PM Brian Padalino  wrote:

> On Wed, Mar 10, 2021 at 12:39 PM Doug Blackburn  wrote:
>
>> Brian,
>>
>> I've seen this using UHD-3.14 and UHD-3.15.LTS.
>>
>
> The DMA FIFO block default size is set here in the source code for
> UHD-3.15.LTS:
>
>
> https://github.com/EttusResearch/uhd/blob/UHD-3.15.LTS/host/lib/rfnoc/dma_fifo_block_ctrl_impl.cpp#L25
>
> And the interface in the header file provides a way to resize it:
>
>
> https://github.com/EttusResearch/uhd/blob/UHD-3.15.LTS/host/include/uhd/rfnoc/dma_fifo_block_ctrl.hpp#L33
>
> I'd probably resize it before sending any data to it.
>
> That should help with your latency question I think.
>

This is super helpful.  I'll give it a shot and see what happens!


>
>
>>
>> I have performed some follow-on testing that raises more questions,
>> particularly about the usage of end_of_burst and start_of_burst.  I talk
>> through my tests and observations below; the questions that these generated
>> are at the end ...
>>
>> I thought it would be interesting to modify benchmark_rate.cpp to attempt
>> to place a timestamp on each buffer that was sent out to see if I could
>> observe the same behavior.  I haven't seen thorough explanations of what
>> exactly the start_of_burst and end_of_burst metadata fields do at a low
>> level beyond this posting --
>> http://lists.ettus.com/pipermail/usrp-users_lists.ettus.com/2016-November/050555.html
>> and a note about start_of_burst resetting the CORDICs (I'd appreciate being
>> pointed in the right direction if I've missed it, thank you!) --  so I
>> wanted to test the effect on timing when has_time_spec is true and the SOB
>> and EOB fields are either false or true.  I initially set my test up in the
>> following way (I hope the pseudocode makes sense) to make observations
>> easy.  I watched for the LO on a spectrum analyzer.  Per the code below, I
>> would expect a burst every 2 seconds if the time_spec was followed ...
>>
>> ==
>> max_samps_per_packet = 50e5; // 100ms at 50 MSPS
>> start_of_burst = <true|false>;
>> end_of_burst = <true|false>;
>> has_time_spec = true;
>> while( not burst_timer_elapsed)
>> {
>> tx_stream->send();
>> start_of_burst = <true|false>;
>> end_of_burst = <true|false>;
>> time_spec = time_spec + 2.0;
>>  }
>>
>
> A few things.  I'd expect a burst every 2 seconds if you set sob = true,
> eob = true outside the loop, and never change it and only change the
> time_spec for every send.  Does that not work for you?
>
>
Yes -- that does work, too.  I tried all the different combinations ... So
for example, if sob/eob were true/true outside the loop and false/false
inside the loop, I'd see a two second pause after the first burst and then
we'd roll through the rest of them contiguously.


> Next, The sizing of packets can be really important here.  The way the DUC
> works is a little unintuitive.  The DUC works by creating N packets from 1
> input packet.  To this end, if you have an extra 1 sample, it will repeat
> that small 1 sample packet N times - very processing inefficient.
>
> Furthermore, when I tried doing this I would run into weird edge cases
> with the DMA FIFO where the send() call would block indefinitely.  My
> workaround was to manually zero stuff and keep the transmit FIFO constantly
> going - not using any eob flags at all.  My system would actually use a
> software FIFO for bursts that wanted to go out, and I had a software thread
> in a tight loop that would check if the FIFO had anything in it.  If it
> didn't, it would zero stuff some small amount of transmit samples (1 packet
> I believe).  If it did, it would send the burst.  You may want to do
> something similar even with a synchronized system and counting outgoing
> samples.
>

:) This is what led me here; the application I was working on essentially
did that.  I'd have some data I'd want to send at a specific time.  I'd
translate that time to a number of buffers past the start of my transmit
(with some extra bookkeeping and buffer magic to align samples, etc), and
found that I could only get this to work if my requested transmit time was
at least 167 ms in the future.   This didn't quite reconcile with the 1ms
of latency I could demonstrate with 'latency_test'  -- which uses a single
packet -- hence my trip down the rabbit hole.  If I can lower that number a
little by modifying the FIFO block, I think I'll be happy, but ...


>
>
>>
>> My observations were as follows: if end_of_burst for the prior burst was
>> set to true, my code adhered to the time_spec.  The value of start_of_burst
>> had no effect on whether or not the expected timing was followed.  If
>> end_of_burst was set to false, the time_spec for the following burst is
>> ignored and the packet is transmitted as soon as possible.
>>
>> I then followed this up with another test -- I replaced
>>   time_spec = time_spec + 2.0;
>> with the equivalent of
>>   time_spec = time_spec + 0.100;
>>
>> And set end_of_burst and start_of_burst to true.

[USRP-users] X300/X310: how to control an external TX/RX switch for 60GHz mm-wave transceiver?

2021-03-10 Thread SungWon Chung via USRP-users
Hello,

I'm working to use X300/X310 as a front-end of a custom built 60GHz mm-wave
transceiver, which needs a digital signal for its TX/RX switch to share a
horn antenna.

What do you think is the best solution?

Any methods are welcome as long as it's a robust solution. Your thoughts
will be much appreciated.

Thanks,
sungwon


Re: [USRP-users] x300 latency over 10GigE

2021-03-10 Thread Brian Padalino via USRP-users
On Wed, Mar 10, 2021 at 12:39 PM Doug Blackburn  wrote:

> Brian,
>
> I've seen this using UHD-3.14 and UHD-3.15.LTS.
>

The DMA FIFO block default size is set here in the source code for
UHD-3.15.LTS:


https://github.com/EttusResearch/uhd/blob/UHD-3.15.LTS/host/lib/rfnoc/dma_fifo_block_ctrl_impl.cpp#L25

And the interface in the header file provides a way to resize it:


https://github.com/EttusResearch/uhd/blob/UHD-3.15.LTS/host/include/uhd/rfnoc/dma_fifo_block_ctrl.hpp#L33

I'd probably resize it before sending any data to it.

That should help with your latency question I think.


>
> I have performed some follow-on testing that raises more questions,
> particularly about the usage of end_of_burst and start_of_burst.  I talk
> through my tests and observations below; the questions that these generated
> are at the end ...
>
> I thought it would be interesting to modify benchmark_rate.cpp to attempt
> to place a timestamp on each buffer that was sent out to see if I could
> observe the same behavior.  I haven't seen thorough explanations of what
> exactly the start_of_burst and end_of_burst metadata fields do at a low
> level beyond this posting --
> http://lists.ettus.com/pipermail/usrp-users_lists.ettus.com/2016-November/050555.html
> and a note about start_of_burst resetting the CORDICs (I'd appreciate being
> pointed in the right direction if I've missed it, thank you!) --  so I
> wanted to test the effect on timing when has_time_spec is true and the SOB
> and EOB fields are either false or true.  I initially set my test up in the
> following way (I hope the pseudocode makes sense) to make observations
> easy.  I watched for the LO on a spectrum analyzer.  Per the code below, I
> would expect a burst every 2 seconds if the time_spec was followed ...
>
> ==
> max_samps_per_packet = 50e5; // 100ms at 50 MSPS
> start_of_burst = <true|false>;
> end_of_burst = <true|false>;
> has_time_spec = true;
> while( not burst_timer_elapsed)
> {
> tx_stream->send();
> start_of_burst = <true|false>;
> end_of_burst = <true|false>;
> time_spec = time_spec + 2.0;
>  }
>

A few things.  I'd expect a burst every 2 seconds if you set sob = true,
eob = true outside the loop, and never change it and only change the
time_spec for every send.  Does that not work for you?

Next, the sizing of packets can be really important here.  The way the DUC
works is a little unintuitive: it creates N packets from 1 input packet.
To this end, if you have an extra sample, it will repeat that small
1-sample packet N times - very inefficient to process.

Furthermore, when I tried doing this I would run into weird edge cases with
the DMA FIFO where the send() call would block indefinitely.  My workaround
was to manually zero stuff and keep the transmit FIFO constantly going -
not using any eob flags at all.  My system would actually use a software
FIFO for bursts that wanted to go out, and I had a software thread in a
tight loop that would check if the FIFO had anything in it.  If it didn't,
it would zero stuff some small amount of transmit samples (1 packet I
believe).  If it did, it would send the burst.  You may want to do
something similar even with a synchronized system and counting outgoing
samples.
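That zero-stuffing loop can be sketched as below. Here send() stands in for uhd's tx_streamer.send(), the packet size is illustrative, and the software FIFO is a plain deque - a simulation of the scheme, not a hardware-tested implementation:

```python
# Sketch of the zero-stuffing workaround: keep the transmit chain fed
# continuously (no eob flags) by sending a small all-zeros packet
# whenever the software FIFO has no burst waiting.

from collections import deque

def tx_loop_step(fifo, send, spp=2000):
    """One iteration of the tight loop: send a burst if queued, else zeros."""
    if fifo:
        send(fifo.popleft())      # real burst samples
    else:
        send([0.0] * spp)         # zero-stuff one packet to avoid underrun
```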


>
> My observations were as follows: if end_of_burst for the prior burst was
> set to true, my code adhered to the time_spec.  The value of start_of_burst
> had no effect on whether or not the expected timing was followed.  If
> end_of_burst was set to false, the time_spec for the following burst is
> ignored and the packet is transmitted as soon as possible.
>
> I then followed this up with another test -- I replaced
>   time_spec = time_spec + 2.0;
> with the equivalent of
>   time_spec = time_spec + 0.100;
>
> And set end_of_burst and start_of_burst to true.
>
> I figured if I can run this continuously by setting has_time_spec to
> 'false' after the first burst and easily push data into the FIFO buffer,
> that doing this should not be a problem ... but I'm presented with a stream
> of lates and no actual transmission.
>
> I understand that 100ms is not an integer multiple of packet size returned
> by get_max_num_samps() -- so I tried an integer multiple of the packet
> size, too, with an appropriately updated time_spec. This also resulted in
> lates through the entire transmit.
>
> So  here are my additional questions:
>
> Is the only effect of "start_of_burst = true" to cause the CORDICs to
> reset?
> What is end_of_burst doing to enable a following time_spec to be used?
> What additional work is being performed when I set end_of_burst and
> has_time_spec to 'true' such that I get lates throughout the entire
> attempted transmission?
>

I don't know the answer to these questions.  Try the suggestions above and
see if they help you out or not.

Good luck!

Brian

>


[USRP-users] X310 with dual TwinRX set up

2021-03-10 Thread Oliver Towlson via USRP-users
Hi

I am trying to set up an X310 with 2 TwinRX boards such that:

- each RF channel may be tuned to a different GNSS L-band frequency
- all four RF channels may be synchronised in time
- data streaming on all four channels at 100 MS/s (we are using dual 10G 
Ethernet for this)

I'm pretty much a beginner when it comes to USRPs. I am using GNU radio to 
configure the USRP but so far it only recognizes two input channels. We found 
the code posted here - 
http://ettus.80997.x6.nabble.com/USRP-users-Example-code-for-a-pair-of-TwinRXs-td2673.html
 - useful but on closer inspection all four channels were set to the same 
frequency and it looks to be doing something different to what we want (it 
looks like it was written specifically to synchronise four channels receiving 
the same signal so that you can calibrate the internal phase offset of the USRP)

Does anyone have any example code they might be willing to share, if only to 
get us started, to get our desired set-up?

Thanks

Oliver T

Please consider the environment before printing this e-mail.


Re: [USRP-users] x300 latency over 10GigE

2021-03-10 Thread Doug Blackburn via USRP-users
Brian,

I've seen this using UHD-3.14 and UHD-3.15.LTS.

I have performed some follow-on testing that raises more questions,
particularly about the usage of end_of_burst and start_of_burst.  I talk
through my tests and observations below; the questions that these generated
are at the end ...

I thought it would be interesting to modify benchmark_rate.cpp to attempt
to place a timestamp on each buffer that was sent out to see if I could
observe the same behavior.  I haven't seen thorough explanations of what
exactly the start_of_burst and end_of_burst metadata fields do at a low
level beyond this posting --
http://lists.ettus.com/pipermail/usrp-users_lists.ettus.com/2016-November/050555.html
and a note about start_of_burst resetting the CORDICs (I'd appreciate being
pointed in the right direction if I've missed it, thank you!) --  so I
wanted to test the effect on timing when has_time_spec is true and the SOB
and EOB fields are either false or true.  I initially set my test up in the
following way (I hope the pseudocode makes sense) to make observations
easy.  I watched for the LO on a spectrum analyzer.  Per the code below, I
would expect a burst every 2 seconds if the time_spec was followed ...

==
max_samps_per_packet = 50e5; // 100ms at 50 MSPS
start_of_burst = <true|false>;
end_of_burst = <true|false>;
has_time_spec = true;
while( not burst_timer_elapsed)
{
tx_stream->send();
start_of_burst = <true|false>;
end_of_burst = <true|false>;
time_spec = time_spec + 2.0;
 }

My observations were as follows: if end_of_burst for the prior burst was
set to true, my code adhered to the time_spec.  The value of start_of_burst
had no effect on whether or not the expected timing was followed.  If
end_of_burst was set to false, the time_spec for the following burst is
ignored and the packet is transmitted as soon as possible.

I then followed this up with another test -- I replaced
  time_spec = time_spec + 2.0;
with the equivalent of
  time_spec = time_spec + 0.100;

And set end_of_burst and start_of_burst to true.

I figured if I can run this continuously by setting has_time_spec to
'false' after the first burst and easily push data into the FIFO buffer,
that doing this should not be a problem ... but I'm presented with a stream
of lates and no actual transmission.

I understand that 100ms is not an integer multiple of the packet size
returned by get_max_num_samps() -- so I tried an integer multiple of the
packet size, too, with an appropriately updated time_spec. This also
resulted in lates through the entire transmit.

So  here are my additional questions:

Is the only effect of "start_of_burst = true" to cause the CORDICs to
reset?
What is end_of_burst doing to enable a following time_spec to be used?
What additional work is being performed when I set end_of_burst and
has_time_spec to 'true' such that I get lates throughout the entire
attempted transmission?

Best Regards,
Doug
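For reference, the timing check described in this thread reduces to simple arithmetic: the first sample of the buffer just handed to send() should air at start_time + num_tx_samps / tx_rate, and the gap between that and the device's current time is the buffering sitting in front of the radio. A sketch using the numbers quoted later in the thread:

```python
def expected_tx_time(start_time, num_tx_samps, tx_rate):
    """Air time of the first sample of the buffer just handed to send()."""
    return start_time + num_tx_samps / tx_rate

def buffered_latency(actual_now, start_time, num_tx_samps, tx_rate):
    """Positive result: samples are queued this far ahead of their air time."""
    return expected_tx_time(start_time, num_tx_samps, tx_rate) - actual_now

# With actual time 1.10517 s and expected time 1.27253 s, the gap is
# ~167 ms of buffering between the host and the antenna.
```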






On Tue, Mar 9, 2021 at 11:51 PM Brian Padalino  wrote:

> On Tue, Mar 9, 2021 at 10:03 PM Doug Blackburn via USRP-users <
> usrp-users@lists.ettus.com> wrote:
>
>> Hello --
>>
>> I've got some questions re: latency with the x300 over the 10GigE
>> interface.
>>
>> If I use the latency_test example operating at a rate of 50 MSPS, I have
>> no issues with a latency of 1ms.  The latency test receives data, examines
>> the time stamp, and transmits a single packet.
>>
>> I have an application where I'd like to run the transmitter continuously,
>> and I got curious about the latency involved in that operation.  My
>> application is similar to the benchmark_rate example.  I added the
>> following lines to the benchmark_rate example at line 256 after the line.
>>
>> md.has_time_spec = false;
>>
>> 
>> if ( (num_tx_samps % 5000) < 4*max_samps_per_packet )
>> {
>> uhd::time_spec_t expectedTime = startTime + (double) ( num_tx_samps  )
>>   / (double)usrp->get_tx_rate();
>> uhd::time_spec_t timeAtLog = usrp->get_time_now();
>> timeAtLog = usrp->get_time_now();
>> std::cerr << " Actual time " << std::endl;
>> std::cerr << " " << timeAtLog.get_full_secs() << " / "
>>   << timeAtLog.get_frac_secs() << std::endl;
>> std::cerr << " Expected time " << std::endl;
>> std::cerr << " " << expectedTime.get_full_secs() << " / "
>>   << expectedTime.get_frac_secs() << std::endl;
>> }
>> 
>>
>> The intent of this insertion is to log the time at which we return from
>> tx_stream->send() and the time at which the first sample of that sent data
>> should be transmitted -- at approximately once per second when running at
>> 50 MSPS.
>>
>> After the first second, I consistently saw the following results:
>>
>>  Actual time 
>>  1 / 0.10517
>>  Expected time 
>>  1 / 0.27253
>>
>>  Actual time 
>>  1 / 0.105419
>>  Expected time 
>>  1 / 0.27255
>>
>> Which indicates to me that there is a latency of 

Re: [USRP-users] Enable AGC in USRP E320 with RFNoC using GNURadio

2021-03-10 Thread Maria Muñoz via USRP-users
Hi Jules,

Thank you, I will try it and let you know as soon as possible.

By the way, I have checked the python generated using the UHD USRP Source
block (instead of the RFNoC radio block) with AGC active, and it generates
the set_rx_agc call fine. So, as you said, it is fixed in gr-uhd but it
might still be a bug in gr-ettus.

Thanks again for the help!

Kind Regards,

Maria

On Wed, 10 Mar 2021 at 11:25, Julian Arnold wrote:

> Maria,
>
> >> So, if I understand correctly, I have to put there also something like
> >> "self.ettus_rfnoc_rx_radio_0.set_rx_agc(enable,0)" isn't it?
>
> Exactly! Take a look at [1] for the correct syntax.
>
> [1]
>
> https://github.com/EttusResearch/gr-ettus/blob/1038c4ce5135a2803b53554fc4971fe3de747d9a/include/ettus/rfnoc_rx_radio.h#L97
>
> Let me know if that worked out for you.
>
> Cheers,
> Julian
>
>
> On 3/10/21 9:59 AM, Maria Muñoz wrote:
> > Hi Julian,
> >
> > Thanks for the quick answer.
> >
> > I think you might be right about the possible bug turning on the AGC
> > from GRC. I have checked the flow graph generated and there's no
> > set_rx_agc enable option (I checked the c++ definition block where this
> > option did appear but I hadn't look at the python generated).
> >
> > The lines related to the radio in my flowgraph are these:
> >
> > /self.ettus_rfnoc_rx_radio_0 = ettus.rfnoc_rx_radio(
> >  self.rfnoc_graph,
> >  uhd.device_addr(''),
> >  -1,
> >  -1)
> >  self.ettus_rfnoc_rx_radio_0.set_rate(samp_rate)
> >  self.ettus_rfnoc_rx_radio_0.set_antenna('RX2', 0)
> >  self.ettus_rfnoc_rx_radio_0.set_frequency(cf, 0)
> >  self.ettus_rfnoc_rx_radio_0.set_gain(gain, 0)
> >  self.ettus_rfnoc_rx_radio_0.set_bandwidth(samp_rate, 0)
> >  self.ettus_rfnoc_rx_radio_0.set_dc_offset(True, 0)
> >  self.ettus_rfnoc_rx_radio_0.set_iq_balance(True, 0)/
> >
> > So, if I understand correctly, I have to put there also something like
> > "self.ettus_rfnoc_rx_radio_0.set_rx_agc(enable,0)" isn't it?
> >
> > Kind Regards,
> >
> > Maria
> >
> > On Wed, 10 Mar 2021 at 9:16, Julian Arnold wrote:
> >
> > Maria,
> >
> > I might not be the right person to answer this, as my experience with
> > UHD 4.0 is relatively limited at the moment.
> >
> > However, I can tell you that the AGC on B2x0 devices is controlled via
> > software (using set_rx_agc()). There is no need to directly modify the
> > state of any pins of the FPGA.
> >
> > I vaguely remember that there was a bug in an earlier version of gr-uhd
> > (somewhere in 3.7) that made it difficult to turn on the AGC using GRC.
> > That particular one is fixed in gr-uhd. Not sure about gr-ettus, though.
> >
> > Maybe try using set_rx_agc() manually in your flow-graph (*.py) and see
> > if that helps.
> >
> > Cheers,
> > Julian
> >
> > On 3/9/21 5:11 PM, Maria Muñoz via USRP-users wrote:
> >  > Hi all,
> >  >
> >  > I was wondering if it is possible to enable AGC from the RFNoC
> radio
> >  > block in GNURadio. I use UHD 4.0 version and GNURadio 3.8 with
> > gr-ettus.
> >  >
> >  > I see that the RFNoC Rx radio block has an enable/disable/default
> > AGC
> >  > option in the GNURadio block which I assume calls the UHD function
> >  > "set_rx_agc"
> >  >
> >  > (https://files.ettus.com/manual/classuhd_1_1usrp_1_1multi__usrp.html#abdab1f6c3775a9071b15c9805f866486)
> >  >
> >  > I have also checked on the FPGA side that there is a pin from
> > FPGA to
> >  > AD9361 called XCVR_ENA_AGC which is set always to 1 on the top
> > level of
> >  > the FPGA image (see attached file "e320.v", line 872). This pin,
> >  > according to
> >  >
> >
> https://www.analog.com/media/en/technical-documentation/data-sheets/AD9361.pdf
> >
> >  > is the "Manual Control Input for Automatic Gain Control (AGC)".
> >  > Must this pin be set to 0 for the AGC to work?
> >  > If not, how can I get AGC working? I've made some tests
> >  > enabling/disabling this option but I do not see any changes
> > between the
> >  > waveforms received.
> >  >
> >  > Any help would be appreciated.
> >  >
> >  > Kind Regards,
> >  >
> >  > Maria
> >  >
> >  > ___
> >  > USRP-users mailing list
> >  > USRP-users@lists.ettus.com 
> >  >
> http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com
> >  >
> >
>


Re: [USRP-users] Enable AGC in USRP E320 with RFNoC using GNURadio

2021-03-10 Thread Julian Arnold via USRP-users

Maria,

So, if I understand correctly, I also have to add something like
"self.ettus_rfnoc_rx_radio_0.set_rx_agc(enable, 0)" there, is that right?


Exactly! Take a look at [1] for the correct syntax.

[1] 
https://github.com/EttusResearch/gr-ettus/blob/1038c4ce5135a2803b53554fc4971fe3de747d9a/include/ettus/rfnoc_rx_radio.h#L97
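In the generated flow-graph that would look roughly like the sketch below. It is stand-alone and hardware-free: the stub class is purely hypothetical, and only the set_rx_agc(enable, chan) signature is taken from the gr-ettus header in [1].

```python
# Minimal illustration of where the AGC call slots in after the other
# radio setters in a GRC-generated flow-graph. FakeRfnocRxRadio is a
# hypothetical stand-in for ettus.rfnoc_rx_radio, used here only so the
# call order can be shown without an attached USRP.
class FakeRfnocRxRadio:
    """Stand-in for ettus.rfnoc_rx_radio (illustration only)."""

    def __init__(self):
        self.agc_enabled = {}

    def set_gain(self, gain, chan):
        # Manual gain; typically overridden once the AGC is enabled.
        pass

    def set_rx_agc(self, enable, chan):
        # Record the AGC state per channel, mirroring set_rx_agc(enable, chan).
        self.agc_enabled[chan] = enable


radio = FakeRfnocRxRadio()   # in GRC this is self.ettus_rfnoc_rx_radio_0
radio.set_gain(30, 0)        # existing setter from the generated file
radio.set_rx_agc(True, 0)    # the added call: enable AGC on channel 0
print(radio.agc_enabled[0])  # -> True
```

With the real block, the same one-line call would go right after the set_iq_balance() line in the generated *.py file.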


Let me know if that worked out for you.

Cheers,
Julian


On 3/10/21 9:59 AM, Maria Muñoz wrote:

Hi Julian,

Thanks for the quick answer.

I think you might be right about the possible bug turning on the AGC 
from GRC. I have checked the generated flow graph and there's no 
set_rx_agc enable option (I checked the C++ block definition, where this 
option did appear, but I hadn't looked at the generated Python).


The lines related to the radio in my flowgraph are these:

self.ettus_rfnoc_rx_radio_0 = ettus.rfnoc_rx_radio(
             self.rfnoc_graph,
             uhd.device_addr(''),
             -1,
             -1)
         self.ettus_rfnoc_rx_radio_0.set_rate(samp_rate)
         self.ettus_rfnoc_rx_radio_0.set_antenna('RX2', 0)
         self.ettus_rfnoc_rx_radio_0.set_frequency(cf, 0)
         self.ettus_rfnoc_rx_radio_0.set_gain(gain, 0)
         self.ettus_rfnoc_rx_radio_0.set_bandwidth(samp_rate, 0)
         self.ettus_rfnoc_rx_radio_0.set_dc_offset(True, 0)
         self.ettus_rfnoc_rx_radio_0.set_iq_balance(True, 0)

So, if I understand correctly, I also have to add something like 
"self.ettus_rfnoc_rx_radio_0.set_rx_agc(enable, 0)" there, is that right?


Kind Regards,

Maria

On Wed, Mar 10, 2021 at 9:16 AM, Julian Arnold wrote:


Maria,

I might not be the right person to answer this, as my experience with
UHD 4.0 is relatively limited at the moment.

However, I can tell you that the AGC on B2x0 devices is controlled via
software (using set_rx_agc()). There is no need to directly modify the
state of any pins of the FPGA.

I vaguely remember that there was a bug in an earlier version of gr-uhd
(somewhere in 3.7) that made it difficult to turn on the AGC using GRC.
That particular one is fixed in gr-uhd. Not sure about gr-ettus, though.

Maybe try using set_rx_agc() manually in your flow-graph (*.py) and see
if that helps.

Cheers,
Julian

On 3/9/21 5:11 PM, Maria Muñoz via USRP-users wrote:
 > Hi all,
 >
 > I was wondering if it is possible to enable AGC from the RFNoC radio
 > block in GNURadio. I use UHD 4.0 version and GNURadio 3.8 with
gr-ettus.
 >
 > I see that the RFNoC Rx radio block has an enable/disable/default
AGC
 > option in the GNURadio block which I assume calls the UHD function
 > "set_rx_agc"
 >

(https://files.ettus.com/manual/classuhd_1_1usrp_1_1multi__usrp.html#abdab1f6c3775a9071b15c9805f866486)
 >
 > I have also checked on the FPGA side that there is a pin from
FPGA to
 > AD9361 called XCVR_ENA_AGC which is set always to 1 on the top
level of
 > the FPGA image (see attached file "e320.v", line 872). This pin,
 > according to
 >

https://www.analog.com/media/en/technical-documentation/data-sheets/AD9361.pdf

 > is the "Manual Control Input for Automatic Gain Control (AGC)".
>  > Must this pin be set to 0 for the AGC to work?
 > If not, how can I get AGC working? I've made some tests
 > enabling/disabling this option but I do not see any changes
between the
 > waveforms received.
 >
 > Any help would be appreciated.
 >
 > Kind Regards,
 >
 > Maria
 >
 >





Re: [USRP-users] Enable AGC in USRP E320 with RFNoC using GNURadio

2021-03-10 Thread Maria Muñoz via USRP-users
Hi Julian,

Thanks for the quick answer.

I think you might be right about the possible bug turning on the AGC from
GRC. I have checked the generated flow graph and there's no set_rx_agc
enable option (I checked the C++ block definition, where this option did
appear, but I hadn't looked at the generated Python).

The lines related to the radio in my flowgraph are these:

self.ettus_rfnoc_rx_radio_0 = ettus.rfnoc_rx_radio(
    self.rfnoc_graph,
    uhd.device_addr(''),
    -1,
    -1)
self.ettus_rfnoc_rx_radio_0.set_rate(samp_rate)
self.ettus_rfnoc_rx_radio_0.set_antenna('RX2', 0)
self.ettus_rfnoc_rx_radio_0.set_frequency(cf, 0)
self.ettus_rfnoc_rx_radio_0.set_gain(gain, 0)
self.ettus_rfnoc_rx_radio_0.set_bandwidth(samp_rate, 0)
self.ettus_rfnoc_rx_radio_0.set_dc_offset(True, 0)
self.ettus_rfnoc_rx_radio_0.set_iq_balance(True, 0)

So, if I understand correctly, I also have to add something like
"self.ettus_rfnoc_rx_radio_0.set_rx_agc(enable, 0)" there, is that right?

Kind Regards,

Maria

On Wed, Mar 10, 2021 at 9:16 AM, Julian Arnold wrote:

> Maria,
>
> I might not be the right person to answer this, as my experience with
> UHD 4.0 is relatively limited at the moment.
>
> However, I can tell you that the AGC on B2x0 devices is controlled via
> software (using set_rx_agc()). There is no need to directly modify the
> state of any pins of the FPGA.
>
> I vaguely remember that there was a bug in an earlier version of gr-uhd
> (somewhere in 3.7) that made it difficult to turn on the AGC using GRC.
> That particular one is fixed in gr-uhd. Not sure about gr-ettus, though.
>
> Maybe try using set_rx_agc() manually in your flow-graph (*.py) and see
> if that helps.
>
> Cheers,
> Julian
>
> On 3/9/21 5:11 PM, Maria Muñoz via USRP-users wrote:
> > Hi all,
> >
> > I was wondering if it is possible to enable AGC from the RFNoC radio
> > block in GNURadio. I use UHD 4.0 version and GNURadio 3.8 with gr-ettus.
> >
> > I see that the RFNoC Rx radio block has an enable/disable/default AGC
> > option in the GNURadio block which I assume calls the UHD function
> > "set_rx_agc"
> > (
> https://files.ettus.com/manual/classuhd_1_1usrp_1_1multi__usrp.html#abdab1f6c3775a9071b15c9805f866486
> )
> >
> > I have also checked on the FPGA side that there is a pin from FPGA to
> > AD9361 called XCVR_ENA_AGC which is set always to 1 on the top level of
> > the FPGA image (see attached file "e320.v", line 872). This pin,
> > according to
> >
> https://www.analog.com/media/en/technical-documentation/data-sheets/AD9361.pdf
> > is the "Manual Control Input for Automatic Gain Control (AGC)".
> > Must this pin be set to 0 for the AGC to work?
> > If not, how can I get AGC working? I've made some tests
> > enabling/disabling this option but I do not see any changes between the
> > waveforms received.
> >
> > Any help would be appreciated.
> >
> > Kind Regards,
> >
> > Maria
> >
> >
>


Re: [USRP-users] Enable AGC in USRP E320 with RFNoC using GNURadio

2021-03-10 Thread Julian Arnold via USRP-users

Maria,

I might not be the right person to answer this, as my experience with 
UHD 4.0 is relatively limited at the moment.


However, I can tell you that the AGC on B2x0 devices is controlled via 
software (using set_rx_agc()). There is no need to directly modify the 
state of any pins of the FPGA.


I vaguely remember that there was a bug in an earlier version of gr-uhd 
(somewhere in 3.7) that made it difficult to turn on the AGC using GRC.

That particular one is fixed in gr-uhd. Not sure about gr-ettus, though.

Maybe try using set_rx_agc() manually in your flow-graph (*.py) and see 
if that helps.


Cheers,
Julian

On 3/9/21 5:11 PM, Maria Muñoz via USRP-users wrote:

Hi all,

I was wondering if it is possible to enable AGC from the RFNoC radio 
block in GNURadio. I use UHD 4.0 version and GNURadio 3.8 with gr-ettus.


I see that the RFNoC Rx radio block has an enable/disable/default AGC 
option in the GNURadio block which I assume calls the UHD function 
"set_rx_agc" 
(https://files.ettus.com/manual/classuhd_1_1usrp_1_1multi__usrp.html#abdab1f6c3775a9071b15c9805f866486)


I have also checked on the FPGA side that there is a pin from FPGA to 
AD9361 called XCVR_ENA_AGC which is set always to 1 on the top level of 
the FPGA image (see attached file "e320.v", line 872). This pin, 
according to 
https://www.analog.com/media/en/technical-documentation/data-sheets/AD9361.pdf 
is the "Manual Control Input for Automatic Gain Control (AGC)".

Must this pin be set to 0 for the AGC to work?
If not, how can I get AGC working? I've made some tests 
enabling/disabling this option but I do not see any changes between the 
waveforms received.


Any help would be appreciated.

Kind Regards,

Maria



