Re: [USRP-users] B210 -- various questions on sampling/clock rates

2017-10-25 Thread Rob Heig via USRP-users
Hi,

Thanks a lot for your answer! The link you gave me is also full of very
very interesting material :)
Have a nice day!
Rob

On 25 October 2017 at 22:16, Anon Lister  wrote:

> Try specrec to record data.
>
> https://github.com/garverp/gr-analysis
>
> As to why it works better see this presentation:
> http://www.trondeau.com/grcon15-presentations#
> wednesday_Lincoln_Synchronized
> (The link is down at the time of writing, perhaps it will be up again soon)
>
> With it I am able to do 50Msample w/o overruns.
>
> Search for master_clock_rate in the uhd documentation. Setting that will
> allow you to manually set the master clock rate for the b2xx and e3xx
> devices. As to why, I believe oversampling has some benefits that a more
> DSP-focused individual will be able to elaborate on. I believe the logic for
> automatic choice is something like the highest multiple of the desired
> sample rate that is less than the maximum clock rate.
>
> -Anon
>
> On Oct 25, 2017 12:12 PM, "Rob Heig via USRP-users" <
> usrp-users@lists.ettus.com> wrote:
>
> Hi,
>
> I am experimenting a bit with the B210 board and I have a couple of
> questions concerning sampling/clock rates:
>
> - is it normal that, on a decent modern PC with plenty of cores, memory,
> and working off an SSD, a simple program that dumps I/Q data to a file
> more often than not produces overflows ("O" messages from UHD) starting at around
> 40 Msps (sometimes, with a very light CPU load, even at
> 10-20)? I have been monitoring USB traffic using vUSBAnalyzer to see if
> anything's wrong, and it seems that from time to time the transmission
> simply stops and gets reset after a few seconds (meaning that the board's
> buffers overflowed, I guess), but I couldn't find a reason why this
> happens. I have asked a colleague to increase the size of the buffers on
> the FPGA, but that improved the situation only slightly... Is there any
> architectural documentation explaining the behavior of UHD with a USB
> device to see where the bottleneck could be (without having to delve into
> the UHD code)?
>
> - playing with filters I saw that, when the sampling rate is set below
> 16Msps, the clock rate is set at a multiple of the desired sampling rate
> (for instance, 32MHz for 2Msps, 40MHz for 5Msps, ...). Actually, I realized
> it only after having spent a whole morning wondering why a custom FIR
> filter that was supposed to work nicely at 8Msps was not filtering at all
> (I guess the messages printed on the console are there for a reason
> ;)). What is the reason behind this choice? Is it imposed by the RFIC (I
> couldn't find anything in the reference manual, but I might be
> mistaken) or by the board design? What is the rule that governs the
> choice of the clock rate? Is there any documentation about it?
>
> Thanks a lot in advance and have a nice day!
> Rob
>
> ___
> USRP-users mailing list
> USRP-users@lists.ettus.com
> http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com
>
>
>
___
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com


[USRP-users] USRP B210 time errors with high master clock rate

2017-10-25 Thread Perelman, Nathan via USRP-users
I've seen some odd behavior with timestamps for samples from the B210 being
off by large fractions of a second from what I expected when setting the
time to GPS time. I wrote a program (see attached) that uses
get_time_last_pps() to attempt to validate that the time is being set
correctly. I discovered that when using a master clock rate of 61.44 MHz,
the time returned is sometimes off by a quarter second (see output from
running below). At a master clock rate of 16 MHz, I don't see this issue.
Testing was done with UHD 3.10.1.0. Is my program doing something wrong, or
is this an issue with the B200?
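
For context, here is a minimal sketch of the kind of check described (this is not the attached time_test.cpp; the device args, loop count and use of the gps_time sensor are assumptions): set the device time from the GPSDO on a PPS edge, then verify that get_time_last_pps() lands on whole seconds.

```
// Hypothetical sketch, not the attached time_test.cpp.
#include <uhd/usrp/multi_usrp.hpp>
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    // Device args are an assumption; swap 61.44e6 for 16e6 to compare behavior.
    auto usrp = uhd::usrp::multi_usrp::make("type=b200,master_clock_rate=61.44e6");

    // Latch GPS time + 1 s onto the next PPS edge.
    const int gps_secs = usrp->get_mboard_sensor("gps_time").to_int();
    usrp->set_time_next_pps(uhd::time_spec_t(double(gps_secs + 1)));
    std::this_thread::sleep_for(std::chrono::milliseconds(1100));

    // Every later PPS should latch a whole number of seconds.
    for (int i = 0; i < 5; i++) {
        const uhd::time_spec_t t = usrp->get_time_last_pps();
        const double frac = t.get_frac_secs();
        if (frac > 0.01 && frac < 0.99)
            std::cout << "ERROR: time difference at last PPS: " << frac << std::endl;
        else
            std::cout << "OK: last PPS latched " << t.get_real_secs() << " s" << std::endl;
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }
    return 0;
}
```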

 

time_test --args type=b200,master_clock_rate=61.44e6

linux; GNU C++ version 5.2.1 20150902 (Red Hat 5.2.1-2); Boost_106000;
UHD_003.010.001.000-0-unknown

 

 

Creating the usrp device with: type=b200,master_clock_rate=30.72e6...

-- Detected Device: B210

-- Operating over USB 3.

-- Detecting internal GPSDO... Found an internal GPSDO: GPSTCXO, Firmware
Rev 0.929a

-- Initialize CODEC control...

-- Initialize Radio control...

-- Performing register loopback test... pass

-- Performing register loopback test... pass

-- Performing CODEC loopback test... pass

-- Performing CODEC loopback test... pass

-- Asking for clock rate 61.44 MHz... 

-- Actually got clock rate 61.44 MHz.

-- Performing timer loopback test... pass

-- Performing timer loopback test... pass

Using Device: Single USRP:

  Device: B-Series Device

  Mboard 0: B210

  RX Channel: 0

RX DSP: 0

RX Dboard: A

RX Subdev: FE-RX2

  RX Channel: 1

RX DSP: 1

RX Dboard: A

RX Subdev: FE-RX1

  TX Channel: 0

TX DSP: 0

TX Dboard: A

TX Subdev: FE-TX2

  TX Channel: 1

TX DSP: 1

TX Dboard: A

TX Subdev: FE-TX1

 

Time set count 4 check 1/5: ERROR: Time difference at last PPS: 0.249037

Time set count 4 check 4/5: ERROR: Time difference at last PPS: 0.24

Time set count 8 check 1/5: ERROR: Time difference at last PPS: 0.24



time_test.cpp (attachment)
___
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com


Re: [USRP-users] 2 N200 MIMO system phase offset varies with frequency, have used timed_command with tune and also integer-N Tuning per Marcus M post of Feb 17, 2016

2017-10-25 Thread John Shields via USRP-users
Thanks Marcus,
 I take your advice re: ‘calibration’ and DRAO.

I had hoped, however, that I would not be dealing with a >100 degree offset in an 
ideal environment, i.e. the same signal through a good quality splitter positioned 
right at the input to the SBXs. While it was immediately obvious when you mentioned 
that the MIMO implementation makes no correction for the delay (not even the roughest 
correction based on cable length and velocity factor, which would not cover 
everything), it does mean (to me at least) that I am dealing with a larger offset 
than I had gathered from the Ettus documentation, with all its talk about synching 
the SBX LOs etc.; while it mentions there are component factors which mean the 
offset will not be zero, it does not highlight that there is a deliberate, 
frequency-sensitive offset built in to the design of the 'MIMO cable'.

Will ponder my next move.

  Kind Regards,

 John

From: Marcus D. Leech
Sent: Thursday, October 26, 2017 3:56 AM
To: John Shields
Cc: usrp-users
Subject: Re: [USRP-users] 2 N200 MIMO system phase offset varies with 
frequency, have used timed_command with tune and also integer-N Tuning per 
Marcus M post of Feb 17, 2016

On 10/25/2017 03:16 AM, John Shields wrote:

  Thanks very much Marcus for the thorough explanations. I looked at the phase 
change with frequency to see if there was a fixed delay and there didn’t appear 
to be but, effectively, the MIMO cable induces a frequency dependent 
uncorrected phase offset  which is eminently understandable but would appear to 
make a mockery of ‘MIMO’ claims. I realised that there will always be a phase 
offset but was disappointed by the magnitude as measured by the complex 
conjugate of both signals, a complex_to_arg block and decimating the result by 
1K and plotting on Qt GUI Time Sink.
In commercial MIMO applications, the implementation corrects for phase-offset 
error, because it is (reasonably) expected that there will always be
  some amount of phase offset.  It's inevitable for there to be *some*.   For 
example, the DRAO synthesis array uses hardware to measure the
  phase length of each of their cables, and corrects for thermal-expansion 
effects in real-time.  Since phase-offset error (and drift) extends outwards
  away from the USRP envelope, it's not realistic to expect that all such 
effects are accounted for in the hardware (at least, not without a much
  higher price-tag).



  It would appear that if I wanted to try to get as close to zero phase offset 
(to correct for non-zero MIMO cable length at least), then I need an 
Octoclock-G but I don’t have the nearly  $NZD 3000.00 it costs so I wonder if 
there is a ‘cheap’ way to convert my existing GPSDO board into an Octoclock-G? 
I only need to be able to buffer the signal for 2 USRPs.
You could just try splitting the signal two ways.   Myself, I buffer such 
signals with 74HC04 inverters, but I'm handy with a soldering iron.  There are
  cheap GPSDOs out there now, so it's just a matter of buffering, and for only 
2 units, you might be able to get away with just splitting them.




  Otherwise, I guess I could ‘calibrate’ the offset at various frequencies and 
then, at run time, apply a phase correction to one leg based on the fc? Seems a 
little inelegant.
That is *PRECISELY* what most MIMO applications do, and folks doing 
beam-forming, etc.  There will ALWAYS be some amount of phase offset--
  some of it fixed, some of it variable.   It is a *systemic* imperfection, 
which means that it has to be corrected for in a way that accounts for all the systemic
  contributions, including hardware entirely outside of the USRP.




  Kind Regards,

 John

  From: Marcus D. Leech
  Sent: Wednesday, October 25, 2017 1:20 PM
  To: John Shields
  Cc: usrp-users
  Subject: Re: [USRP-users] 2 N200 MIMO system phase offset varies with 
frequency, have used timed_command with tune and also integer-N Tuning per 
Marcus M post of Feb 17, 2016

  On 10/24/2017 07:45 PM, John Shields wrote:

Thanks Marcus,
So it appears that the synching of the SBX LOs 
doesn’t work; or perhaps I should say, it doesn’t work during my measurement 
period? The integer-N tuning doesn’t work either.

I can say that, with some level of precision, the 
phase is fairly constant with center-frequency but if, for example, I had a 5 
MHz spectrum how could I ‘correct for that’? I believe that there is the whole 
Hilbert transform issue when you wish to translate the phase/frequency of a 
band of signals to a different one –is that what I should use?

From my point of view, there is quite a 
misinterpretation of what ‘synchronisation’ means; in particular for SBXs and 
their LOs which, as advertised, are supposed to be capable of such operation 
with a few simple Python commands!

Re: [USRP-users] B210 -- various questions on sampling/clock rates

2017-10-25 Thread Anon Lister via USRP-users
Try specrec to record data.

https://github.com/garverp/gr-analysis

As to why it works better see this presentation:
http://www.trondeau.com/grcon15-presentations#wednesday_Lincoln_Synchronized
(The link is down at the time of writing, perhaps it will be up again soon)

With it I am able to do 50Msample w/o overruns.

Search for master_clock_rate in the uhd documentation. Setting that will
allow you to manually set the master clock rate for the b2xx and e3xx
devices. As to why, I believe oversampling has some benefits that a more
DSP-focused individual will be able to elaborate on. I believe the logic for
automatic choice is something like the highest multiple of the desired
sample rate that is less than the maximum clock rate.

-Anon
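
To make that concrete, a minimal sketch (the 32 MHz value is arbitrary; only the master_clock_rate device argument and the rate getters are taken from the UHD multi_usrp API): pin the master clock rate at open time and read back what UHD actually configured.

```
// Sketch: pin the B2xx master clock rate via device args and read back the result.
#include <uhd/usrp/multi_usrp.hpp>
#include <iostream>

int main()
{
    // 32e6 is just an example value within the B2xx range.
    auto usrp = uhd::usrp::multi_usrp::make("type=b200,master_clock_rate=32e6");
    usrp->set_rx_rate(2e6); // requested sample rate; UHD picks an integer decimation

    // When master_clock_rate is omitted, UHD chooses the clock rate itself
    // (Rob saw 32 MHz for 2 Msps, 40 MHz for 5 Msps), so it is worth reading
    // back the value before designing any filtering around it.
    std::cout << "master clock rate: " << usrp->get_master_clock_rate() / 1e6 << " MHz\n"
              << "actual rx rate:    " << usrp->get_rx_rate() / 1e6 << " Msps\n";
    return 0;
}
```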

On Oct 25, 2017 12:12 PM, "Rob Heig via USRP-users" <
usrp-users@lists.ettus.com> wrote:

Hi,

I am experimenting a bit with the B210 board and I have a couple of
questions concerning sampling/clock rates:

- is it normal that, on a decent modern PC with plenty of cores, memory,
and working off an SSD, a simple program that dumps I/Q data to a file
more often than not produces overflows ("O" messages from UHD) starting at around
40 Msps (sometimes, with a very light CPU load, even at
10-20)? I have been monitoring USB traffic using vUSBAnalyzer to see if
anything's wrong, and it seems that from time to time the transmission
simply stops and gets reset after a few seconds (meaning that the board's
buffers overflowed, I guess), but I couldn't find a reason why this
happens. I have asked a colleague to increase the size of the buffers on
the FPGA, but that improved the situation only slightly... Is there any
architectural documentation explaining the behavior of UHD with a USB
device to see where the bottleneck could be (without having to delve into
the UHD code)? (See the capture-loop sketch below.)

- playing with filters I saw that, when the sampling rate is set below
16Msps, the clock rate is set at a multiple of the desired sampling rate
(for instance, 32MHz for 2Msps, 40MHz for 5Msps, ...). Actually, I realized
it only after having spent a whole morning wondering why a custom FIR
filter that was supposed to work nicely at 8Msps was not filtering at all
(I guess the messages printed on the console are there for a reason
;)). What is the reason behind this choice? Is it imposed by the RFIC (I
couldn't find anything in the reference manual, but I might be
mistaken) or by the board design? What is the rule that governs the
choice of the clock rate? Is there any documentation about it?
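
Regarding the first question above, a minimal capture-loop sketch (the rate, packet count and output path are illustrative only) that surfaces the same condition UHD reports with its "O" console messages:

```
// Sketch: minimal capture loop that counts the overflows behind UHD's "O" output.
#include <uhd/usrp/multi_usrp.hpp>
#include <complex>
#include <cstdio>
#include <vector>

int main()
{
    auto usrp = uhd::usrp::multi_usrp::make("type=b200");
    usrp->set_rx_rate(40e6); // around the rate where the overflows were reported

    auto rx_stream = usrp->get_rx_stream(uhd::stream_args_t("fc32", "sc16"));
    std::vector<std::complex<float>> buff(rx_stream->get_max_num_samps());
    std::FILE* fp = std::fopen("capture.fc32", "wb"); // illustrative output path
    if (!fp) return 1;

    uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
    cmd.stream_now = true;
    rx_stream->issue_stream_cmd(cmd);

    uhd::rx_metadata_t md;
    size_t overflows = 0;
    for (size_t pkts = 0; pkts < 100000; pkts++) {
        const size_t n = rx_stream->recv(&buff.front(), buff.size(), md, 1.0);
        if (md.error_code == uhd::rx_metadata_t::ERROR_CODE_OVERFLOW) {
            overflows++;   // same condition that prints "O" on the console
            continue;
        }
        if (md.error_code != uhd::rx_metadata_t::ERROR_CODE_NONE)
            break;
        // Writing in the same thread as recv(): if this stalls, the USB
        // transport backs up and the device-side buffers eventually overflow.
        std::fwrite(buff.data(), sizeof(buff[0]), n, fp);
    }

    rx_stream->issue_stream_cmd(uhd::stream_cmd_t::STREAM_MODE_STOP_CONTINUOUS);
    std::fclose(fp);
    std::printf("overflows seen: %zu\n", overflows);
    return 0;
}
```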

Thanks a lot in advance and have a nice day!
Rob

___
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com
___
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com


Re: [USRP-users] systemverilog files in rfnoc block

2017-10-25 Thread Dario Pennisi via USRP-users
Hi,
I pushed the fix on my fork:

https://github.com/ipTronix/fpga/commit/b144fcb40eaa0e54dfa3c66bc4fc7cb42c54362c

Dario Pennisi








On Wed, Oct 25, 2017 at 8:06 PM +0200, "Jade Anderson" wrote:


Hi,
Below is a question about SystemVerilog support from August that seems 
unresolved.
I found this workaround, but why does the scripted flow not support 
SystemVerilog design files?
Are there plans to make this change to support designs in SystemVerilog?
  If not, then can you please point me to an example of how to include a 
synthesized netlist in a build?  I can synthesize my SystemVerilog module in 
Vivado for the X310 with no problems.
All I need to do is then import it into the x310 build.

Thanks
Jade Anderson


Message: 5
Date: Fri, 25 Aug 2017 19:55:11 +
From: Dario Pennisi
To: "usrp-users@lists.ettus.com"
Subject: [USRP-users] systemverilog files in rfnoc block
Message-ID: <150369090.91...@iptronix.com>
Content-Type: text/plain; charset="iso-8859-1"

Hi,

I am trying to include a couple of SystemVerilog files in the list of sources 
for a custom RFNoC block.

If I do that, I can see in the log that all files with the .sv extension are ignored, 
and of course their modules are not found. If I launch compilation in GUI mode 
and then add the files back, it works...

Is there any way of avoiding this manual step?

Thanks,


Dario Pennisi

--

Message: 6
Date: Fri, 25 Aug 2017 20:27:18 +
From: Dario Pennisi
To: "usrp-users@lists.ettus.com"
Subject: Re: [USRP-users] systemverilog files in rfnoc block
Message-ID: <1503692838342.55...@iptronix.com>
Content-Type: text/plain; charset="iso-8859-1"

I think I found the issue...

The Tcl script file usrp3/tools/scripts/viv_utils.tcl actually contains 
instructions to add files to the project based on their extensions, and .sv is not 
listed, so those files are skipped.

Adding a case for .sv works, but it also pulls in axi_crossbar_intf.sv, which 
seems to be a simulation file and won't compile, so I had to exclude that file...

Is there any reason why .sv files are not included, or is that just a lazy way to 
exclude simulation sources?

Unfortunately I need to use some SystemVerilog constructs, and with the .v extension 
those seem not to be accepted...

Thanks,


Dario Pennisi

___
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com


[USRP-users] 2 B210 synchronous problem

2017-10-25 Thread Hideyuki Matsunaga via USRP-users
Hi

I bought 2 B210s for testing direction of arrival estimation, like below.

=== Configuration ===

- USRP0: Rx0 = Ch0, Rx1 = Ch1, connected to the PC over USB 3.0
- USRP1: Rx0 = Ch2, Rx1 = Ch3, connected to the PC over USB 3.0
- PC: Ubuntu 14.04, GNU Radio Companion 3.7.11.1, UHD_003.010.001.001-79-g7ac01c7f

- External 10 MHz reference clock & 1 PPS are provided by a function
generator (Tektronix AFG1012) to each B210
- center freq: 2.4 GHz
- sampling rate: 4 MHz

In GRC
- 2 separate USRP Source for each B210, settings are below
 - Sync option   unknown pps
 - Clock Source  External
 - Time Source   External
 - Num Channels  2

I generated Python code with GRC and then added custom timing-adjustment
code:
```
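 # Reset both devices' time to 0 on the next edge of the shared external 1 PPS,
 # then schedule both sources to begin streaming at the same absolute time (t = 5 s).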
 self.uhd_usrp_source_0.set_time_next_pps(uhd.time_spec(0))
 self.uhd_usrp_source_1.set_time_next_pps(uhd.time_spec(0))
 time.sleep(1.0)

 start_time = uhd.time_spec(5.0)
 self.uhd_usrp_source_0.set_start_time(start_time)
 self.uhd_usrp_source_1.set_start_time(start_time)
```
===
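
One way to quantify the gap between the two recordings (a sketch only; the file names, the fc32 File Sink format and the search window are assumptions): cross-correlate one channel from each B210 over a range of integer lags and take the lag with the largest correlation magnitude.

```
// Sketch: estimate the integer-sample offset between the two recordings
// (one fc32 File Sink per B210 channel; file names are placeholders).
#include <complex>
#include <cstdio>
#include <vector>

static std::vector<std::complex<float>> load(const char* path, size_t max_samps)
{
    std::vector<std::complex<float>> v(max_samps);
    std::FILE* fp = std::fopen(path, "rb");
    if (!fp) return {};
    v.resize(std::fread(v.data(), sizeof(v[0]), max_samps, fp));
    std::fclose(fp);
    return v;
}

int main()
{
    const auto a = load("usrp0_ch0.fc32", 1 << 20);
    const auto b = load("usrp1_ch0.fc32", 1 << 20);
    const int max_lag = 2000;   // search window, in samples
    const int corr_len = 32768; // correlation length
    if ((int)a.size() < corr_len + 2 * max_lag || (int)b.size() < corr_len + 2 * max_lag)
        return 1;

    int best_lag = 0;
    double best_mag = 0.0;
    for (int lag = -max_lag; lag <= max_lag; lag++) {
        std::complex<double> acc(0.0, 0.0);
        for (int i = 0; i < corr_len; i++)
            acc += std::complex<double>(a[i + max_lag])
                 * std::conj(std::complex<double>(b[i + max_lag + lag]));
        if (std::abs(acc) > best_mag) {
            best_mag = std::abs(acc);
            best_lag = lag;
        }
    }
    // A nonzero best_lag is the sample gap between the two devices for this run.
    std::printf("estimated offset: %d samples\n", best_lag);
    return 0;
}
```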

I believe that I am following all the instructions that I found on the web,
but when I tried to check whether the sample timing is exactly matched
by dumping all the samples (connected to a File Sink), I found sample gaps
between the 2 B210s.

While testing, I confirmed that
- there are no overflows,
- the start timing is exactly the same (confirmed using Tag Debug).

I observed that
- the gap looks like a fixed size during a run,
- the gap is slightly different every run.


Please let me know what I am missing.


Thanks,
Matsu
___
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com


Re: [USRP-users] 2 N200 MIMO system phase offset varies with frequency, have used timed_command with tune and also integer-N Tuning per Marcus M post of Feb 17, 2016

2017-10-25 Thread Marcus D. Leech via USRP-users

On 10/25/2017 03:16 AM, John Shields wrote:
Thanks very much Marcus for the thorough explanations. I looked at the 
phase change with frequency to see if there was a fixed delay and 
there didn’t appear to be but, effectively, the MIMO cable induces a 
frequency dependent uncorrected phase offset  which is eminently 
understandable but would appear to make a mockery of ‘MIMO’ claims. I 
realised that there will always be a phase offset but was disappointed 
by the magnitude as measured by the complex conjugate of both signals, 
a complex_to_arg block and decimating the result by 1K and plotting on 
Qt GUI Time Sink.
In commercial MIMO applications, the implementation corrects for 
phase-offset error, because it is (reasonably) expected that there will 
always be
  some amount of phase offset.  It's inevitable for there to be 
*some*.   For example, the DRAO synthesis array uses hardware to measure the
  phase length of each of their cables, and corrects for 
thermal-expansion effects in real-time.  Since phase-offset error (and 
drift) extends outwards
  away from the USRP envelope, it's not realistic to expect that all 
such effects are accounted for in the hardware (at least, not without a much

  higher price-tag).

It would appear that if I wanted to try to get as close to zero phase 
offset (to correct for non-zero MIMO cable length at least), then I 
need an Octoclock-G but I don’t have the nearly  $NZD 3000.00 it costs 
so I wonder if there is a ‘cheap’ way to convert my existing GPSDO 
board into an Octoclock-G? I only need to be able to buffer the signal 
for 2 USRPs.
You could just try splitting the signal two ways.   Myself, I buffer 
such signals with 74HC04 inverters, but I'm handy with a soldering 
iron.  There are
  cheap GPSDOs out there now, so it's just a matter of buffering, and 
for only 2 units, you might be able to get away with just splitting them.



Otherwise, I guess I could ‘calibrate’ the offset at various 
frequencies and then, at run time, apply a phase correction to one leg 
based on the fc? Seems a little inelegant.
That is *PRECISELY* what most MIMO applications do, and folks doing 
beam-forming, etc.  There will ALWAYS be some amount of phase offset--
  some of it fixed, some of it variable.   It is a *systemic* 
imperfection, which means that it has to be corrected for in a way that 
accounts for all the systemic

  contributions, including hardware entirely outside of the USRP.



Kind Regards,
   John
*From:* Marcus D. Leech 
*Sent:* Wednesday, October 25, 2017 1:20 PM
*To:* John Shields 
*Cc:* usrp-users 
*Subject:* Re: [USRP-users] 2 N200 MIMO system phase offset varies 
with frequency, have used timed_command with tune and also integer-N 
Tuning per Marcus M post of Feb 17, 2016

On 10/24/2017 07:45 PM, John Shields wrote:

Thanks Marcus,
So it appears that the synching of the SBX 
LOs doesn’t work; or perhaps I should say, it doesn’t work during my 
measurement period? The integer-N tuning doesn’t work either.
I can say that, with some level of precision, 
the phase is fairly constant with center-frequency but if, for 
example, I had a 5 MHz spectrum how could I ‘correct for that’? I 
believe that there is the whole Hilbert transform issue when you wish 
to translate the phase/frequency of a band of signals to a different 
one –is that what I should use?
From my point of view, there is quite a 
misinterpretation of what ‘synchronisation’ means; in particular for 
SBXs and their LOs which, as advertised, are supposed to be capable 
of such operation with a few simple Python commands!.
Realising that you would/should not express 
some shortcoming in the SBX/N200/MIMO combination in an Ettus product, if there 
is one I would dearly like to know about it from someone from Ettus. Purely 
from an outside point of view, I thought that the “we’ll transfer 
the Time Of Day contents to the Mate over the MIMO cable” doesn’t 
actually mean that they are in ‘real time’ synch (from my old DMS-100 
days), but I was willing to go along with the theory. Seriously, I have 
no issue with that but just want to know how to get 2 N200r4 streams 
with the on-board GPSDO & MIMO cable ‘synchronised’.
   I would love (but be embarrassed) to be told, 
that as a dummy, I made this mistake but in over a month of work I 
have not been able to establish that.

  Kind Regards,
   John

Set up a test transmitter in the far-field of your two antennas.

With everything synchronized the way you think it should be, plot the 
low-pass-filtered (and decimate to taste) result of a conjugate 
multiply of
  the two sides.   This should produce a straight line, with small 
amounts of noise.   If it just produces random walks all over the 
place, the two oscillators aren't locked to the same reference.

Re: [USRP-users] Buffer overflow tips

2017-10-25 Thread Андрій Хома via USRP-users
Continuation here:
http://ettus.80997.x6.nabble.com/USRP-users-libusb-uses-only-one-thread-tc7609.html

2017-10-10 20:14 GMT+03:00 Андрій Хома :

> Hello,
> I apologize for my terrible English.
> I have a problem with buffer overflows.
> Please tell me what optimization measures you use in your own work.
>
> How the problem shows up:
> I have 6 B205mini devices, and I noticed that I can get up to a maximum of
> about 205 Msps in total, i.e. either 5 devices at 41 Msps or 6 devices at
> 34 Msps. Any more than that and I get overflows.
>
> What I have tried:
> First I used some "foul language", then I tried to write my own solution in C++,
> which let me move the read and write operations into separate threads.
> The reader thread basically just waits on the blocking rx_stream->recv()
> call and then quickly pushes the resulting buffer onto the write queue.
> The writer thread waits until a new buffer appears in the write queue,
> otherwise it sleeps for several milliseconds. If a buffer is available,
> it immediately writes it to the named FIFO file.
> This way the two threads cannot interfere with each other. At
> the same time, as a bonus, I get extra insurance in the form of a queue of
> buffers in case the receiving side (which reads the named FIFO files and
> does nothing else) cannot keep up.
> Buffers circulating in the queue are re-used, so no extra time
> is spent creating them.
>
> Next, I set the priority with
> uhd::set_thread_priority_safe(1.0, true);
> All of this let me postpone, to some extent, the point at which the
> overflows start. But not because of the buffer queue: when the overflow
> comes, it is clear that the queue has not filled up; the receiving side
> keeps up with the incoming data.
>
> Neither in the "foul language" implementation nor in the C++ implementation
> do I observe any heavy CPU usage. So the processor seems to keep up.
>
> Furthermore, if in C++ I do not write out the result (the thread that writes
> data to files simply does not write it), then I get a new limit: now the
> limit is already ~270 Msps.
> How can that be?
>
> So the question can be formulated as follows:
> 1. Why does the ~205 Msps limit arise? Surely not because of the model
> name (B205)? ;)
> 2. If the limit exists, then why does the C++ implementation still give some
> "bonus" in the form of a later onset of overflows? What could be the
> reason for this limit, and how can I understand its nature? How can the
> low CPU usage be explained?
>
> And as I said at the beginning: any good practices for working with USRPs
> are of interest to me.
>
> For additional information:
> Motherboard: Z10PE-D16 WS (could the chipset matter?)
> CPU: Intel Xeon E5-2430 v4
> Memory: DDR4-1866
>
> Thanks for taking the time,
> I will be glad to receive a response.
> With all respect, Andrew.
>
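
For reference, a condensed sketch of the reader/writer split described above (single device, single channel; the rate, pool size and output path are placeholders): a pool of reusable buffers, a reader thread that only calls recv(), and a writer thread that drains the queue to a file or FIFO.

```
// Sketch of the reader/writer split: a pool of reusable buffers, a reader
// thread that only calls recv(), and a writer thread that drains to a file/FIFO.
#include <uhd/usrp/multi_usrp.hpp>
#include <uhd/utils/thread_priority.hpp>
#include <complex>
#include <condition_variable>
#include <cstdio>
#include <deque>
#include <mutex>
#include <thread>
#include <vector>

using buffer_t = std::vector<std::complex<float>>;

struct queue_t { // simple blocking queue of buffer pointers
    std::deque<buffer_t*> items;
    std::mutex m;
    std::condition_variable cv;
    void push(buffer_t* b) {
        { std::lock_guard<std::mutex> l(m); items.push_back(b); }
        cv.notify_one();
    }
    buffer_t* pop() { // blocks until an item is available
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [this] { return !items.empty(); });
        buffer_t* b = items.front();
        items.pop_front();
        return b;
    }
};

int main()
{
    uhd::set_thread_priority_safe(1.0, true);
    auto usrp = uhd::usrp::multi_usrp::make("type=b200"); // one B205mini
    usrp->set_rx_rate(34e6);                              // placeholder rate

    auto rx_stream = usrp->get_rx_stream(uhd::stream_args_t("fc32", "sc16"));
    const size_t spb = rx_stream->get_max_num_samps() * 16;

    // Pre-allocated pool: buffers cycle between a "free" and a "full" queue,
    // so nothing is allocated on the streaming path.
    std::vector<buffer_t> pool(64, buffer_t(spb));
    queue_t free_q, full_q;
    for (auto& b : pool) free_q.push(&b);

    std::FILE* out = std::fopen("/tmp/usrp_fifo", "wb"); // placeholder path
    if (!out) return 1;

    // Writer thread: write out full buffers, then recycle them into the pool.
    std::thread writer([&] {
        for (;;) {
            buffer_t* b = full_q.pop();
            if (b == nullptr) break; // nullptr = shutdown marker
            std::fwrite(b->data(), sizeof((*b)[0]), b->size(), out);
            b->resize(spb);          // restore full size before reuse
            free_q.push(b);
        }
    });

    uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
    cmd.stream_now = true;
    rx_stream->issue_stream_cmd(cmd);

    uhd::rx_metadata_t md;
    for (int i = 0; i < 10000; i++) { // reader loop: recv() and hand off, nothing else
        buffer_t* b = free_q.pop();   // blocks (back-pressure) if the writer falls behind
        const size_t n = rx_stream->recv(&b->front(), b->size(), md, 1.0);
        if (md.error_code == uhd::rx_metadata_t::ERROR_CODE_OVERFLOW)
            std::fputc('O', stderr);
        b->resize(n);
        full_q.push(b);
    }

    rx_stream->issue_stream_cmd(uhd::stream_cmd_t::STREAM_MODE_STOP_CONTINUOUS);
    full_q.push(nullptr);
    writer.join();
    std::fclose(out);
    return 0;
}
```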
___
USRP-users mailing list
USRP-users@lists.ettus.com
http://lists.ettus.com/mailman/listinfo/usrp-users_lists.ettus.com


Re: [USRP-users] 2 N200 MIMO system phase offset varies with frequency, have used timed_command with tune and also integer-N Tuning per Marcus M post of Feb 17, 2016

2017-10-25 Thread John Shields via USRP-users
Thanks very much Marcus for the thorough explanations. I looked at the phase 
change with frequency to see if there was a fixed delay and there didn’t appear 
to be but, effectively, the MIMO cable induces a frequency dependent 
uncorrected phase offset  which is eminently understandable but would appear to 
make a mockery of ‘MIMO’ claims. I realised that there will always be a phase 
offset but was disappointed by the magnitude, as measured by multiplying one signal 
by the complex conjugate of the other, feeding that to a complex_to_arg block, 
decimating the result by 1k and plotting it on a Qt GUI Time Sink.

It would appear that if I wanted to try to get as close to zero phase offset 
(to correct for non-zero MIMO cable length at least), then I need an 
Octoclock-G but I don’t have the nearly  $NZD 3000.00 it costs so I wonder if 
there is a ‘cheap’ way to convert my existing GPSDO board into an Octoclock-G? 
I only need to be able to buffer the signal for 2 USRPs.

Otherwise, I guess I could ‘calibrate’ the offset at various frequencies and 
then, at run time, apply a phase correction to one leg based on the fc? Seems a 
little inelegant.
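
A sketch of that run-time correction (the calibration table values, the interpolation and the function names are purely illustrative): look up the measured offset for the current fc and rotate one leg by its negative.

```
// Sketch: apply a calibrated, frequency-dependent phase correction to one leg.
#include <cmath>
#include <complex>
#include <iterator>
#include <map>
#include <vector>

// Hypothetical calibration table: center frequency (Hz) -> measured phase
// offset (radians) between the two receive legs, filled in beforehand.
static const std::map<double, double> cal_table = {
    {430e6, 0.62}, {435e6, 0.71}, {440e6, 0.80}, // made-up values
};

// Linearly interpolate the calibrated offset at the current tune frequency.
static double offset_at(double fc)
{
    auto hi = cal_table.lower_bound(fc);
    if (hi == cal_table.begin()) return hi->second;
    if (hi == cal_table.end()) return std::prev(hi)->second;
    const auto lo = std::prev(hi);
    const double w = (fc - lo->first) / (hi->first - lo->first);
    return lo->second + w * (hi->second - lo->second);
}

// Rotate one leg by the negative of the calibrated offset so the legs line up.
static void correct_leg(std::vector<std::complex<float>>& samples, double fc)
{
    const std::complex<float> rot = std::polar(1.0f, float(-offset_at(fc)));
    for (auto& s : samples) s *= rot;
}

int main()
{
    std::vector<std::complex<float>> leg_b(4, {1.0f, 0.0f});
    correct_leg(leg_b, 432.5e6); // apply the interpolated correction at this fc
    return 0;
}
```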

Kind Regards,

   John

From: Marcus D. Leech
Sent: Wednesday, October 25, 2017 1:20 PM
To: John Shields
Cc: usrp-users
Subject: Re: [USRP-users] 2 N200 MIMO system phase offset varies with 
frequency, have used timed_command with tune and also integer-N Tuning per 
Marcus M post of Feb 17, 2016

On 10/24/2017 07:45 PM, John Shields wrote:

  Thanks Marcus,
  So it appears that the synching of the SBX LOs 
doesn’t work; or perhaps I should say, it doesn’t work during my measurement 
period? The integer-N tuning doesn’t work either.

  I can say that, with some level of precision, the 
phase is fairly constant with center-frequency but if, for example, I had a 5 
MHz spectrum how could I ‘correct for that’? I believe that there is the whole 
Hilbert transform issue when you wish to translate the phase/frequency of a 
band of signals to a different one –is that what I should use?

  From my point of view, there is quite a 
misinterpretation of what ‘synchronisation’ means; in particular for SBXs and 
their LOs which, as advertised, are supposed to be capable of such operation 
with a few simple Python commands!.

  Realising that you would/should not express some 
shortcoming in the SBX/N200/MIMO combination in an Ettus product, if there is one I 
would dearly like to know about it from someone from Ettus. Purely from an outside 
point of view, I thought that the “we’ll transfer the Time Of Day contents to the Mate 
over the MIMO cable” doesn’t actually mean that they are in ‘real time’ synch 
(from my old DMS-100 days), but I was willing to go along with the theory. 
Seriously, I have no issue with that but just want to know how to get 2 N200r4 
streams with the on-board GPSDO & MIMO cable ‘synchronised’.

 I would love (but be embarrassed) to be told, that as 
a dummy, I made this mistake but in over a month of work I have not been able 
to establish that.

Kind Regards,

 John

Set up a test transmitter in the far-field of your two antennas.

With everything synchronized the way you think it should be, plot the 
low-pass-filtered (and decimate to taste) result of a conjugate multiply of
  the two sides.   This should produce a straight line, with small amounts of 
noise.   If it just produces random walks all over the place, the two
  oscillators aren't locked to the same reference.
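
For completeness, a sketch of that measurement on captured data (the buffer contents and names are placeholders): average the conjugate product of the two channels and take the phase of the result; a value that is stable over time and from run to run corresponds to the "straight line" described above.

```
// Sketch: average the conjugate product of two captured channels, report its phase.
#include <algorithm>
#include <complex>
#include <cstdio>
#include <vector>

static double measure_phase_offset(const std::vector<std::complex<float>>& a,
                                   const std::vector<std::complex<float>>& b)
{
    std::complex<double> acc(0.0, 0.0);
    const size_t n = std::min(a.size(), b.size());
    for (size_t i = 0; i < n; i++)
        acc += std::complex<double>(a[i]) * std::conj(std::complex<double>(b[i]));
    return std::arg(acc); // radians; the averaged equivalent of complex_to_arg
}

int main()
{
    // Placeholder buffers; in practice these are the two N2XX receive streams.
    std::vector<std::complex<float>> ch_a(1 << 16), ch_b(1 << 16);
    std::printf("phase offset: %f rad\n", measure_phase_offset(ch_a, ch_b));
    return 0;
}
```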

My point about component tolerances is that they'll have some group-delay that 
isn't perfectly matched on both sides, even if things like the
  LO are running in-phase, the analog pathways won't necessarily have precisely 
the same group delay on the two sides.  Just like two random
  pieces of coax that are cut to the same length won't, necessarily, have 
precisely the same phase length.   This effect gets worse with frequency.

Further, in radio astronomy applications, the coherence bandwidth is, 
technically speaking, infinitely small, due to the emission mechanisms.
  But in *practice* a significant fractional bandwidth is possible without 
having to channelize the input bandwidth.

The *other* issue, that seems to be causing consternation, is the ability to 
predict what the phase-offset between the two sides will be upon restart
  of the flow-graph in the presence of the various bits of hocus-pocus (timed 
commands, etc) to try for consistent phase offsets every time you
  start streaming.  It sounds like you have that, but the offset changes 
depending on tuned frequency.   I'd expect that.  Both due to analog-component
  group-delay variability, and because the MIMO cable is not of zero length.  I 
don't believe that there is *ANY* length compensation, so one N2XX will
  receive the reference clock at a "closer" phase distance than the other