I have a problem that I think I have a workaround for, but I would like to 
understand what is actually going on. My guess is that it is a DSP-related 
issue with the way the DDC is implemented, but I can't quite figure it out.
 
My design had been working on an E310, and when I tried to move it over to the 
X310 I just kept running into issues. I finally narrowed it down to the large 
DDC decimation ratio. On the E310, I was taking a 45 MHz sample rate down to an 
OOT block sample rate of 50 kHz (I am also re-tuning via the CORDIC so I can 
look at a signal of interest). I tried to bump the sample rate up some more, 
but that had problems too. I ignored it at the time, but I have a feeling it 
was the same issue.
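
For reference, the CORDIC re-tune I'm describing looks roughly like this (a 
minimal sketch using the plain multi_usrp API rather than my actual RFNoC 
graph; the device args and frequencies are made up for illustration):

#include <uhd/usrp/multi_usrp.hpp>
#include <uhd/types/tune_request.hpp>

int main() {
    auto usrp = uhd::usrp::multi_usrp::make(uhd::device_addr_t("type=e3x0"));
    usrp->set_rx_rate(45e6);

    // Ask for a signal of interest 1 MHz above a fixed RF LO; with the RF
    // frequency pinned manually, the DSP/CORDIC shift makes up the difference,
    // so only the digital frequency moves between re-tunes.
    uhd::tune_request_t req(2.4e9 + 1e6);
    req.rf_freq_policy  = uhd::tune_request_t::POLICY_MANUAL;
    req.rf_freq         = 2.4e9;
    req.dsp_freq_policy = uhd::tune_request_t::POLICY_AUTO;
    usrp->set_rx_freq(req);
    return 0;
}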
 
When I moved to the X310, I wanted to use the maximum sample rate I could, but 
because of the issues I dropped down to 120 MHz and still had problems. Things 
would run for a bit, then I would get the dreaded "timeout 0" error and the 
stream was shot from there.
 
Stumbling on a three-year-old post sent me down a different testing rabbit 
hole: 
http://lists.ettus.com/pipermail/usrp-users_lists.ettus.com/2015-September/043819.html
 
So what I did was instantiate two DDCs: the first takes 120 MHz down to 
45 MHz, and the second takes 45 MHz down to 50 kHz. And lo and behold, 
everything works again.
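
The two-stage chain looks roughly like this under the UHD 3.x RFNoC (device3) 
API (a rough sketch: block IDs are from my image, error handling and the RX 
streamer hookup are omitted, and I'm assuming the stock DDC block's 
input_rate/output_rate args; UHD may coerce the requested rates):

#include <uhd/device3.hpp>
#include <uhd/rfnoc/ddc_block_ctrl.hpp>
#include <uhd/rfnoc/graph.hpp>

int main() {
    auto usrp = uhd::device3::make(uhd::device_addr_t("type=x300"));

    // Two DDC instances compiled into the FPGA image.
    auto ddc0 = usrp->get_block_ctrl<uhd::rfnoc::ddc_block_ctrl>(
        uhd::rfnoc::block_id_t("0/DDC_0"));
    auto ddc1 = usrp->get_block_ctrl<uhd::rfnoc::ddc_block_ctrl>(
        uhd::rfnoc::block_id_t("0/DDC_1"));

    // Radio -> DDC_0 -> DDC_1 (connection to the host streamer omitted).
    auto graph = usrp->create_graph("rx_chain");
    graph->connect(uhd::rfnoc::block_id_t("0/Radio_0"), 0,
                   ddc0->get_block_id(), 0);
    graph->connect(ddc0->get_block_id(), 0, ddc1->get_block_id(), 0);

    // Stage 1: 120 MHz -> 45 MHz; stage 2: 45 MHz -> 50 kHz.
    ddc0->set_arg<double>("input_rate", 120e6);
    ddc0->set_arg<double>("output_rate", 45e6);
    ddc1->set_arg<double>("input_rate", 45e6);
    ddc1->set_arg<double>("output_rate", 50e3);
    return 0;
}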
 
So I can live with this, but it certainly starts to increase the FPGA resource 
usage.
 
In the end, maybe it is what it is, but I would like to understand what is 
going on here. I was thinking it probably has something to do with the larger 
drop having to go through more filter stages, with the buffers slowly filling 
up until the chain can't handle it anymore.
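
To put numbers on it: 45 MHz -> 50 kHz is a decimation of 900, while 
120 MHz -> 50 kHz is 2400. If the single DDC's CIC-plus-halfband chain tops 
out somewhere below 2400 (I don't know the actual limit, so the figure in the 
sketch below is a guess, not a value from the FPGA source), the big ratio 
would never be realizable in one stage:

#include <cstdio>

int main() {
    // ASSUMPTION: one DDC = a CIC decimator (up to 255) followed by up to two
    // halfband (/2) stages, i.e. a max single-stage decimation of 255 * 4.
    // This is a guess for illustration only.
    const int max_single_stage = 255 * 4; // = 1020
    const double output_rate   = 50e3;
    const double input_rates[] = {45e6, 120e6};

    for (double in : input_rates) {
        const int decim = static_cast<int>(in / output_rate);
        std::printf("%6.1f MHz -> 50 kHz: decimation %4d (%s one DDC)\n",
                    in / 1e6, decim,
                    decim <= max_single_stage ? "fits in" : "exceeds");
    }
    return 0;
}

Does something like that line up with how the DDC is actually built?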