On 10/29/16 4:49 AM, Attila Kinali wrote:
> On Fri, 28 Oct 2016 23:01:52 -0700
> Hal Murray <[email protected]> wrote:
[email protected] said:
>>> That single-chip version is going to have a *LOT* less (and less
>>> variable) latency than an SDR.
>> Latency isn't an issue as long as it is known, so that you can
>> correct for it.
>>
>> Has anybody measured the jitter through an SDR and/or tried to reduce
>> it? I'd expect that even if you counted cycles and such, there would
>> still be jitter from not being able to reproduce cache misses and
>> interrupts.
> It should not be too high. If Jeff Sherman and Robert Jördens's
> paper [1] is any indication, the jitter should be dominated by the
> jitter of the ADC and its reference oscillator. So sub-ps jitter, on
> the order of 100 fs, should be possible with proper design. Long-term
> drift is another issue, and I have not completely figured out what
> the contributors are there. Temperature stabilization certainly
> helps, but it does not seem to be the only effect.
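To put rough numbers on that claim, here is a back-of-the-envelope
sketch in Python, using the usual high-SNR phase-error relation for a
sampled sinusoid, sigma_phi ~= 1/sqrt(2*SNR). The 10 MHz carrier, 80 dB
SNR, and 1000-sample average are assumptions for illustration, not
figures from the paper:

    # Rough scale of ADC-limited timing jitter, under the assumed
    # numbers above (none of them come from the paper).
    import math

    f_carrier = 10e6    # assumed 10 MHz input signal
    snr_db    = 80.0    # assumed SNR in the measurement bandwidth
    n_avg     = 1000    # assumed samples averaged per phase estimate

    snr_lin   = 10 ** (snr_db / 10)
    sigma_phi = 1 / math.sqrt(2 * snr_lin)             # rad, per sample
    sigma_t   = sigma_phi / (2 * math.pi * f_carrier)  # s, per sample
    sigma_t_avg = sigma_t / math.sqrt(n_avg)           # white noise averages down

    print(f"per-sample jitter : {sigma_t * 1e15:.0f} fs")      # ~1100 fs
    print(f"after averaging   : {sigma_t_avg * 1e15:.0f} fs")  # ~36 fs

Under those assumptions, averaging brings the ~1 ps per-sample figure
down into the ~100 fs range mentioned above.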
Well, that's "jitter in the original samples", which can be very low,
as you describe. But I would interpret the original question as "jitter
*through* an SDR", which implies that we're looking at the timing of
the output versus the input.

Consider an SDR which receives an RF signal that's BPSK modulated and
puts out a stream of data bits on a wire (as opposed to dumping them
into a file or network connection), and you want to look at an eye
diagram of the output. A sketch of that measurement is below.
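Here is a minimal sketch (numpy/matplotlib) of what such an eye diagram
would show, if the variable processing latency is modeled as a Gaussian
shift of each bit window. The 10 kbit/s output rate, 1 MS/s scope rate,
and 5 us RMS jitter are all invented for illustration:

    # Eye diagram of an NRZ bit stream whose edges carry latency
    # jitter (standing in for cache misses, interrupts, etc.).
    import numpy as np
    import matplotlib.pyplot as plt

    rng        = np.random.default_rng(0)
    bit_rate   = 1e4            # assumed 10 kbit/s output stream
    fs         = 1e6            # assumed scope sample rate
    sps        = int(fs / bit_rate)   # samples per bit
    n_bits     = 200
    jitter_rms = 5e-6           # assumed 5 us RMS latency jitter

    bits  = rng.integers(0, 2, n_bits) * 2 - 1   # +/-1 NRZ levels
    trace = np.repeat(bits, sps).astype(float)

    # Overlay two-bit-wide slices; shifting each slice by a random
    # latency stands in for the variable delay through the radio.
    t = np.arange(2 * sps) / fs * 1e6            # time axis, microseconds
    for k in range(1, n_bits - 2):
        shift = int(rng.normal(0.0, jitter_rms) * fs)  # jitter in samples
        start = k * sps + shift
        plt.plot(t, trace[start:start + 2 * sps], "b", alpha=0.1)

    plt.xlabel("time (us)")
    plt.ylabel("level")
    plt.title("eye diagram of the output bit stream")
    plt.show()

With zero jitter the edges stack exactly; latency jitter through the
SDR shows up directly as horizontal smearing of the transitions.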
> Attila Kinali
>
> [1] Jeff A. Sherman and Robert Jördens, "Oscillator metrology with
>     software defined radio", 2016.
>     http://dx.doi.org/10.1063/1.4950898
>     http://arxiv.org/abs/1605.03505