Hi Warren,
I'm arriving a bit late to this discussion, but I hope I can help. I guess the
reason for using only one edge is that WR was originally designed to
measure the phase between a decoded data clock and a system clock.
The problem is that this decoded data clock is locked to the incoming
data by means of a PFD in the Spartan6/Virtex6 GTP. The PFD normally only
looks at rising edges, so any change in the clock duty cycle will translate
into a phase change in the falling edge and not in the rising edge. I am
In message 9A96CAA5BA7B467D9A106EC858EA0DCE@pc52, Tom Van Baak writes:
3) Every instant on a sine wave is actually a data point, not just
the zero crossing(s). So in reality there is near infinite information
available.
Sorry, but no.
If you tell me it is a sine and give me the time of two zero crossings
I can tell you everything there has or ever will be to know ...
There are two ways that both positive and negative slopes could be used,
that is, with the input clocks and/or with the reference clock.
The PRU on the BBB is not really fast enough to identify the edge
direction at a 10 MHz rate, so I only collect state changes in real time
and then sort it
Hi
On Oct 23, 2014, at 2:01 AM, Poul-Henning Kamp p...@phk.freebsd.dk wrote:
In message 9A96CAA5BA7B467D9A106EC858EA0DCE@pc52, Tom Van Baak writes:
3) Every instant on a sine wave is actually a data point, not just
the zero crossing(s). So in reality there is near infinite
Poul said;
If you tell me it is a sine and give me the time of two zero crossings
I can tell you everything there has or ever will be to know ...
Just to add a bit more nut picking on comment #3:
When talking about sub-picosecond-per-second, time-nut-type accuracy,
there is no such thing as
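Poul's claim can be made concrete with a small sketch (my own illustration, not from the thread): for an ideal sine of known form, the times of two consecutive zero crossings pin down frequency and phase completely. The helper name and the assumption that the crossings are consecutive (half a period apart) are mine.

```python
import math

def sine_from_crossings(t1, t2):
    """Hypothetical helper: recover frequency and phase of
    sin(2*pi*f*t + phi) from the time of a rising zero crossing t1
    and the next (falling) crossing t2, half a period later."""
    half_period = t2 - t1
    f = 1.0 / (2.0 * half_period)
    # at the rising crossing: 2*pi*f*t1 + phi = 0 (mod 2*pi)
    phi = (-2.0 * math.pi * f * t1) % (2.0 * math.pi)
    return f, phi

# a 10 MHz sine with zero phase crosses rising at t=0, falling 50 ns later
f, phi = sine_from_crossings(0.0, 50e-9)
```

With noise present, of course, two crossings are no longer enough, which is what the rest of the thread is about.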
Lots of interesting responses,
but I did not see any posted that answered the original question:
Is the CERN method described in the paper the best way to make a state-of-
the-art femtosecond DDMTD?
www.ee.ucl.ac.uk/lcs/previous/LCS2011/LCS1136.pdf
Restating:
Assuming it is kept digital, and not
Depends on what dominant noises you try to measure. White phase and
flicker phase noise depend on bandwidth, and averaging provides
filtering effects that affect those.
Filtering will also affect systematic signals, but you should never use
ADEV for such noises; it's a bad estimator for
Hi
Back in the early days of ADEV, the standard HP gear had a 60 kHz bandwidth.
The question that came up *every* FCS and PTTI was “why does it change with
bandwidth / should we spec the bandwidth?”. This went on for at least 15 years
before anybody really came up with a “use a narrow
The recent discussions about the simple digital mixer got me thinking about
the performance vs. complexity trade-offs when measuring accurate, high-
resolution phase-drift differences between two oscillators.
It would seem to me that using both the positive and negative slope edges
of the
Which is why the new style instruments sampling the waveforms with a
common clock and then downsampling digitally until churning out phase
data for further processing can achieve such a good measurement floor.
See Sam Stein's papers.
For some applications the DDMTD approach is pretty amazing
Here is an extreme example of throwing away useful data for the sake of
simplicity:
When measuring phase drift of a 10 MHz osc using just a 1PPS signal,
19,999,999 other possible data points are being discarded.
Using all possible data points could decrease the noise floor considerably.
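A rough numerical sketch of what the discarded points could buy (my own toy example; the block size and unit jitter are made up for the illustration): if every edge timestamp carries independent white jitter, averaging a block of M timestamps instead of keeping only one reduces the rms by roughly sqrt(M).

```python
import random

random.seed(1)

def rms(xs):
    return (sum(x * x for x in xs) / len(xs)) ** 0.5

# Each zero-crossing timestamp carries independent white jitter
# (1.0 here stands in for, say, 1 ps rms).
N = 200_000
jitter = [random.gauss(0.0, 1.0) for _ in range(N)]

M = 20_000
decimated = jitter[::M]                 # keep 1 point in 20,000 (the "1PPS" view)
averaged = [sum(jitter[i:i + M]) / M    # average each block instead of discarding
            for i in range(0, N, M)]
```

This sqrt(M) gain only holds for white phase noise; as later posts in the thread note, averaging is a filter, with consequences for ADEV.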
In message 0D2DB2B131E5461BB087713B3E49BEA9@pc52, Tom Van Baak writes:
Consider measuring a 10811 for a year. Do you need to follow its
phase or frequency every 100 ns? Or second? Or minute? Maybe as
little as one data point per day is more than enough to make a
perfectly accurate
t...@leapsecond.com said:
2) For long-term analysis, even 1 PPS is overkill. Having more data may not
improve your oscillator drift plot at all. This is because the frequency is
a moving target. Ever more precise measurements of a moving target are
wasted; they don't add any clarity to the
You need more than 1 sample per day for ADEV plots left of 100,000 seconds.
Correct. What I sometimes do is collect data for just a few minutes at 1000
samples per second. That's enough to make an ADEV plot for tau 0.001 to 1 or 10
seconds. Then I'll collect data for a couple of days at 1
Even more effective would be to sample the entire 10 MHz waveform instead of
just the zero crossings. By doing a best fit of the entire waveform, you should
be able to estimate the zero crossing with much greater precision, because now
the noise is averaged over the entire waveform instead of a
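A minimal sketch of that best-fit idea (all parameters are my illustrative assumptions: known carrier frequency, assumed sample rate, and a linear least-squares fit of in-phase/quadrature amplitudes via hand-rolled normal equations):

```python
import math
import random

random.seed(2)

# Sample a noisy 10 MHz sine, fit y = a*sin(w t) + b*cos(w t) by linear
# least squares at the known frequency, then read the zero crossing off
# the fitted phase instead of off a single noisy threshold crossing.
f0 = 10e6                      # assumed carrier frequency
fs = 200e6                     # assumed ADC sample rate
w = 2 * math.pi * f0
n = 1000
t = [i / fs for i in range(n)]
true_phi = 0.3
y = [math.sin(w * ti + true_phi) + random.gauss(0, 0.05) for ti in t]

# normal equations for the 2-parameter linear model
s = [math.sin(w * ti) for ti in t]
c = [math.cos(w * ti) for ti in t]
ss = sum(u * u for u in s)
cc = sum(u * u for u in c)
sc = sum(u * v for u, v in zip(s, c))
sy = sum(u * v for u, v in zip(s, y))
cy = sum(u * v for u, v in zip(c, y))
det = ss * cc - sc * sc
a = (cc * sy - sc * cy) / det
b = (ss * cy - sc * sy) / det

phi_hat = math.atan2(b, a)     # y ~ R * sin(w t + phi_hat)
t_zero = -phi_hat / w          # rising zero crossing nearest t = 0
```

Because the fit uses all n samples, the phase estimate improves roughly as 1/sqrt(n) compared with thresholding a single sample near the crossing.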
Hi
There are a number of papers out there that talk about decimation vs averaging
for ADEV. They have various data sets processed by both techniques.
Bottom line - decimation does a better job than averaging for ADEV. At least
that’s true if you want the result to resemble the “real” ADEV of
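A small numerical sketch of that decimation-vs-averaging difference (my own toy example, not taken from the papers mentioned): for simulated white phase noise, averaging blocks of phase samples filters the noise and pulls the computed ADEV below what plain decimation gives at the same tau.

```python
import random

random.seed(3)

def adev(x, tau):
    """Overlapping Allan deviation from equally spaced phase samples x,
    evaluated at tau equal to the sample spacing."""
    d = [x[i + 2] - 2 * x[i + 1] + x[i] for i in range(len(x) - 2)]
    return (sum(v * v for v in d) / (2 * tau * tau * len(d))) ** 0.5

tau0 = 1.0
k = 10
x = [random.gauss(0, 1e-9) for _ in range(100_000)]   # white PM, 1 ns rms

decimated = x[::k]                                    # keep every 10th phase point
averaged = [sum(x[i:i + k]) / k for i in range(0, len(x), k)]

a_dec = adev(decimated, k * tau0)   # resembles the "real" ADEV at tau = 10 s
a_avg = adev(averaged, k * tau0)    # biased low by the averaging filter
```

For white PM the averaged result lands about sqrt(k) below the decimated one, which is exactly the bandwidth/filtering effect discussed earlier in the thread.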
Hi
The more you “curve fit” or “average”, the more you are filtering the data.
Filtering does indeed impact the ADEV at both short taus and longer taus. You
need to be very careful if you filter, or you will mess up the data.
Bob
On Oct 22, 2014, at 7:42 PM, Didier Juges shali...@gmail.com wrote:
Hi
On Oct 22, 2014, at 5:57 PM, Hal Murray hmur...@megapathdsl.net wrote: