Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-08-04 Thread Magnus Danielson

Poul-Henning,

On 08/03/2015 01:07 AM, Poul-Henning Kamp wrote:


In message 55bdb002.8060...@rubidium.dyndns.org, Magnus Danielson writes:


For true white PM *random* noise you can move your phase samples around,
but you gain nothing by bursting them.


I gain nothing mathematically, but in practical terms it would be
a lot more manageable to record an average of 1000 measurements
once per second, than 1000 measurements every second.


Yes, averaging them in blocks and only sending the block result is indeed a 
good thing, as long as we can establish the behavior and avoid or remove 
any biases introduced. Bursting them in itself does not give much gain, 
as the processing needs to be done anyway and an even rate works just as 
well. A benefit of a small burstiness is that you can work on beat notes 
that are not a multiple of the tau0 you want, say 1 s.
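
As a concrete sketch of that block averaging in Python/numpy (the helper 
name and the block size are illustrative only, not any counter's API):

    import numpy as np

    def block_average(readings, block=1000):
        """Average consecutive groups of 'block' time-interval readings
        into one value per group, e.g. 1000 readings/s -> 1 value/s."""
        readings = np.asarray(readings, dtype=float)
        n = (len(readings) // block) * block   # drop any ragged tail
        return readings[:n].reshape(-1, block).mean(axis=1)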


As in any processing, cycle-unwrapping needs to be done; skipping it 
would waste the benefit.
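
A minimal unwrapping sketch, assuming successive readings move by less 
than half the beat period between samples (hypothetical helper, Python/numpy):

    import numpy as np

    def unwrap_ti(ti, period):
        """Undo wraps of time-interval readings taken modulo 'period'."""
        ti = np.asarray(ti, dtype=float)
        steps = np.diff(ti)
        # a step that jumps by ~k*period is pulled back by k*period
        fixes = -period * np.round(steps / period)
        return np.concatenate(([ti[0]], ti[1:] + np.cumsum(fixes)))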


For random noise, the effect of bursting, or indeed of aggregating into 
blocks of samples, is the same as the overlapping processing introduced 
for ADEV in the early 1970s as a first step towards better confidence 
intervals. For white noise, there is no correlation between any samples, 
so you can sample them at random. However, for ADEV the point is to 
analyze this for a particular observation interval, so for each measure 
being squared, the observation interval needs to be respected. For the 
colored noises, there is a correlation between the samples, and it is 
the correlation over the observation interval that is the main filtering 
mechanism of the ADEV. However, since the underlying source is noise, 
you can use any set of phase triplets to add to the accumulated 
variance. The burst or block average provides such an overlapping 
processing mechanism.
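
For reference, the overlapping estimator described above, as a sketch in 
Python/numpy (phase samples x in seconds at spacing tau0, tau = m*tau0):

    import numpy as np

    def oadev(x, tau0, m):
        """Overlapping Allan deviation from phase data: every available
        phase triplet (x[i], x[i+m], x[i+2m]) contributes one squared
        second difference to the accumulated variance."""
        x = np.asarray(x, dtype=float)
        d = x[2*m:] - 2.0*x[m:-m] + x[:-2*m]
        return np.sqrt(np.mean(d**2) / (2.0 * (m * tau0)**2))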


However, systematic noise such as the counter's first-order time 
quantization (thus ignoring any fine-grained variations) will interact 
with burst sampling in different ways depending on the burst length. 
This is the part we should look at to see how we best achieve a 
reduction of that noise, in order to reach the actual signal and 
reference noise more quickly.



For any other form of random
noise and for the systematic noise, you alter the total filtering
behavior [...]


Agreed.


I wonder whether the filter properties of the burst average are altered 
compared to an evenly spread block, such that when treated as MDEV 
measures we see a difference. The burst filter properties should be 
similar to those of PWM over the burst repetition rate.


I just contradicted myself. I will come back to this topic; one has to 
be careful, as filter properties will color the result and biases can 
occur. Most of these biases are usually not very useful, but the MDEV 
averaging is, if used correctly.


Cheers,
Magnus


Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-08-02 Thread Magnus Danielson

Hi Poul-Henning,

On 08/01/2015 10:32 PM, Poul-Henning Kamp wrote:


In message 49c4ccd3-09ce-48a4-82b8-9285a4381...@n1k.org, Bob Camp writes:


The approach you are using is still a discrete time sampling
approach. As such it does not directly violate the data requirements
for ADEV or MADEV.  As long as the sample burst is much shorter
than the Tau you are after, this will be true. If the samples cover < 1%
of the Tau, it is very hard to demonstrate a noise spectrum that
this process messes up.


So this is where it gets interesting, because I suspect that your
1% "let's play it safe" threshold is overly pessimistic.

I agree that there are other error processes than white PM which
would get messed up by this and that general low-pass filtering
would be much more suspect.

But what bothers me is that as far as I can tell from real-life
measurements, as long as the dominant noise process is white PM,
even 99% Tau averaging gives me the right result.

I have tried to find a way to plug this into the MVAR definition
based on phase samples (Wikipedia's first formula under Definition)
and as far as I can tell, it comes out the same in the end, provided
I assume only white PM noise.


I put that formula there, and I think Dave trimmed the text a little.

For true white PM *random* noise you can move your phase samples around, 
but you gain nothing by bursting them. For any other form of random 
noise, and for the systematic noise, you alter the total filtering 
behavior as compared to AVAR or MVAR, and it is through altering the 
frequency behavior that biases in values are born. MVAR itself has biases 
compared to AVAR for all noises due to its filtering behavior.


The bursting that you propose is similar to the uneven spreading of 
samples you have in dead-time sampling, where the time between the 
start samples of your frequency measures is T, but the time between the 
start and stop samples of your frequency measures is tau. This creates a 
different coloring of the spectrum than if the stop sample of the 
previous frequency measure is also the start sample of the next 
frequency measure. This coloring then creates a biasing depending on 
the frequency spectrum of the noise (systematic or random), so you need 
to correct it with the appropriate bias function. See the section on 
bias functions in the Allan deviation Wikipedia article, and do read the 
original Dave Allan February 1966 article.


For doing what you propose, you will have to define the time properties 
of the burst, so you would need the time between the bursts 
(tau) and the time between burst samples (alpha). You would also need to 
define the number of burst samples (O). You can define a bias function 
through analysis. However, you can sketch the behavior for various 
noises. For white random phase noise, there is no correlation between 
phase samples, which also makes the time between them uninteresting, so 
we can re-arrange our sampling for that noise as we see fit. For other 
noises, you will create a coloring, and I predict that the number of 
averaged samples O will set the filtering effect, but the time between 
samples should not be important. For systematic noise such as the 
quantization noise, you will again get an interaction, and that with a 
filtering effect.
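
The sampling pattern those three parameters describe can be written down 
directly (a sketch; the names tau, alpha and O are taken from the 
paragraph above):

    import numpy as np

    def burst_times(n_bursts, tau, alpha, O):
        """Timestamps for bursts repeating every tau seconds, each
        holding O samples spaced alpha seconds apart."""
        starts = np.arange(n_bursts) * tau
        offsets = np.arange(O) * alpha
        return (starts[:, None] + offsets[None, :]).ravel()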


At times the filtering effect is useful (see MVAR and PVAR), but often 
it is just an uninteresting side effect.



But I have not found any references to this optimization anywhere
and either I'm doing something wrong, or I'm doing something else
wrong.

I'd like to know which it is :-)



You're doing it wrong. :)

PS. At a music festival, so the quality references are at home.

Cheers.
Magnus


Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-08-02 Thread Poul-Henning Kamp

In message 55bdb002.8060...@rubidium.dyndns.org, Magnus Danielson writes:

For true white PM *random* noise you can move your phase samples around, 
but you gain nothing by bursting them.

I gain nothing mathematically, but in practical terms it would be
a lot more manageable to record an average of 1000 measurements
once per second, than 1000 measurements every second.

For any other form of random 
noise and for the systematic noise, you alter the total filtering 
behavior [...]

Agreed.


-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-08-01 Thread Poul-Henning Kamp

In message 55bc202f.6060...@gmail.com, Daniel Mendes writes:

Can someone explain in very simple terms what this graph means?

My current interpretation is as follows:

For a 100Hz input, if you look at your signal in 0.1s intervals, 
there's about 1.0e-11 frequency error on average (RMS average?)

Close: To a first approximation MVAR is the standard-deviation of
the frequency, as a function of the time-interval you measure the
frequency over.

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-08-01 Thread Bob Camp
Hi


If on the same graph you plotted the “low pass filter” response of your sample 
/ average
process, it would show how much / how little impact there likely is. It’s not 
any different than 
a standard circuit analysis. The old “poles at 10X frequency don’t count” rule. 
No measurement 
we ever make is 100% perfect, so a small impact does not immediately rule out 
an approach. 

Your measurement gets better by some number related to the number of samples. 
It might be
square root of N, it could be something else. If it’s sqrt(N), a 100 sample 
burst is getting you an
order of magnitude better number when you sample. You could go another 10X at 
10K samples.
A very real question comes up about “better” in this case. It probably does not 
improve accuracy, 
resolution, repeatability, and noise floor all to the same degree. At some 
point it improves some 
of those and makes your MADEV measurement less accurate. 

=

Because we strive for perfection in our measurements, *anything* that impacts 
their accuracy is suspect. 
A very closely related (and classic) example is lowpass filtering in front of 
an ADEV measurement.
People have questioned doing this back at least into the early 1970’s. There 
may have been earlier questions,
if so I was not there to hear them. It took about 20 years to come up with a 
“blessed” filtering approach 
for ADEV. It still is suspect to some because it (obviously) changes the ADEV 
plot you get at the shortest tau. 
That kind of decades long debate makes getting a conclusive answer to a 
question like this unlikely. 

=

The approach you are using is still a discrete time sampling approach. As such 
it does not directly violate
the data requirements for ADEV or MADEV.  As long as the sample burst is much 
shorter than the Tau you 
are after, this will be true. If the samples cover < 1% of the Tau, it is very 
hard to demonstrate a noise 
spectrum that this process messes up. Put in the context of the circuit pole, 
you now are at 100X the design 
frequency. At that point it’s *way* less of a filter than the sort of  vaguely 
documented ADEV pre-filtering 
that was going on for years and years ….. (names withheld to protect the guilty 
…)

Is this in a back door way saying that these numbers probably are (at best) 1% 
of reading sorts of data?
Yes indeed that’s an implicit part of my argument. If you have devices that 
repeat to three digits on multiple 
runs, this may not be the approach you would want to use. In 40 years of doing 
untold thousands of these 
measurements I have yet to see devices (as opposed to instrument / measurement 
floors) that repeat to 
under 1% of reading. 

Bob 
 
 On Jul 31, 2015, at 5:04 PM, Poul-Henning Kamp p...@phk.freebsd.dk wrote:
 
 
 
 If you look at the attached plot there are four datasets.
 
 And of course...
 
 Here it is:
 



Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-08-01 Thread Bob Camp
Hi

If you take more data (rate is faster), the noise floor of the data set at 1 
second goes down as the square root of N (speed up by 10X, noise down by ~1/3).

Past a point, the resultant plot is not messed up by the process involved in 
the sampling.

Bob

 On Jul 31, 2015, at 9:26 PM, Daniel Mendes dmend...@gmail.com wrote:
 
 
 Ok... time to show my lack of knowledge in public and ask a very simple 
 question:
 
 Can someone explain in very simple terms what this graph means?
 
 My current interpretation is as follows:
 
 For a 100Hz input, if you look at your signal in 0.1s intervals, there's 
 about 1.0e-11 frequency error on average (RMS average?)
 
 How far from the truth am I?
 
 Daniel
 
 
 
 On 31/07/2015 18:04, Poul-Henning Kamp wrote:
 
 
 If you look at the attached plot there are four datasets.
 And of course...
 
 Here it is:
 
 
 



Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-08-01 Thread Poul-Henning Kamp

In message 49c4ccd3-09ce-48a4-82b8-9285a4381...@n1k.org, Bob Camp writes:

 The approach you are using is still a discrete time sampling
 approach. As such it does not directly violate the data requirements
 for ADEV or MADEV.  As long as the sample burst is much shorter
 than the Tau you are after, this will be true. If the samples cover < 1%
 of the Tau, it is very hard to demonstrate a noise spectrum that
 this process messes up.

So this is where it gets interesting, because I suspect that your
1% "let's play it safe" threshold is overly pessimistic.

I agree that there are other error processes than white PM which
would get messed up by this and that general low-pass filtering
would be much more suspect.

But what bothers me is that as far as I can tell from real-life
measurements, as long as the dominant noise process is white PM,
even 99% Tau averaging gives me the right result.

I have tried to find a way to plug this into the MVAR definition
based on phase samples (Wikipedia's first formula under Definition)
and as far as I can tell, it comes out the same in the end, provided
I assume only white PM noise.

But I have not found any references to this optimization anywhere
and either I'm doing something wrong, or I'm doing something else
wrong.

I'd like to know which it is :-)

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-08-01 Thread Bob Camp
Hi

 On Aug 1, 2015, at 4:32 PM, Poul-Henning Kamp p...@phk.freebsd.dk wrote:
 
 
 In message 49c4ccd3-09ce-48a4-82b8-9285a4381...@n1k.org, Bob Camp writes:
 
 The approach you are using is still a discrete time sampling
 approach. As such it does not directly violate the data requirements
 for ADEV or MADEV.  As long as the sample burst is much shorter
 than the Tau you are after, this will be true. If the samples cover < 1%
 of the Tau, it is very hard to demonstrate a noise spectrum that
 this process messes up.
 
 So this is where it gets interesting, because I suspect that your
 1% "let's play it safe" threshold is overly pessimistic.


I completely agree with that. It’s more a limit that lets you do *some* 
sampling but steers clear
of any real challenge to the method. 

 
 I agree that there are other error processes than white PM which
 would get messed up by this and that general low-pass filtering
 would be much more suspect.
 
 But what bothers me is that as far as I can tell from real-life
 measurements, as long as the dominant noise process is white PM,
 even 99% Tau averaging gives me the right result.

Indeed a number of people noticed this with low pass filtering … back 
a number of years ago (~1975)…

The key point being that white PM is the dominant noise process. If you have a 
discrete spur in there, 
it will indeed make a difference. You can fairly easily construct a sample 
averaging process that drops a zero on a spur 
(average over exactly a full period …). How that works with discontinuous 
sampling is 
not quite as clean as how it works with a continuous sample (you now average 
over N out of M periods… ). 

 
 I have tried to find a way to plug this into the MVAR definition
 based on phase samples (Wikipedia's first formula under Definition)
 and as far as I can tell, it comes out the same in the end, provided
 I assume only white PM noise.

Which is why *very* sharp people debated filtering on ADEV for years before 
anything really got even partially settled.

 
 But I have not found any references to this optimization anywhere
 and either I'm doing something wrong, or I'm doing something else
 wrong.
 
 I'd like to know which it is :-)

Well umm …. errr …. *some* people have been known to simply document
what they do. They then demonstrate that for normal noise processes it’s not an 
issue. 
Do an “adequate” number of real world comparisons and then move on with it. 
There are some pretty big names in the business that have gone that route. Some
of them are often referred to with three and four letter initials …. In this 
case probably note
the issue (or advantage !!) with discrete spurs and move on. 

If you are looking for real fun, I would dig out Stein’s paper on pre-filtering 
and ADEV. That would 
give you a starting point and a framework to extend to MADEV. 

Truth in lending: The whole discrete spur thing described above is entirely 
from work on a very similar 
problem. I have *not* proven it with your sampling approach. 

Bob


 



Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-08-01 Thread Daniel Mendes


Ok... time to show my lack of knowledge in public and ask a very simple 
question:


Can someone explain in very simple terms what this graph means?

My current interpretation is as follows:

For a 100Hz input, if you look at your signal in 0.1s intervals, 
there's about 1.0e-11 frequency error on average (RMS average?)


How far from the truth am I?

Daniel



On 31/07/2015 18:04, Poul-Henning Kamp wrote:




If you look at the attached plot there are four datasets.

And of course...

Here it is:





Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-07-31 Thread Poul-Henning Kamp


If you look at the attached plot there are four datasets.

And of course...

Here it is:

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-07-31 Thread Poul-Henning Kamp
 Shouldn't the quantization/ measurement noise power be easy to measure?

So I guess I haven't explained my idea well enough yet.

If you look at the attached plot there are four datasets.

100Hz, 10Hz and 1Hz are the result of collecting TI measurements
at these rates.

As expected, the tau^(-3/2) slope white PM noise is reduced by sqrt(10)
every time we increase the measurement frequency by a factor of 10.

The 1Hz 10avg dataset is where the HP5370 does 10 measurements
as fast as possible, once per second, and returns the average.

The key observation here is I get the same sqrt(10) improvement
without having to capture, store and process 10 times as many
datapoints.

Obviously I learn nothing about the Tau [0.1 ... 1.0] range, but
as you can see, that's not really a loss in this case.

*If* this method is valid, possibly conditioned on paying attention
to the counter's STDDEV calculation...

and *If* we can get the turbo-5370 to give us an average of 5000
measurements once every second.

*Then* the PM noise curtain drops from 5e-11 to 7e-13 @ Tau=1s
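
Those two numbers are just the sqrt(N) white-PM scaling applied to the 
floor seen in the plot (a back-of-envelope check, not a measurement):

    import math

    floor = 5e-11                # white-PM curtain at tau = 1 s, per the plot
    n = 5000                     # averaged measurements per second
    print(floor / math.sqrt(n))  # ~7.1e-13, the figure quoted above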


Poul-Henning

PS: The above plot is made by processing a single 100 Hz raw data file,
which is my new HP5065 against a GPSDO.

-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-07-31 Thread Magnus Danielson

Hi James,

On 07/30/2015 06:34 PM, James Peroulas wrote:

My understanding is that MVAR(m*tau0) is equivalent to filtering the phase
samples x(n) by averaging m samples to produce x'(n)
[x'(n) = (1/m)*(x(n) + x(n+1) + ... + x(n+m-1))] and then calculating AVAR for
tau=m*tau0 on the filtered sequence. Thus, MVAR already performs an
averaging/ lowpass filtering operation. Adding another averaging filter
prior to calculating MVAR would seem to be defining a new type of stability
measurement.


Yes, that's how MVAR works.


Not familiar with the 5370... Is it possible to configure it to average
measurements over the complete tau0 interval with no dead time between
measurements? Assuming the 5370 can average 100 evenly spaced measurements
within the measurement interval (1s?), calculating MVAR on the captured
sequence would produce MVAR(m*0.01) for m being a multiple of 100; i.e.,
tau0 here is actually 0.01, not 1, but values for MVAR(tau) for taus less
than 1s are not available.


The stock 5370 isn't a great tool for this. The accelerator board, which 
replaces the CPU and allows us to add algorithms, makes the counter 
hardware much better adapted to this setup.



Shouldn't the quantization/ measurement noise power be easy to measure?
Can't it just be subtracted from the MVAR plot? I've done this with AVAR in
the past to produce 'seemingly' meaningful results (i.e. I'm not an expert).


You can curve-fit an estimation of that noise and remove it from the 
plot. For lower taus the confidence intervals will suffer in practice.



I calculated the PSD of x(n) and it was clear where the measurements were
being limited by noise (flat section at higher frequencies). From this I
was able to estimate the measurement noise power.


It is. Notice that some of it is noise and some is noise-like 
systematics from the quantization.



AVAR_MEASURED(tau)=AVAR_CUT(tau)+AVAR_REF(tau)+AVAR_MEAS(tau)

i.e. The measured AVAR is equal to the sum of the AVAR of the clock under
test (CUT), the AVAR of the reference clock, and the AVAR of the
measurement noise. If the reference clock is much better than the CUT
AVAR_REF(tau) can be ignored. AVAR_MEAS(tau) is known from the PSD of x(n)
and can be subtracted from AVAR_MEASURED(tau) to produce a better estimate
of AVAR_CUT(tau).

Depending on the confidence intervals of AVAR_MEASURED(tau) and the noise
power estimate, you can get varying degrees of cancellation. 10dB of
improvement seemed quite easy to obtain.


Using the Lambda counter approach, filtering with the averaging blocks of 
the Modified Allan Variance, makes the white phase noise slope go as 
1/tau^3 rather than 1/tau^2 as it does for the normal Allan Variance. 
This means that the limiting slope of the white noise will cut over to 
the actual noise at lower tau, so that is an important tool already 
there. Also, it achieves this with known properties in confidence 
intervals. Using the Omega counter approach, you can get a further 
improvement of about 1.25 dB, which is deemed optimal, as the Omega 
counter method is a linear regression / least-squares method for 
estimating the frequency samples, which are then used for AVAR processing.
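
A minimal sketch of the Omega-style estimate for one block of phase 
samples (least-squares slope of phase against time; hypothetical helper, 
Python/numpy):

    import numpy as np

    def omega_freq(x, tau0):
        """Fractional frequency of one block: least-squares slope of
        phase x (seconds) against time, samples spaced tau0 seconds."""
        t = np.arange(len(x)) * tau0
        slope, _intercept = np.polyfit(t, x, 1)
        return slope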


The next trick to pull is to do cross correlation of two independent 
channels, so that their noise does not correlate. This can help for some 
of it, but systematics can become a limiting factor.


Cheers,
Magnus



Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-07-30 Thread James Peroulas
My understanding is that MVAR(m*tau0) is equivalent to filtering the phase
samples x(n) by averaging m samples to produce x'(n)
[x'(n) = (1/m)*(x(n) + x(n+1) + ... + x(n+m-1))] and then calculating AVAR for
tau=m*tau0 on the filtered sequence. Thus, MVAR already performs an
averaging/ lowpass filtering operation. Adding another averaging filter
prior to calculating MVAR would seem to be defining a new type of stability
measurement.
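
That construction translates almost line for line into code (a sketch in 
Python/numpy, assuming phase samples x at spacing tau0 and tau = m*tau0):

    import numpy as np

    def mdev(x, tau0, m):
        """MDEV at tau = m*tau0: m-sample moving average of the phase,
        then an AVAR-style second difference on the averaged sequence."""
        x = np.asarray(x, dtype=float)
        xp = np.convolve(x, np.ones(m) / m, mode='valid')   # x'(n)
        d = xp[2*m:] - 2.0*xp[m:-m] + xp[:-2*m]
        return np.sqrt(np.mean(d**2) / (2.0 * (m * tau0)**2))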

Not familiar with the 5370... Is it possible to configure it to average
measurements over the complete tau0 interval with no dead time between
measurements? Assuming the 5370 can average 100 evenly spaced measurements
within the measurement interval (1s?), calculating MVAR on the captured
sequence would produce MVAR(m*0.01) for m being a multiple of 100; i.e.,
tau0 here is actually 0.01, not 1, but values for MVAR(tau) for taus less
than 1s are not available.

Shouldn't the quantization/ measurement noise power be easy to measure?
Can't it just be subtracted from the MVAR plot? I've done this with AVAR in
the past to produce 'seemingly' meaningful results (i.e. I'm not an expert).

I calculated the PSD of x(n) and it was clear where the measurements were
being limited by noise (flat section at higher frequencies). From this I
was able to estimate the measurement noise power.

AVAR_MEASURED(tau)=AVAR_CUT(tau)+AVAR_REF(tau)+AVAR_MEAS(tau)

i.e. The measured AVAR is equal to the sum of the AVAR of the clock under
test (CUT), the AVAR of the reference clock, and the AVAR of the
measurement noise. If the reference clock is much better than the CUT
AVAR_REF(tau) can be ignored. AVAR_MEAS(tau) is known from the PSD of x(n)
and can be subtracted from AVAR_MEASURED(tau) to produce a better estimate
of AVAR_CUT(tau).

Depending on the confidence intervals of AVAR_MEASURED(tau) and the noise
power estimate, you can get varying degrees of cancellation. 10dB of
improvement seemed quite easy to obtain.
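
The subtraction itself is one line; clamping at zero guards against the 
difference going negative within its confidence interval (a sketch, with 
names echoing the decomposition above):

    import numpy as np

    def avar_cut_estimate(avar_measured, avar_meas):
        """AVAR_CUT per the decomposition above, assuming AVAR_REF is
        negligible."""
        diff = np.asarray(avar_measured) - np.asarray(avar_meas)
        return np.maximum(diff, 0.0)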

James



Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-07-29 Thread Ole Petter Ronningen
*Very* much looking forward to some insight on this method - I've done
pretty much the same experiment with an E1740A (48.8ps resolution): trigger
from a Z3805A PPS, start the TI measurement on the first rising edge of the
Z3805A 10 MHz (on channel 1) following the trigger, stop the TI measurement
on (e.g.) the 30th rising edge of whatever signal is on channel 2. Capture
100 samples gap-free every trigger, and average as described. It is
severely bandwidth-limited by the GPIB interface as to the number of
samples that can be collected per second, but the E1740A can
store up to ~500k samples, so it can all be batch processed after the
measurement is complete. It will be very interesting to see if the method
has any merit.

Ole



Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-07-29 Thread Bruce Griffiths
If the burst rate is high enough, the interpolators will no longer be able to 
maintain phase lock; the question is how high a burst rate is feasible without 
losing lock?
Bruce 




Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-07-29 Thread Magnus Danielson

Good morning,

On 07/28/2015 11:51 PM, Poul-Henning Kamp wrote:

Sorry this is a bit long-ish, but I figure I'm saving time putting
in all the details up front.

The canonical time-nut way to set up a MVAR measurement is to feed
two sources to a HP5370 and measure the time interval between their
zero crossings often enough to resolve any phase ambiguities caused
by frequency differences.

The computer unfolds the phase wrap-arounds, and calculates the
MVAR using the measurement rate, typically 100, 10 or 1 Hz, as the
minimum Tau.

However, the HP5370 has a noise floor in the low picoseconds, which
creates the well known diagonal left bound on what we can measure
this way.


One of the papers that I referenced in my long post on PDEV has a nice 
section on that boundary, worth reading.



So it is tempting to do this instead:

Every measurement period, we let the HP5370 do a burst of 100
measurements[*] and feed the average to MVAR, and push the diagonal
line an order of magnitude (sqrt(100)) further down.

At its specified rate, the HP5370 will take 1/30th of a second to
do a 100 sample average measurement.

If we are measuring once each second, that's only 3% of the Tau.

No measurement is ever instantaneous, simply because the two zero
crossings are not happening right at the measurement epoch.

If I measure two 10MHz signals the canonical way, the first zero
crossing could come as late as 100(+epsilon) nanoseconds after the
epoch, and the second as much as 100(+epsilon) nanoseconds later.

An actual point of the measurement doesn't even exist, but picking
the midpoint we get an average delay of 75ns, worst case 150ns.

That works out to one part in 13 million which is a lot less than 3%,
but certainly not zero, as the MVAR formula presumes.

Eyeballing it, 3% is well below the reproducibility I see on MVAR
measurements, and I have therefore waved the method and result
through, without a formal proof.

However, I have very carefully made sure to never show anybody
any of these plots because of the lack of proof.

Thanks to John's Turbo-5370 we can do burst measurements at much
higher rates than 3000/s, and thus potentially push the diagonal
limit more than a decade to the left, while still doing minimum
violence to the mathematical assumptions under MVAR.


The Turbo-5370 creates an interesting platform for us to play with, indeed.


[*] The footnote is this: The HP5370 firmware does not make triggered
burst averages an easy measurement, but we can change that, in
particular with John's Turbo-5370.

But before I attempt to do that, I would appreciate if a couple of
the more math-savvy time-nuts could ponder the soundness of the
concept.


You rang.


Apart from the delayed measurement point, I have not been able
to identify any issues.

The frequency spectrum filtered out by the averaging is waaay to
the left of our minimum Tau.

Phase wrap-around inside bursts can be detected and unfolded
in the processing.

Am I overlooking anything ?


You *will* get an improvement in your response, yes, but it will not pan 
out as you would like it to. What you create is, just as with the Lambda 
and Omega counters, a pre-filter mechanism that comes before the ADEV or 
MDEV filtering prior to squaring. The filtering aims at filtering out 
white noise or (systematic) noise behaving like white noise. The 
filtering method was proposed by J.J. Snyder in 1980 for laser 
processing, and further in his 1981 paper, published at the same time as 
Allan et al. published the MDEV/MVAR paper that is directly inspired by 
Snyder's work. Snyder shows that rather than getting an MVAR slope of 
1/tau^2 you get a slope of 1/tau^3 for the ADEV estimation. This slope 
change comes from changing the system bandwidth as the tau increases, 
rather than the classical method of having a fixed system bandwidth and 
then processing ADEV on that. MDEV/MVAR uses a software filter to alter 
the system bandwidth along with tau, and thus provides a fix for the 
15-year hole of not being able to separate white and flicker phase noise 
that Dave Allan was so annoyed with.

Anyway, the key point here is that the filter bandwidth changes with 
tau. The form of discussions we have had on Lambda (53132) and Omega 
(CNT-90) counters and the hockey-puck response they create is because 
they have a fixed pre-filtering bandwidth. What you propose is a 
similar form of weighting mechanism, providing a similar type of filtering 
mechanism and a similar fixed pre-filtering bandwidth, and thus causing a 
similar type of response. Just as with the Lambda and Omega counters, 
using MDEV rather than ADEV does not really heal the problem, since for 
longer taus you observe signals in the pass-band of the fixed 
low-pass filter, and your behavior has gone back to the behavior of the 
ADEV or MDEV of the counter without the filtering mechanism. This is the 
inescapable fact.
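
The two slopes are easy to reproduce numerically for pure white PM (a 
sketch with simulated phase data, tau0 normalized to 1):

    import numpy as np

    rng = np.random.default_rng(1)
    x = 1e-11 * rng.standard_normal(200_000)   # white PM phase samples

    for m in (1, 4, 16, 64):
        d = x[2*m:] - 2*x[m:-m] + x[:-2*m]
        avar = np.mean(d**2) / (2 * m**2)       # expect ~ 1/m^2 scaling
        xp = np.convolve(x, np.ones(m)/m, mode='valid')
        dm = xp[2*m:] - 2*xp[m:-m] + xp[:-2*m]
        mvar = np.mean(dm**2) / (2 * m**2)      # expect ~ 1/m^3 scaling
        print(m, avar, mvar)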


To solve this, you will have to build a pre-filtering mechanism that 
allows itself to be scaled along with tau, just as the MDEV filter is.

Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-07-29 Thread Bob Camp
Hi

The “simple” way to see what this does is to plug your sampling approach
into the DSP world and see what sort of filter it creates. Put another way - 
would you 
use this to make a lowpass filter? If so how good a filter would it be?

At least with a hand wave, I would not deliberately use this process as a 
lowpass. Since
it averages, it does have lowpass properties. Until you get to the point that 
you are averaging
a cycle, it’s not got *much* of a lowpass. It’s a discontinuous boxcar, you 
likely will get 
a few zeros in the passband around the width of your sample window. 

Now the question: Does this lowpass (crummy though it is) impact your noise? 
Well sure, to some 
degree it does. The big way it would is if those zeros (which probably aren’t 
as deep as you might think) line
up with a messy spike in your noise. 

So where are the zeros? If you take 100 samples of a 100 ns period waveform, 
your boxcar is
10 us wide. Expect zeros (or at least dips) at 100 kHz × N. Move the samples up 
to 1,000 and your
dips are at 10 kHz × N. I'd suggest that most "normal noise" spectra are going 
to look pretty much
the same with and without what’s been taken away.  
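
The null locations follow from the boxcar's sinc response (a sketch; 
numpy's sinc is sin(pi*x)/(pi*x), so the nulls sit at multiples of 1/T):

    import numpy as np

    def boxcar_response(f, T):
        """Magnitude response of a T-second boxcar average at f Hz."""
        return np.abs(np.sinc(f * T))

    # 100 samples of a 100 ns period -> T = 10 us: first null at 100 kHz
    print(boxcar_response(np.array([50e3, 100e3, 200e3]), 10e-6))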

Now, take it up to 10% of the total 1 second window and you are at 10Hz. Those 
zeros *are* going
to mess with your spectra and likely will impact your 1 second MVAR or AVAR or 
whatever. 

All that *assumes* you are taking a reading every cycle with the uber-5370. If 
you are doing it old school,
your sampling process will wrap the noise spectrum a bit. That’s going to make 
the analysis a bit more
messy. 

Bob



Re: [time-nuts] Modified Allan Deviation and counter averaging

2015-07-29 Thread John Miles

 Apart from the delayed measurement point, I have not been able
 to identify any issues.
 
 The frequency spectrum filtered out by the averaging is waaay to
 the left of our minimum Tau.
 
 Phase wrap-around inside bursts can be detected and unfolded
 in the processing.
 
 Am I overlooking anything ?

I think this is basically a valid thing to do, under specific conditions.  It 
works well with the Wavecrest DTS boxes, which can take several thousand 
samples per second and distill them down to a single TI reading.  I've observed 
good agreement between ADEV plots taken with the DTS2070 and direct-digital 
analyzers down to the ~2E-12 level at t=1s. 

The main caveat is that the burst of averaged readings needs to take up a very 
small portion of the tau-zero interval, as you point out, to keep the averaging 
effects out of the visible portion of the plot.  This might not be a very big 
problem with MDEV but it is definitely an issue with ADEV.

The second thing to watch for is even more important: while the host software 
(TimeLab, Stable32, etc.) can handle phase wraps that occur between counter 
readings, the sub-reading averaging algorithm in the counter will not.  Phase 
wraps in the middle of a burst will corrupt the data badly, and I don't see any 
reliable way to detect and unfold them after the fact.  

So if John's firmware can handle intra-burst phase wraps before returning each 
averaged reading to the software, it could be a big win.  Otherwise this 
technique should be confined to cases where two extremely stable sources are 
being compared with no possibility of phase wraps.  It would be a good way to 
keep an eye on the short-term behavior of a pair of Cs or GPS clocks, for 
instance.

-- john, KE5FX
Miles Design LLC




[time-nuts] Modified Allan Deviation and counter averaging

2015-07-28 Thread Poul-Henning Kamp
Sorry this is a bit long-ish, but I figure I'm saving time putting
in all the details up front.

The canonical time-nut way to set up a MVAR measurement is to feed
two sources to a HP5370 and measure the time interval between their
zero crossings often enough to resolve any phase ambiguities caused
by frequency differences.

The computer unfolds the phase wrap-arounds, and calculates the
MVAR using the measurement rate, typically 100, 10 or 1 Hz, as the
minimum Tau.

However, the HP5370 has a noise floor in the low picoseconds, which
creates the well known diagonal left bound on what we can measure
this way.

So it is tempting to do this instead:

Every measurement period, we let the HP5370 do a burst of 100
measurements[*] and feed the average to MVAR, and push the diagonal
line an order of magnitude (sqrt(100)) further down.

At its specified rate, the HP5370 will take 1/30th of a second to
do a 100 sample average measurement.

If we are measuring once each second, that's only 3% of the Tau.

No measurement is ever instantaneous, simply because the two zero
crossings are not happening right at the measurement epoch.

If I measure two 10MHz signals the canonical way, the first zero
crossing could come as late as 100(+epsilon) nanoseconds after the
epoch, and the second as much as 100(+epsilon) nanoseconds later.

An actual point of the measurement doesn't even exist, but picking
the midpoint we get an average delay of 75ns, worst case 150ns.

That works out to one part in 13 million which is a lot less than 3%,
but certainly not zero, as the MVAR formula presumes.

Eyeballing it, 3% is well below the reproducibility I see on MVAR
measurements, and I have therefore waved the method and result
through, without a formal proof.

However, I have very carefully made sure to never show anybody
any of these plots because of the lack of proof.

Thanks to John's Turbo-5370 we can do burst measurements at much
higher rates than 3000/s, and thus potentially push the diagonal
limit more than a decade to the left, while still doing minimum
violence to the mathematical assumptions under MVAR.

[*] The footnote is this: The HP5370 firmware does not make triggered
burst averages an easy measurement, but we can change that, in
particular with John's Turbo-5370.

But before I attempt to do that, I would appreciate if a couple of
the more math-savvy time-nuts could ponder the soundness of the
concept.

Apart from the delayed measurement point, I have not been able
to identify any issues.

The frequency spectrum filtered out by the averaging is waaay to
the left of our minimum Tau. 

Phase wrap-around inside bursts can be detected and unfolded
in the processing.

Am I overlooking anything ?


-- 
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
p...@freebsd.org | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.