Good morning,

On 07/28/2015 11:51 PM, Poul-Henning Kamp wrote:
> Sorry this is a bit long-ish, but I figure I'm saving time putting
> in all the details up front.
>
> The canonical time-nut way to set up an MVAR measurement is to feed
> two sources to an HP5370 and measure the time interval between their
> zero crossings often enough to resolve any phase ambiguities caused
> by frequency differences.
>
> The computer unfolds the phase wrap-arounds and calculates the
> MVAR using the measurement rate, typically 100, 10 or 1 Hz, as the
> minimum Tau.
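For concreteness, a minimal numpy sketch of that unfolding and of the
overlapping MVAR estimator; the 10 MHz inputs and 1 Hz rate are taken
from the text, while the function names and constants are illustrative:

    import numpy as np

    F0 = 10e6          # 10 MHz inputs: the wrap-around period is 100 ns
    PERIOD = 1.0 / F0
    TAU0 = 1.0         # 1 Hz measurement rate as the minimum Tau

    def unfold(ti):
        """Unfold the +/- one-period wrap-arounds in raw TI readings."""
        ti = np.asarray(ti, dtype=float)
        wraps = np.round(np.diff(ti) / PERIOD)   # steps > half a period
        return ti - PERIOD * np.concatenate(([0.0], np.cumsum(wraps)))

    def mvar(x, m, tau0=TAU0):
        """Overlapping modified Allan variance at tau = m*tau0, x in s."""
        x = np.asarray(x, dtype=float)
        d = x[2*m:] - 2.0 * x[m:-m] + x[:-2*m]       # 2nd differences, lag m
        s = np.convolve(d, np.ones(m), "valid") / m  # inner average over m
        return np.mean(s**2) / (2.0 * (m * tau0) ** 2)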

> However, the HP5370 has a noise floor in the low picoseconds, which
> creates the well-known diagonal left bound on what we can measure
> this way.

One of the papers that I referenced in my long post on PDEV has a nice section on that boundary, worth reading.

> So it is tempting to do this instead:
>
> Every measurement period, we let the HP5370 do a burst of 100
> measurements[*] and feed the average to MVAR, pushing the diagonal
> line an order of magnitude (sqrt(100)) further down.
>
> At its specified rate, the HP5370 will take 1/30th of a second to
> do a 100-sample average measurement.
>
> If we are measuring once each second, that's only 3% of the Tau.
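Spelling that arithmetic out, with the ~3000 meas/s specified rate
mentioned further down:

    100 samples at 3000/s:   100/3000 s = 1/30 s ~= 33 ms
    fraction of a 1 s Tau:   (1/30 s)/(1 s) ~= 3.3%
    noise floor:             improves by sqrt(100) = 10, one decade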

> No measurement is ever instantaneous, simply because the two zero
> crossings are not happening right at the measurement epoch.
>
> If I measure two 10 MHz signals the canonical way, the first zero
> crossing could come as late as 100(+epsilon) nanoseconds after the
> epoch, and the second as much as 100(+epsilon) nanoseconds later.
>
> An actual point of measurement doesn't even exist, but picking
> the midpoint we get an average delay of 75 ns, worst case 150 ns.
>
> That works out to one part in 13 million, which is a lot less than
> 3%, but certainly not zero, as the MVAR formula presumes.
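Worked out, taking both crossings as uniformly distributed over one
100 ns cycle:

    first zero crossing:   uniform in [0, 100) ns after the epoch, mean 50 ns
    second zero crossing:  up to 100 ns after the first, mean another 50 ns
    midpoint of the two:   mean delay 50 + 50/2 = 75 ns, worst case 150 ns
    relative to 1 s Tau:   75 ns / 1 s = 7.5e-8, ~one part in 13 million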

> Eyeballing it, 3% is well below the reproducibility I see on MVAR
> measurements, and I have therefore waved the method and result
> through without a formal proof.
>
> However, I have very carefully made sure to never show anybody
> any of these plots, because of the lack of proof.
>
> Thanks to John's Turbo-5370 we can do burst measurements at much
> higher rates than 3000/s, and thus potentially push the diagonal
> limit more than a decade to the left, while still doing minimum
> violence to the mathematical assumptions under MVAR.

The Turbo-5370 creates an interesting platform for us to play with, indeed.

> [*] The footnote is this: The HP5370 firmware does not make triggered
> burst averages an easy measurement, but we can change that, in
> particular with John's Turbo-5370.
>
> But before I attempt to do that, I would appreciate it if a couple of
> the more math-savvy time-nuts could ponder the soundness of the
> concept.

You rang.

> Apart from the delayed measurement point, I have not been able
> to identify any issues.
>
> The frequency spectrum filtered out by the averaging is waaaay to
> the left of our minimum Tau.
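As a rough sanity check, using the 3000/s burst numbers from above:

    burst length:       100/3000 s = 1/30 s, so the averaging filter's
                        first null sits at 30 Hz
    1 Hz measurements:  Nyquist at 0.5 Hz
    margin:             the pre-filter corner is a factor of ~60 above
                        the band the 1 Hz series can represent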

> Phase wrap-around inside bursts can be detected and unfolded
> in the processing.
>
> Am I overlooking anything?

You *will* get an improvement in your response, yes, but it will not pan out as you would like it to. What you create is, just as with the Lambda and Omega counters, a pre-filter mechanism that comes before the ADEV or MDEV filtering prior to squaring. The filtering aims at removing white noise, or (systematic) noise behaving like white noise.

The filtering method was proposed by J.J. Snyder in 1980 for laser measurement processing, and developed further in his 1981 paper, published at the same time as Allan et al. published the MDEV/MVAR paper that is directly inspired by Snyder's work. Snyder shows that rather than getting the classical AVAR slope of 1/tau^2, you get a slope of 1/tau^3 for the variance estimate. This slope change comes from the system bandwidth changing as the tau increases, rather than the classical method of having a fixed system bandwidth and then processing ADEV on that. MDEV/MVAR uses a "software filter" to alter the system bandwidth along with tau, and thus provides a fix for the 15-year hole of not being able to separate white and flicker phase noise that Dave Allan was so annoyed with.

Anyway, the key point here is that the filter bandwidth changes with tau. The discussions we have had on the Lambda (53132) and Omega (CNT-90) counters, and the hockey-puck response they create, come down to their fixed pre-filtering bandwidth. What you propose is a similar form of weighting mechanism, providing a similar type of filtering with a similarly fixed pre-filtering bandwidth, and thus causing a similar type of response. Just as with the Lambda and Omega counters, using MDEV rather than ADEV does not really heal the problem: for longer taus you observe signals in the pass-band of the fixed low-pass filter, and your behavior has gone back to that of the ADEV or MDEV of the counter without the filtering mechanism. This is the inescapable fact.
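To put numbers on the bandwidth argument, here is a small numpy sketch
comparing the magnitude response of the two moving averages; the 1/30 s
burst length and 1 s tau0 come from the proposal above, the rest is
illustrative:

    import numpy as np

    f = np.logspace(-2, 3, 500)                # Fourier frequency, Hz

    def ma_response(f, T):
        """|H(f)| of a moving average T seconds long: |sinc(f*T)|."""
        return np.abs(np.sinc(f * T))          # np.sinc(x) = sin(pi*x)/(pi*x)

    h_burst = ma_response(f, 100.0 / 3000.0)   # fixed 1/30 s pre-filter
    for m in (1, 10, 100):
        h_mdev = ma_response(f, m * 1.0)       # MDEV inner average, tau = m s
        # h_mdev keeps narrowing as tau grows; h_burst never moves, so at
        # long tau everything in its pass-band reaches the estimator.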

To solve this, you will have to build a pre-filtering mechanism that allows itself to be extended to any multiple of tau as your processing continues. You can do this, but you need to learn to respect the rules of the game. You can do this on the Turbo-5370 too; it's actually a great platform for it. One way is to do a Lambda-counter collection, just as in the J.J. Snyder paper, which is to accumulate phase measures into blocks of m0 samples, each block being m0*tau0 long. These blocks can later be combined to create longer blocks of any multiple of m0*tau0, preserving the filtering behavior as the m1 multiple forms m1*m0*tau0. If you want to do the same for an Omega counter, you will also have to collect the sum of a weighted series. The correct extension of these blocks forms the correct MDEV/MVAR estimator, and the correct extension of blocks for an Omega-style response can form the correct PDEV/PVAR estimator.
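A sketch of that block bookkeeping, assuming per-block sums of raw phase
samples; M0 and the function names are mine, and the Omega extra sum is
only indicated in the comment:

    import numpy as np

    M0 = 1000   # samples per block, each block being m0*tau0 long

    def lambda_blocks(x, m0=M0):
        """Reduce raw phase samples to per-block sums (Lambda collection)."""
        x = np.asarray(x, dtype=float)
        n = (len(x) // m0) * m0
        return x[:n].reshape(-1, m0).sum(axis=1)

    def extend(blocks, m1):
        """Combine m1 adjacent blocks into blocks m1*m0*tau0 long.

        Dividing such a combined sum by m1*m0 gives the Lambda average
        over the longer span, so the filtering behavior is preserved."""
        blocks = np.asarray(blocks, dtype=float)
        n = (len(blocks) // m1) * m1
        return blocks[:n].reshape(-1, m1).sum(axis=1)

    # For an Omega (least-squares) response you would additionally keep
    # sum(i*x[i]) per block; it re-references to longer blocks, since
    # sum((i + k*m0)*x[i]) = sum(i*x[i]) + k*m0*sum(x[i]).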

So, rather than doing the burst processing that you propose, I propose that you create a series of, say, 1000 samples per block, evenly spread out over the tau0, and then combine them properly in post-processing to form the MDEV/MVAR estimator. This is fairly straightforward. Attempting the PDEV/PVAR will give you 3/4 of the white noise (that is, -1.25 dB) with the same slope, but the ways to build and combine the values for multiple taus have not yet been published in peer-reviewed form.
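A sketch of that post-processing, reusing lambda_blocks() from above;
xbar holds the per-tau0 block averages:

    import numpy as np

    def mvar_from_blocks(xbar, n, tau0=1.0):
        """Overlapping MVAR at tau = n*tau0 from per-block average phase."""
        xbar = np.asarray(xbar, dtype=float)
        xa = np.convolve(xbar, np.ones(n) / n, "valid")  # n-block averages
        d = xa[2*n:] - 2.0 * xa[n:-n] + xa[:-2*n]        # adjacent tau-averages
        return np.mean(d**2) / (2.0 * (n * tau0) ** 2)

    # e.g. xbar = lambda_blocks(x) / M0, then mvar_from_blocks(xbar, n) for
    # n = 1, 2, 4, ...: the averaging window keeps growing with tau, which
    # is exactly what the fixed-length burst average fails to do.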

Now, going back to the white-noise slope: it consists of two things, the actual white noise, mainly hitting the limiter stage, where it is converted into white phase modulation, and the counter resolution, and thus time-quantization, as played out by the phases of the two clocks being compared.

As you take N samples and average over a period of time, the distribution over time does not matter for the white noise part, since white phase noise is by definition uncorrelated with itself; for that filtering, bursting the samples or spreading them evenly makes no difference. However, the clock signals interacting with the counter resolution (avoiding all higher-order terms here) is a systematic noise, not a random noise, so the way the sample points are distributed does interact with it.

Essentially, our increased resolution through averaging comes from finding out how many cycles it takes for the counter phase to jump between two nearby states. There exist many Nyquist wrappings of this, but that's essentially what we do to beat this noise. It's similar to the delay line of the 5371/5372, or for that matter the beat note of the 5370. The way you distribute your sampling does matter: the burst approach will only provide this service for beat notes whose period fits within the burst, so for it to work at lower frequencies you want the sampling as long as possible, which means you want it evenly distributed over the block.
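A toy model of that distinction; every number here is invented for
illustration, and real data of course adds random noise on top of the
pure quantization:

    import numpy as np

    q = 20e-12   # pretend counter quantization, 20 ps
    r = 97e-12   # pretend phase drift between the clocks, 97 ps/s

    def reading(t):
        """Ideal counter: a true phase ramp r*t, quantized to q."""
        return q * np.round(r * t / q)

    epochs = np.arange(0.0, 2000.0)   # one measurement epoch per second
    burst = np.array([reading(e + np.arange(100) / 3000.0).mean()
                      for e in epochs])
    spread = np.array([reading(e + np.arange(100) / 100.0).mean()
                       for e in epochs])

    err_b = burst - r * (epochs + 49.5 / 3000.0)   # vs true mean phase
    err_s = spread - r * (epochs + 49.5 / 100.0)
    print(np.sqrt(np.mean(err_b**2)))  # ~q/sqrt(12): the burst barely moves
                                       # across a single quantization step
    print(np.sqrt(np.mean(err_s**2)))  # noticeably smaller: the spread
                                       # window dithers across ~5 steps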

This is a bitch to understand, as the ADEV/MDEV/PDEV stuff needs to be understood in both the time and frequency planes. Even very well-established scientists have come clean about messing it up. Many just look at slopes and think it is random noise, rather than separating random-noise and systematic-noise behaviors. Averaging methods can sometimes kill those two birds with one stone, but you need to understand how they do it in both cases to apply the stone properly. What I propose is a sane way of applying the stone, and we can even trace it back to peer-reviewed papers and feel confident about the results.

This topic is in fact so messed up that I think some of what I described has not been properly expressed in peer-reviewed papers. I do hope you learned something from this little comment.

Cheers,
Magnus
_______________________________________________
time-nuts mailing list -- [email protected]
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
