Hi Poul-Henning,

On 08/01/2015 10:32 PM, Poul-Henning Kamp wrote:
--------
In message <[email protected]>, Bob Camp writes:

The approach you are using is still a discrete time sampling
approach. As such it does not directly violate the data requirements
for ADEV or MADEV.  As long as the sample burst is much shorter
than the Tau you are after, this will be true. If the samples cover < 1%
of the Tau, it is very hard to demonstrate a noise spectrum that
this process messes up.

So this is where it gets interesting, because I suspect that your
1% "let's play it safe" threshold is overly pessimistic.

I agree that there are other error processes than white PM which
would get messed up by this and that general low-pass filtering
would be much more suspect.

But what bothers me is that as far as I can tell from real-life
measurements, as long as the dominant noise process is white PM,
even 99% Tau averaging gives me the right result.

I have tried to find a way to plug this into the MVAR definition
based on phase samples (Wikipedia's first formula under "Definition")
and as far as I can tell, it comes out the same in the end, provided
I assume only white PM noise.

I put that formula there, and I think Dave trimmed the text a little.
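For readers without the article at hand, the phase-sample estimator under "Definition" being discussed has, as best I recall it (the same form appears in NIST SP 1065), the shape

```latex
\operatorname{Mod}\,\sigma_y^2(\tau) =
\frac{1}{2\, m^2 \tau^2 \,(N - 3m + 1)}
\sum_{j=1}^{N-3m+1}
\left[ \sum_{i=j}^{j+m-1} \left( x_{i+2m} - 2x_{i+m} + x_i \right) \right]^2,
\qquad \tau = m\,\tau_0
```

where the x_i are phase samples taken tau_0 apart and the inner sum is the averaging over m samples that distinguishes MVAR from AVAR.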

For true white PM *random* noise you can move your phase samples around, but you gain nothing by bursting them. For any other form of random noise, and for systematic noise, you alter the total filtering behavior compared to AVAR or MVAR, and it is through altering the frequency response that biases in the values are born. MVAR itself has biases compared to AVAR for all noises, due to its filtering behavior.

The bursting that you propose is similar to the uneven spreading of samples you have in dead-time sampling, where the time between the start samples of successive frequency measures is T, but the time between the start and stop samples of each frequency measure is tau. This creates a different coloring of the spectrum than when the stop sample of one frequency measure is also the start sample of the next. That coloring in turn creates a bias that depends on the frequency spectrum of the noise (systematic or random), so you need to correct for it with the appropriate bias function. See the section on bias functions in the Allan deviation Wikipedia article, and do read Dave Allan's original February 1966 article.
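To make the dead-time point concrete, here is a small numerical sketch (my own illustration, not anything from the thread; the step sizes and the T = 4*tau spacing are arbitrary choices). For random-walk FM the frequency noise is strongly correlated in time, so frequency measures of duration tau spaced T > tau apart give a visibly different (biased) Allan variance estimate than back-to-back measures:

```python
import numpy as np

rng = np.random.default_rng(1)

tau_n = 50            # samples per frequency measure
T_n = 4 * tau_n       # spacing between measure *starts* (dead time = T - tau)
n_meas = 5000

# Random-walk FM: fractional frequency y is a random walk, so nearby
# frequency measures are strongly correlated and dead time matters.
y = np.cumsum(rng.normal(0.0, 1e-12, size=n_meas * T_n))

# Back-to-back measures: the stop sample of one measure is the start of the next.
yb = y[: (len(y) // tau_n) * tau_n].reshape(-1, tau_n).mean(axis=1)

# Dead-time measures: same duration tau, but starts spaced T apart.
starts = np.arange(0, len(y) - tau_n, T_n)
yg = np.array([y[s:s + tau_n].mean() for s in starts])

def avar_from_freq(ybar):
    """Allan variance estimate from successive frequency measures."""
    d = np.diff(ybar)
    return np.mean(d ** 2) / 2.0

print("back-to-back:", avar_from_freq(yb))
print("dead-time   :", avar_from_freq(yg))
print("ratio       :", avar_from_freq(yg) / avar_from_freq(yb))
```

The ratio comes out well above 1 (for this noise and T = 4*tau it sits around the 5-6 region), which is exactly the bias that the B2-type bias functions in the literature correct for; for white PM the same experiment gives a ratio near 1.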

To do what you propose, you will have to define the time properties of the burst, so you would need the time between bursts (tau), the time between burst samples (alpha), and the number of burst samples (O). You can then define a bias function through analysis; however, you can already sketch the behavior for the various noises. For white random phase noise there is no correlation between phase samples, which also makes the time between them uninteresting, so we can rearrange our sampling for that noise as we see fit. For the other noises you will create a coloring, and I predict that the number of averaged samples O will dominate the filtering effect, while the time between samples should not be important. For systematic noise such as quantization noise you will again interact, and with a filtering effect.
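The white PM case is easy to check numerically. A quick sketch (again my own, with made-up parameters; O, tau0 and sigma_x just label the burst size, sample spacing and per-sample RMS): average O white PM samples per phase estimate, once clustered in a tight burst and once spread over the whole interval, and compare the resulting ADEVs. Because the samples are uncorrelated, only O matters, not the spacing:

```python
import numpy as np

rng = np.random.default_rng(42)

tau0 = 1.0          # spacing between averaged phase estimates [s]
n_meas = 20000      # number of averaged phase estimates
oversample = 100    # fine-grained samples available per tau0
O = 8               # samples averaged per estimate
sigma_x = 1e-9      # white PM per-sample RMS [s]

# Fine-grained white PM phase: every sample independent.
x_fine = rng.normal(0.0, sigma_x, size=(n_meas, oversample))

# (a) burst: average O *adjacent* samples at the start of each interval (~8% of tau0)
x_burst = x_fine[:, :O].mean(axis=1)

# (b) spread: average O samples spread evenly across the whole interval
idx = np.linspace(0, oversample - 1, O).astype(int)
x_spread = x_fine[:, idx].mean(axis=1)

def adev(x, m, tau0):
    """Overlapping Allan deviation at tau = m*tau0 from phase samples x."""
    d = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    return np.sqrt(np.mean(d ** 2) / (2.0 * (m * tau0) ** 2))

for m in (1, 4, 16):
    a, b = adev(x_burst, m, tau0), adev(x_spread, m, tau0)
    print(f"m={m:2d}  burst={a:.3e}  spread={b:.3e}  ratio={a / b:.3f}")
```

Both come out at the expected sqrt(3)*sigma_x/(sqrt(O)*tau) to within statistical noise, so for pure white PM the burst timing indeed drops out and only the averaging count O is left as a filtering effect.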

At times the filtering effect is useful (see MVAR and PVAR), but for many noises it is just an uninteresting side effect.

But I have not found any references to this "optimization" anywhere
and either I'm doing something wrong, or I'm doing something else
wrong.

I'd like to know which it is :-)


You're doing it wrong. :)

PS. I'm at a music festival, so my quality references are at home.

Cheers.
Magnus
_______________________________________________
time-nuts mailing list -- [email protected]
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
