Re: [time-nuts] ADEV drop

2012-08-12 Thread Magnus Danielson

On 08/12/2012 05:02 AM, WarrenS wrote:


The basic problem is that one cannot meet Allan's requirement
of the integral of the instantaneous frequencies over tau0 time
and the Nyquist-Shannon sampling theorem requirement if taking just
one raw phase sample per displayed ADEV tau0.
The two requirements are then mutually exclusive.
This is especially true when that sample comes from a
DMTD zero-crossing detector.

The way I get ADEV tau answers that do not droop at all near tau0,
and that are independent of the displayed tau0, the oversample rate
and the NEQ.BW filter (if BW > 2*tau0), without having to throw
away the low-tau answers or save data files with more than tau0
number of samples, is by oversampling the raw data and then
reducing it in an appropriate way before saving it as tau0.
With high-speed oversampling it is also very simple to avoid
any aliasing problems.

Using an external DC-coupled sound card, oversampling at 48 kHz
for any tau0, both the TPLL2.0 and the XOR-LPD give non-drooping
tau0 answers that are not a function of the oversample rate or the tau0
reduction rate.
That is, I get the same ADEV tau = 1 s answer whether the tau0 is
1 kHz, 1 Hz or anywhere in between.


The problem with counters is that they often sample too infrequently.
Audio-rate sampling and proper conversion of the DMTD beat note will,
just like your TPLL2.0, provide a sufficient sample rate. If you were
trying to make ADEV plots all the way down to your sample rate, you
would get droop too, but since you don't, you don't experience it. That
is because you keep a sufficient number of samples at the first
displayed tau.


The same behaviour has been seen in the TimePod, for instance.

So, it just illustrates how you need to handle your data, not that the 
method itself is better.
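The droop mechanism described here can be reproduced with a quick simulation. This is my own sketch, not from the thread: an 8-tap moving average stands in for the system bandwidth filter, the noise is plain white FM, and tau0 is the raw sample interval:

```python
import numpy as np

def oadev(y, m):
    """Overlapping Allan deviation at tau = m*tau0 from
    fractional-frequency samples y taken every tau0."""
    c = np.cumsum(np.insert(y, 0, 0.0))
    avg = (c[m:] - c[:-m]) / m        # m-sample frequency averages
    d = avg[m:] - avg[:-m]            # adjacent, tau-spaced differences
    return np.sqrt(0.5 * np.mean(d ** 2))

rng = np.random.default_rng(0)
y = rng.standard_normal(200_000)      # white FM, one sample per tau0

# "System bandwidth": an 8-tap moving average stands in for the
# measurement hardware's low-pass response (an assumption for this demo).
L = 8
y_bw = np.convolve(y, np.ones(L) / L, mode='valid')

for m in (1, 10, 100):
    print(f"tau = {m:3d}*tau0  raw: {oadev(y, m):.4f}  "
          f"band-limited: {oadev(y_bw, m):.4f}")
```

At tau = tau0 the band-limited data reads far too low, around tau = 10*tau0 it is still noticeably low, and by tau = 100*tau0 the two curves agree, which is exactly why keeping the first displayed tau well above tau0 hides the droop.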


Cheers,
Magnus

___
time-nuts mailing list -- time-nuts@febo.com
To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.


[time-nuts] ADEV drop

2012-08-11 Thread Magnus Danielson

Fellow time-nuts,

As David insisted that I get and read the ITU Handbook Selection
and Use of Precise Frequency and Time Systems (1997), and in particular
Chapter 3, I took the time to get it and start reading it. In there I
found clause 3.3.2.4.4, Truncation effects, which addresses this issue
and also aligns with my own writing on Allan Deviation and the
Measurement bandwidth limit (I will have to update that one).


The key point is that the main lobe of the kernel function (the
sin(pi*tau*f)^4/(pi*tau*f)^2 shape) will be affected by the system
bandwidth, and the values will not match the brick-wall analysis of the
traditional system. The result is that the ADEV measure will be lower
than it should be. This situation was analysed by Bernier in 1987 as
part of analysing the modified Allan deviation, which has a software
bandwidth filter in the form of the n*tau_0 averaging filter.
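That kernel argument can be checked numerically. The sketch below is my own illustration, not Bernier's analysis: for white FM noise, S_y(f) = h0, it integrates the ADEV kernel 2*sin(pi*tau*f)^4/(pi*tau*f)^2 up to a brick-wall cutoff f_h. With f_h*tau >> 1 it converges to the textbook h0/(2*tau); a cutoff near 1/tau gives a visibly lower, drooped value:

```python
import numpy as np

def avar_white_fm(tau, f_h, h0=1.0, nf=200_000):
    """Allan variance of white FM noise (S_y(f) = h0) under a
    brick-wall cutoff f_h, integrating the ADEV kernel
    2*sin(pi*tau*f)^4 / (pi*tau*f)^2 numerically."""
    f = np.linspace(f_h / nf, f_h, nf)      # skip f = 0 (kernel -> 0 there)
    x = np.pi * tau * f
    kernel = 2.0 * np.sin(x) ** 4 / x ** 2
    return np.sum(h0 * kernel) * (f[1] - f[0])

tau = 1.0
wide = avar_white_fm(tau, f_h=100.0)   # f_h*tau >> 1: close to h0/(2*tau)
narrow = avar_white_fm(tau, f_h=1.0)   # f_h*tau = 1: systematically lower
print(wide, narrow)
```

With f_h = 100/tau the integral lands within a percent of h0/(2*tau) = 0.5, while f_h = 1/tau comes out roughly 15% low: the systematic droop the clause describes.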


So, the first few low-n values are even expected to give systematically
low values, which is the reason for the ITU-T to put minimum
requirements on the ratio of tau_0 to the lowest tau, to ensure that
repeatability is achieved.


This is also the same effect that Sam Steiner mentioned in his
presentation during this year's NIST seminars. Sam also went on to
discuss the effect of aliasing, which brings even more false values
into that region.


Conclusion: just don't look all that hard at the lower tau values, as
they can be systematically off. Make sure that you have a tau_0 well
below the taus you are interested in, to ensure that your values are
reasonably valid.
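In practice this just means filtering which taus you display. A sketch of that habit follows; the factor-of-ten guard band is my own rule of thumb, not a number from the ITU text:

```python
import numpy as np

def oadev(y, m):
    """Overlapping Allan deviation at tau = m*tau0 from
    fractional-frequency samples y taken every tau0 seconds."""
    c = np.cumsum(np.insert(y, 0, 0.0))
    avg = (c[m:] - c[:-m]) / m
    d = avg[m:] - avg[:-m]
    return np.sqrt(0.5 * np.mean(d ** 2))

rng = np.random.default_rng(1)
tau0 = 0.1                              # basic measurement interval, s
y = rng.standard_normal(200_000)        # white-FM test data

MARGIN = 10                             # guard band: only trust tau >= 10*tau0
for m in (1, 2, 5, 10, 20, 50, 100):
    flag = "" if m >= MARGIN else "   <- inside guard band, may droop"
    print(f"tau = {m * tau0:5.1f} s   adev = {oadev(y, m):.4f}{flag}")
```

On this synthetic white-FM data all the points are fine (the deviation simply falls as 1/sqrt(m)); on real band-limited measurements the flagged points are the ones that can come out systematically low.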


Cheers,
Magnus



Re: [time-nuts] ADEV drop

2012-08-11 Thread WarrenS


The basic problem is that one cannot meet Allan's requirement
of the integral of the instantaneous frequencies over tau0 time
and the Nyquist-Shannon sampling theorem requirement if taking just
one raw phase sample per displayed ADEV tau0.
The two requirements are then mutually exclusive.
This is especially true when that sample comes from a
DMTD zero-crossing detector.

The way I get ADEV tau answers that do not droop at all near tau0,
and that are independent of the displayed tau0, the oversample rate
and the NEQ.BW filter (if BW > 2*tau0), without having to throw
away the low-tau answers or save data files with more than tau0
number of samples, is by oversampling the raw data and then
reducing it in an appropriate way before saving it as tau0.
With high-speed oversampling it is also very simple to avoid
any aliasing problems.

Using an external DC-coupled sound card, oversampling at 48 kHz
for any tau0, both the TPLL2.0 and the XOR-LPD give non-drooping
tau0 answers that are not a function of the oversample rate or the tau0
reduction rate.
That is, I get the same ADEV tau = 1 s answer whether the tau0 is
1 kHz, 1 Hz or anywhere in between.
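The reduce-before-saving step can be sketched as follows (my own illustration; the rates are placeholders for the 48 kHz setup). Averaging every block of oversamples honours Allan's integral of frequency over tau0; keeping one raw sample per tau0 does not, and the difference shows up immediately in the noise level:

```python
import numpy as np

rng = np.random.default_rng(42)
fs = 1000                 # oversample rate, samples/s (placeholder for 48 kHz)
tau0 = 1.0                # interval actually saved to file, s
n = int(fs * tau0)        # oversamples per tau0

y = rng.standard_normal(1_000_000)   # white-FM fractional frequency at fs

# Proper reduction: average each tau0 block (the integral of frequency
# over tau0 that the Allan definition asks for).
y_avg = y.reshape(-1, n).mean(axis=1)

# Naive reduction: keep one raw sample per tau0.
y_one = y[::n]

def adev_tau0(yk):
    """Non-overlapping Allan deviation at tau = tau0."""
    d = yk[1:] - yk[:-1]
    return np.sqrt(0.5 * np.mean(d ** 2))

print(adev_tau0(y_avg))   # small: oversampling noise averaged down
print(adev_tau0(y_one))   # large: full per-sample noise survives
```

Note the demo decimates fractional-frequency samples for simplicity; the thread states the problem for raw phase samples from the zero-crossing detector, but the integrate-versus-point-sample contrast is the same.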


ws



Fellow time-nuts,

As David insisted that I get and read the ITU Handbook Selection
and Use of Precise Frequency and Time Systems (1997), and in particular
Chapter 3, I took the time to get it and start reading it. In there I
found clause 3.3.2.4.4, Truncation effects, which addresses this issue
and also aligns with my own writing on Allan Deviation and the
Measurement bandwidth limit (I will have to update that one).

The key point is that the main lobe of the kernel function (the
sin(pi*tau*f)^4/(pi*tau*f)^2 shape) will be affected by the system
bandwidth, and the values will not match the brick-wall analysis of the
traditional system. The result is that the ADEV measure will be lower
than it should be. This situation was analysed by Bernier in 1987 as
part of analysing the modified Allan deviation, which has a software
bandwidth filter in the form of the n*tau_0 averaging filter.

So, the first few low-n values are even expected to give systematically
low values, which is the reason for the ITU-T to put minimum
requirements on the ratio of tau_0 to the lowest tau, to ensure that
repeatability is achieved.

This is also the same effect that Sam Steiner mentioned in his
presentation during this year's NIST seminars. Sam also went on to
discuss the effect of aliasing, which brings even more false values
into that region.

Conclusion: just don't look all that hard at the lower tau values, as
they can be systematically off. Make sure that you have a tau_0 well
below the taus you are interested in, to ensure that your values are
reasonably valid.

Cheers,
Magnus

