Bruce,
Straight arrow. That's why I have been hollering about the Allan
intercept. The characteristics of the heavy-tail distribution in actual
NTP measurements are in my book. The bottom line is that in most
real-world NTP configurations for WANs the real enemy is not phase
noise, but frequency noise with H parameters considerably greater than 0.5.
Dave
Bruce wrote:
> My understanding is that least squares is optimal only when the
> residuals are white. For measurements of atomic frequency standards
> such as Rubidium or Cesium, the noise process is dominated by white
> frequency noise, and in this case linear regression yields the optimal
> estimate of frequency. A little snooping around on the NIST web site
> will provide the relevant backup info.
>
> For so many other precision timing and frequency applications, the noise
> processes are decidedly un-white. David Allan developed his famous
> two-sample variance to handle these divergent, non-stationary noise
> processes. For instance, quartz oscillators are dominated by flicker
> frequency noise for averaging times greater than about 100 milliseconds,
> and eventually turn to random walk at longer averaging times. Selective
> Availability of GPS (when it was in effect) was a white phase noise
> process that modulated the time transfer for un-keyed users. The
> statistics of network time transfer via ntp are undoubtedly divergent,
> but I have not seen any data that showed it to be white frequency noise
> dominant.
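>
> The two-sample statistic is simple to compute from fractional-frequency
> data (a minimal Python sketch, assuming evenly spaced samples; the
> averaging time is m times the sample interval):
>
>   import numpy as np
>
>   def allan_variance(y, m=1):
>       """Two-sample (Allan) variance of fractional-frequency data y,
>       averaged over non-overlapping blocks of m samples."""
>       k = len(y) // m
>       yb = y[:k * m].reshape(k, m).mean(axis=1)  # block averages
>       return 0.5 * np.mean(np.diff(yb) ** 2)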
>
> So, it is not clear that linear regression is optimal for estimating the
> frequency via ntp, unless someone has determined the statistics to be
> white frequency. I personally have not performed the measurements to
> make that determination, but it would not surprise me if Judah Levine has.
>
> Bruce
>
> Unruh wrote:
>
>> "David L. Mills" <[EMAIL PROTECTED]> writes:
>>
>>> Bill,
>>
>>
>>> Read it again. Judah takes multiple samples to reduce the phase
>>> noise, not to improve the frequency estimation.
>>
>>
>> Dave: The frequency estimate is done by subtracting two phase
>> determinations. Thus the phase noise enters the frequency determination.
>> By reducing the phase noise you reduce the frequency noise as well. I
>> think you need to read it again, but each of us just telling the other
>> to read properly will not help.
>> The frequency estimate is obtained in NTP and in his procedure by making
>> phase measurements: f_i = (y_i - y_{i-1})/T.
>> If y_i = z_i + e_i, where z_i is the "true" time and e_i is a Gaussian
>> random variable, then delta f_i = sqrt(<e_i^2> + <e_{i-1}^2>)/T.
>> By reducing <e_i^2> you reduce delta f_i. And as you point out, you can
>> reduce <e_i^2> by making a bunch of measurements. Those measurements can
>> be all done at the end points or spread over the time interval T. The
>> latter is not quite as effective in reducing delta f_i, since many of
>> the measurements do not have as long a "lever arm" as if they were all
>> at the endpoints, and that is why uniform sampling is about sqrt(3)
>> worse than clustering at the end points. But in either case, the more
>> measurements you make, the more you reduce the uncertainty in the
>> frequency estimate.
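>>
>> A quick numerical check of that sqrt(3) factor (a minimal numpy sketch;
>> the sample count, interval, and noise level are invented for
>> illustration):
>>
>>   import numpy as np
>>
>>   rng = np.random.default_rng(0)
>>   n, T, sigma, trials = 64, 1024.0, 1.0, 20000
>>   t = np.linspace(0.0, T, n)            # uniform sampling times
>>   slope_u, slope_e = [], []
>>   for _ in range(trials):
>>       e = rng.normal(0.0, sigma, n)     # white phase noise, true slope 0
>>       slope_u.append(np.polyfit(t, e, 1)[0])   # least-squares slope
>>       slope_e.append((e[n//2:].mean() - e[:n//2].mean()) / T)  # endpoints
>>   print(np.std(slope_u) / np.std(slope_e))     # ~ sqrt(3) = 1.73
>>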
>> Anyway, at this point everyone else has enough information to make up
>> their
>> own mind.
>>
>>
>>> Dave
>>
>>
>>> Unruh wrote:
>>
>>
>>>> You must have read a different paper than that one. I found it
>>>> (through our library) and it says that if you have n measurements in
>>>> a time period T, the best strategy is to take n/2 measurements at the
>>>> beginning of the time and n/2 at the end to minimize the effect of the
>>>> white noise phase error on the frequency estimate. That is perfectly
>>>> true, and gives an error which goes as sqrt(4/n) (delta/T) rather than
>>>> sqrt(12/n) (delta/T) for equally spaced measurements (assuming large
>>>> n), where T is the total time interval and delta is the std dev of
>>>> each phase measurement. But it certainly does NOT say that if you have
>>>> n measurements, just use the first and last one to estimate the slope.
>>>> If you have n measurements, the best estimate of the slope is to do a
>>>> least squares fit. If they are equally spaced, the center third do not
>>>> help much (nor do they hinder), but a least squares fit is always the
>>>> best thing to do.
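>>>>
>>>> For reference, both factors follow from standard error propagation,
>>>> assuming independent phase errors of std dev delta: the least-squares
>>>> slope has variance delta^2 / sum_i (t_i - tbar)^2, and for n equally
>>>> spaced points over T, sum_i (t_i - tbar)^2 ~ n T^2/12, giving an error
>>>> of about sqrt(12/n) (delta/T). Clustering n/2 points at each end gives
>>>> two averages, each with variance 2 delta^2/n, separated by T, so the
>>>> error is sqrt(2 * 2 delta^2/n)/T = sqrt(4/n) (delta/T).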
>>>>
>>>> "David L. Mills" <[EMAIL PROTECTED]> writes:
>>>>
>>>>
>>>>> Bill,
>>>>
>>>>
>>>>> NIST doesn't agree with you. Only the first and last are truly
>>>>> significant. Reference: Levine, J. Time synchronization over the
>>>>> Internet using an adaptive frequency locked loop. IEEE Trans. UFFC,
>>>>> 46(4), 888-896, 1999.
>>>>
>>>>
>>>>> Dave
>>>>
>>>>
>>>>> Unruh wrote:
>>>>
>>>>
>>>>>> "David L. Mills" <[EMAIL PROTECTED]> writes:
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Bill,
>>>>>>
>>>>>>
>>>>>>> Ahem. The first point I made was that least-squares doesn't help
>>>>>>> the frequency estimate. The next point you made is that
>>>>>>> least-squares improves the phase estimate. The last point you
>>>>>>> made is that phase noise
>>>>>>
>>>>>>
>>>>>> No. The point I tried to make was that least squares improves the
>>>>>> FREQUENCY estimate by sqrt(n/6) for large n, where n is the number
>>>>>> of points (assumed equally spaced) at which the phase is measured.
>>>>>> I am sorry that the way I phrased it could have been misunderstood.
>>>>>>
>>>>>>
>>>>>> The phase is ALSO improved, proportional to sqrt(n). This assumes
>>>>>> uncorrelated phase errors dominate the error budget.
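>>>>>>
>>>>>> Both factors are easy to check numerically (a minimal numpy sketch;
>>>>>> n, T, and the unit noise level are arbitrary illustration values):
>>>>>>
>>>>>>   import numpy as np
>>>>>>
>>>>>>   rng = np.random.default_rng(1)
>>>>>>   n, T, trials = 64, 1024.0, 20000
>>>>>>   t = np.linspace(0.0, T, n)
>>>>>>   two_pt, ls_slope, ls_mid = [], [], []
>>>>>>   for _ in range(trials):
>>>>>>       e = rng.normal(size=n)              # unit white phase noise
>>>>>>       two_pt.append((e[-1] - e[0]) / T)   # first-and-last estimate
>>>>>>       slope, icept = np.polyfit(t, e, 1)
>>>>>>       ls_slope.append(slope)
>>>>>>       ls_mid.append(icept + slope * t.mean())  # phase at midpoint
>>>>>>   print(np.std(two_pt) / np.std(ls_slope), np.sqrt(n / 6))  # freq gain
>>>>>>   print(1.0 / np.std(ls_mid), np.sqrt(n))                   # phase gain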
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> is not important. Our points have been made and further
>>>>>>> discussion would be boring.
>>>>>>
>>>>>>
>>>>>> Except you misunderstood my point. It may still be boring to you.
>>>>>>
>>>>>>
>>>>>>
>>>>>>> Dave
>>>>>>
>>>>>>
>>>>>>> Unruh wrote:
>>>>>>>
>>>>>>>
>>>>>>>> "David L. Mills" <[EMAIL PROTECTED]> writes:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> Bill,
>>>>>>>>
>>>>>>>>
>>>>>>>>> If you need only the frequency, least-squares doesn't help a
>>>>>>>>> lot; all you need are the first and last points during the
>>>>>>>>> measurement interval.
>>>>>>>>
>>>>>>>>
>>>>>>>> Well, no. If you have random phase noise, a least squares fit will
>>>>>>>> improve the above estimate by roughly sqrt(n/4), where n is the
>>>>>>>> number of points. That can be significant. It is certainly true
>>>>>>>> that the end points have the most weight (which is why the factor
>>>>>>>> of 1/4). I.e., if you have 64 points, you are better by about a
>>>>>>>> factor of 4, which is not insignificant.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> The NIST LOCKCLOCK and ntpd FLL disciplines compute the
>>>>>>>>> frequency directly and exponentially average successive
>>>>>>>>> intervals. The NTP discipline is in fact a hybrid PLL/FLL where
>>>>>>>>> the PLL dominates below the Allan intercept and FLL above it
>>>>>>>>> and also when started without a frequency file. The trick is to
>>>>>>>>> separate the phase component from the frequency component,
>>>>>>>>> which requires some delicate computations. This allows the
>>>>>>>>> frequency to be accurately computed as above, yet allows a
>>>>>>>>> phase correction during the measurement interval.
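>>>>>>>>>
>>>>>>>>> As a rough illustration of the FLL side of that (a toy sketch,
>>>>>>>>> not ntpd's actual code; the averaging constant and phase gain
>>>>>>>>> are invented):
>>>>>>>>>
>>>>>>>>>   # Estimate frequency directly from successive offsets,
>>>>>>>>>   # exponentially average it, and apply a separate small phase
>>>>>>>>>   # correction that does not feed back into the frequency.
>>>>>>>>>   class ToyFLL:
>>>>>>>>>       def __init__(self, avg=4.0):
>>>>>>>>>           self.freq = 0.0   # frequency estimate (s/s)
>>>>>>>>>           self.avg = avg    # exponential averaging constant
>>>>>>>>>           self.prev = None  # (time, offset) of last update
>>>>>>>>>
>>>>>>>>>       def update(self, t, offset):
>>>>>>>>>           if self.prev is not None:
>>>>>>>>>               t0, off0 = self.prev
>>>>>>>>>               f = (offset - off0) / (t - t0)  # direct estimate
>>>>>>>>>               self.freq += (f - self.freq) / self.avg
>>>>>>>>>           self.prev = (t, offset)
>>>>>>>>>           return self.freq, offset / 16.0  # freq, phase step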
>>>>>>>>
>>>>>>>>
>>>>>>>> He of course is not interested in phase corrections.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> Dave
>>>>>>>>
>>>>>>>>
>>>>>>>>> Unruh wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> David Woolley <[EMAIL PROTECTED]> writes:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> Unruh wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>> I do not understand this. You seem to be measuring the
>>>>>>>>>>>> offsets, not the frequencies. The offset is irrelevant. What
>>>>>>>>>>>> you want to do is to measure
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> Measuring phase error to control frequency is pretty much THE
>>>>>>>>>>> standard way of doing it in modern electronics. It's called
>>>>>>>>>>> a phase locked loop
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Sure. In the case of ntp you want to have zero phase error.
>>>>>>>>>> ntp reduces the phase error slowly by changing the frequency.
>>>>>>>>>> This has the advantage that the frequency error also gets
>>>>>>>>>> reduced (slowly). He wants to reduce the frequency error only.
>>>>>>>>>> He does not give a damn about the phase error apparently. Thus
>>>>>>>>>> you do NOT want to reduce the frequency error by attacking the
>>>>>>>>>> phase error. That is a slow way of doing it. You want to
>>>>>>>>>> estimate the frequency error directly. Now in his case he is
>>>>>>>>>> doing so by measuring the phase, so you need at least two phase
>>>>>>>>>> measurements to estimate the frequency error. But you do NOT
>>>>>>>>>> want to reduce the frequency error by reducing the phase
>>>>>>>>>> error -- far too slow.
>>>>>>>>>> One way of reducing the frequency error is to use the ntp
>>>>>>>>>> procedure but applied to the frequency. But you must feed in an
>>>>>>>>>> estimate of the frequency error. Another way is the chrony
>>>>>>>>>> technique: collect phase points, do a least squares fit to find
>>>>>>>>>> the frequency, and then use that information to drive the
>>>>>>>>>> frequency error to zero. To reuse past data, also correct the
>>>>>>>>>> prior phase measurements by the change in frequency:
>>>>>>>>>> t_{i-j} -= (t_i - t_{i-j}) * df
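>>>>>>>>>>
>>>>>>>>>> In code, that looks roughly like this (a sketch of the idea, not
>>>>>>>>>> chrony's actual implementation; the sample data are invented):
>>>>>>>>>>
>>>>>>>>>>   import numpy as np
>>>>>>>>>>
>>>>>>>>>>   # Accumulated (time, offset) phase points, invented values.
>>>>>>>>>>   times = np.array([0.0, 16.0, 32.0, 48.0, 64.0])
>>>>>>>>>>   offsets = np.array([0.0, 0.8e-3, 1.7e-3, 2.3e-3, 3.3e-3])
>>>>>>>>>>
>>>>>>>>>>   # Least-squares frequency error (s/s); slew the clock by -df.
>>>>>>>>>>   df = np.polyfit(times, offsets, 1)[0]
>>>>>>>>>>
>>>>>>>>>>   # Correct stored points for the frequency change so that past
>>>>>>>>>>   # data stay usable: t_{i-j} -= (t_i - t_{i-j}) * df
>>>>>>>>>>   offsets -= (times[-1] - times) * df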
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> (PLL) and it is getting difficult to find any piece of
>>>>>>>>>>> electronics that doesn't include one these days. E.g. the
>>>>>>>>>>> typical digitally tuned radio
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> A PLL is a dirt simple thing to implement electronically: a few
>>>>>>>>>> resistors and capacitors. It however is a very simple Markovian
>>>>>>>>>> process. There is far more information in the data than that,
>>>>>>>>>> and digitally it is easy to implement far more complex feedback
>>>>>>>>>> loops than that.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> or TV has a crystal oscillator, which is divided down to the
>>>>>>>>>>> channel spacing or a sub-multiple, and a configurable divider
>>>>>>>>>>> on the local oscillator divides that down to the same
>>>>>>>>>>> frequency. The resulting two signals are then phase locked,
>>>>>>>>>>> by measuring the phase error on each cycle, low pass
>>>>>>>>>>> filtering it, and using it to control the local oscillator
>>>>>>>>>>> frequency, resulting in their matching in frequency, and
>>>>>>>>>>> having some constant phase error.
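>>>>>>>>>>>
>>>>>>>>>>> In software, a toy version of that loop might look like the
>>>>>>>>>>> following (illustrative only; frequencies and gains invented):
>>>>>>>>>>>
>>>>>>>>>>>   import math
>>>>>>>>>>>
>>>>>>>>>>>   f_ref, f_center = 100.0, 100.5  # reference and free LO (Hz)
>>>>>>>>>>>   f_lo = f_center
>>>>>>>>>>>   phase_ref = phase_lo = lpf = 0.0
>>>>>>>>>>>   dt, gain, alpha = 1e-3, 5.0, 0.05
>>>>>>>>>>>
>>>>>>>>>>>   for _ in range(5000):
>>>>>>>>>>>       phase_ref += 2 * math.pi * f_ref * dt
>>>>>>>>>>>       phase_lo += 2 * math.pi * f_lo * dt
>>>>>>>>>>>       err = math.sin(phase_ref - phase_lo)  # phase detector
>>>>>>>>>>>       lpf += alpha * (err - lpf)            # low-pass filter
>>>>>>>>>>>       f_lo = f_center + gain * lpf          # steer the LO
>>>>>>>>>>>   print(f_lo)  # locks to f_ref with a constant phase error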
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>> the offset twice, and ask if the difference is constant or
>>>>>>>>>>>> not. I.e., the offset does not correspond to being off by 5 Hz.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> ntpd only uses this method on a cold start, to get the
>>>>>>>>>>> initial coarse calibration. Typical electronic
>>>>>>>>>>> implementations don't use it at all, but either do a
>>>>>>>>>>> frequency sweep or simply open up the low pass filter, to get
>>>>>>>>>>> initial lock.
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> And? You are claiming that that is efficient or easy? I would
>>>>>>>>>> claim the latter. And his requirements are NOT ntp's
>>>>>>>>>> requirements. He does not care about the phase errors. He is
>>>>>>>>>> only concerned about the frequency errors. Driving the frequency
>>>>>>>>>> errors to zero by driving the phase errors to zero is not a very
>>>>>>>>>> efficient technique -- unless of course you want the phase
>>>>>>>>>> errors to be zero (as ntp does, and he does not).
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>