Unruh wrote:
> "David L. Mills" <[EMAIL PROTECTED]> writes:

>> You might not have noticed a couple of crucial issues in the clock 
>> filter code.
> 
> I did notice them all. Thus my caveat. However throwing away 80% of the
> precious data you have seems excessive.

But it isn't, really. It would be if there were no correlation between
the delay and the error, but there is one. If the sampling errors were
completely random, then you would want to use all of the samples to
determine the correct offset, by averaging or some such method. But
since the error in a sample is correlated with the size of its delay,
including samples with greater delay, and thus greater error, increases
the error of the final result rather than reducing it. And since the
clocks involved also slew between samples, we want to use the newest
sample with the smallest delay.
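A minimal sketch of that selection rule, in Python. This is not the
actual ntpd clock filter (which keeps a shift register of the last
eight samples and feeds the winner into further mitigation); the
`Sample` type and `best_sample` function here are invented for
illustration. The idea shown is just the one above: since offset error
grows with round-trip delay, keep the lowest-delay sample, breaking
ties in favor of the newest one.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    offset: float  # measured clock offset, seconds
    delay: float   # round-trip delay, seconds
    epoch: int     # when the sample was taken (larger = newer)

def best_sample(samples):
    """Pick the minimum-delay sample; among equal delays, the newest."""
    # Sorting key: smaller delay wins first; -epoch prefers newer samples.
    return min(samples, key=lambda s: (s.delay, -s.epoch))

samples = [
    Sample(offset=0.012, delay=0.080, epoch=1),
    Sample(offset=0.003, delay=0.020, epoch=2),  # lowest delay: chosen
    Sample(offset=0.015, delay=0.090, epoch=3),  # newest, but high delay
]
print(best_sample(samples).offset)  # prints 0.003
```

Note that the newest sample (epoch 3) loses to the older, low-delay
one: recency only breaks ties, it does not outweigh delay.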

Brian Utterback

_______________________________________________
questions mailing list
[email protected]
https://lists.ntp.org/mailman/listinfo/questions