In article <[EMAIL PROTECTED]>, I wrote:
> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:
> It is not the instantaneous error, but rather a statistical error bound
> to some level of confidence; on most occasions,

So far so good.

> UTC will be somewhere between
> local clock time - estimated error and local clock time + estimated error,

After checking the code: maximum error is based on UTC time, and UTC should
be within those bounds, subject to caveats regarding reference clocks with
systematic errors, lost interrupts, and application programs receiving a
worse precision than ntpd does (e.g. Windows). It's probably the only safe
metric to give to a manager with a financial accountancy background, but it
is of rather limited value to an engineer.

Estimated error is weaker than I thought. It doesn't include either the
current offset or any systematic errors at all. To a first approximation,
it is a rolling average of the variability in the delay measurement, both
relative to a single peer and across all peers. It's a bit like a standard
deviation, but computed relative to the current candidate measurement, not
to the mean.

It is related to repeatability rather than absolute error, and it relates
to the measurements rather than to the actual time. (NTP will increase
polling intervals to try to balance the uncertainty in the time against
the uncertainty in the measurements, so it will tend towards a point where
this also becomes a measure of the repeatability of the local clock time,
assuming all upstream servers are well behaved.) With faster polling,
there may be times when the local clock is repeatable to rather better
than the estimated error (but the frequency stability will be less than
theoretically achievable).

(An interesting point here is that it seems to me that NTP is actually
striving for the best frequency stability, rather than the best time
stability.)
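To make the "like a standard deviation, but not about the mean" point
concrete, here is a rough Python sketch in the shape of the NTPv4 jitter
statistic (an RMS of differences from the current sample); the function
names and sample offsets are mine, invented for illustration, not taken
from the ntpd sources:

```python
import math

def peer_jitter(offsets):
    """RMS of the differences between the current (first) sample and
    each older sample -- the NTPv4-style jitter statistic."""
    first = offsets[0]
    n = len(offsets)
    return math.sqrt(sum((first - x) ** 2 for x in offsets[1:]) / (n - 1))

def sample_stdev(offsets):
    """Ordinary sample standard deviation, relative to the mean."""
    n = len(offsets)
    mean = sum(offsets) / n
    return math.sqrt(sum((x - mean) ** 2 for x in offsets) / (n - 1))

# Hypothetical clock-filter offsets, newest first, in seconds.
samples = [0.004, 0.001, -0.002, 0.003]
print(peer_jitter(samples))
print(sample_stdev(samples))
```

When the newest sample sits away from the centre of the older ones, the
jitter figure comes out larger than the standard deviation, which is why
it tracks repeatability of the current measurement rather than spread
about an average.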
If the mean offset is significantly smaller than the estimated error,
estimated error ought to be a good measure of the error bound due to all
but systematic offsets; but if the control loop has been disturbed
anywhere in the chain (I think that estimated error soft-starts to allow
for initial transients), the error may be significantly greater. ntpd
can't tell whether the disturbance is in its local clock or in its
sources, so it doesn't know whether to adjust or to wait it out (my gut
feeling, however, is that there ought to be heuristics that do better at
this than the current code).

If you really want an accurate measure of error, you need to provide
alternative hardware that is known to be more accurate. But in that case,
the only reason for not using that hardware to drive NTP is that the
errors introduced by the computer itself dominate the error. (By
modifying the OS kernel to output its idea of the time directly to
external hardware, it would, in principle, be possible to measure the OS
error; but in that case, it might be better to reverse the data flow and
have the applications read that hardware directly.)

_______________________________________________
questions mailing list
[email protected]
https://lists.ntp.isc.org/mailman/listinfo/questions
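Following up on the two bounds discussed above, a minimal Python sketch of
the distinction (the function names and sample figures are mine; this is
an approximation of the behaviour described, not code from ntpd): the
maximum error corresponds to the synchronization distance accumulated
along the server chain, while the engineer's rule of thumb of offset plus
estimated error only holds once the control loop has settled and
systematic offsets are small.

```python
def max_error_bound(root_delay, root_dispersion):
    """Synchronization-distance style bound: half the round-trip delay
    to the reference clock plus the accumulated dispersion. UTC should
    lie within this bound, caveats aside."""
    return root_delay / 2.0 + root_dispersion

def settled_loop_bound(offset, estimated_error):
    """Rule-of-thumb bound for a settled loop with small systematic
    offsets: current offset magnitude plus the estimated error."""
    return abs(offset) + estimated_error

# Hypothetical figures, in seconds: 30 ms root delay, 5 ms dispersion.
print(max_error_bound(0.030, 0.005))
print(settled_loop_bound(-0.002, 0.004))
```

The first bound is the conservative one you could quote to the manager;
the second is tighter but silently wrong whenever the loop has been
disturbed, which is exactly the failure mode described above.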
