In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Leandro Pfleger de Aguiar) wrote:

> "The time of a client relative to its server can be expressed
> T(t) = T(t0) + R(t - t0) + 1/2 D(t - t0)2, where t is the current

I assume you meant D(t - t0)^2.

T, R, and D here are the unknown true values of these parameters, not
necessarily any measured value.  In particular, T has an uncertainty of
half the root delay, plus R and D terms resulting from the difference
between t0 and the time the reference clock was read.
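
The quoted model is easy to evaluate directly.  A minimal sketch, where
the function name and all the numeric figures are illustrative assumptions
(none of them come from the post):

```python
# Evaluating the quoted clock model:
#   T(t) = T(t0) + R*(t - t0) + 0.5*D*(t - t0)**2
# where T is the offset, R the frequency error (rate), and D the drift.

def predicted_offset(t, t0, offset_t0, rate, drift):
    """Offset of the client clock relative to its server at time t."""
    dt = t - t0
    return offset_t0 + rate * dt + 0.5 * drift * dt ** 2

# Illustrative figures: 50 us initial offset, 10 ppm rate error,
# 1 ppb/s drift, extrapolated one hour (3600 s) ahead.
print(predicted_offset(3600.0, 0.0, 50e-6, 10e-6, 1e-9))  # 0.04253 s
```

Note how quickly the R term dominates: after an hour, the 10 ppm rate
error contributes 36 ms against the initial 50 us offset.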

If the signal processing works well, the local clock should be a more
accurate estimate of T than any individual offset measurement.  My gut
feeling is that this isn't as true as it might be, possibly because
relatively linear filters, while necessary in analogue electronics, are
an oversimplification for a computer algorithm.
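
For what it's worth, the one clearly nonlinear stage NTP does have is the
clock filter, which keeps the last eight (offset, delay) samples and uses
the offset from the minimum-delay sample, since low-delay samples suffer
least from queuing.  A much-simplified sketch of that idea (this is not
the ntpd implementation):

```python
from collections import deque

class ClockFilter:
    """Keep the last few (offset, delay) samples; trust the one
    measured with the lowest round-trip delay."""

    def __init__(self, size=8):
        self.samples = deque(maxlen=size)

    def add(self, offset, delay):
        self.samples.append((offset, delay))

    def best_offset(self):
        # Select the offset from the minimum-delay sample.
        return min(self.samples, key=lambda s: s[1])[0]

f = ClockFilter()
for off, dly in [(0.012, 0.300), (0.002, 0.050), (0.020, 0.400)]:
    f.add(off, dly)
print(f.best_offset())  # 0.002 -- the 50 ms-delay sample wins
```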

Pointers to this conclusion include the number of people who assume that
the reported offset is the true error.  That suggests people feel they
can predict the true error by looking at these offsets, i.e. that the
offsets are not randomly distributed or the result of varying link
asymmetry.  If that were actually reliably the case, a good algorithm
would wipe the error out in one fell swoop.  There are two reasons this
doesn't happen:

- one probably cannot really put much confidence in the reported offset
  being the true error (and when things stabilise it probably
  significantly exceeds the true error);

- the phase-locked loop implementation, though only slightly complicated,
  isn't capable of responding rapidly enough without losing too much
  frequency stability.
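
To make that trade-off concrete, here is a bare-bones sketch of such a
phase-locked loop.  The gains, poll interval, and initial offset are
illustrative assumptions, not ntpd's actual constants: raising the gains
makes the loop respond faster to an offset but lets the frequency
estimate wander more.

```python
class Pll:
    """Toy PLL clock discipline: each measured offset nudges both the
    phase and the stored frequency estimate."""

    def __init__(self, phase_gain=0.1, freq_gain=0.01):
        self.freq = 0.0          # accumulated frequency correction (s/s)
        self.phase_gain = phase_gain
        self.freq_gain = freq_gain

    def update(self, offset, interval):
        """Return the phase adjustment for one measured offset."""
        self.freq += self.freq_gain * offset / interval
        return self.phase_gain * offset + self.freq * interval

pll = Pll()
offset = 0.010                   # start 10 ms off
for _ in range(50):              # one poll every 64 s
    offset -= pll.update(offset, 64.0)
print(round(offset, 6))          # well below the initial 10 ms
```

With these gains the loop is underdamped: the offset decays toward zero
through a slow oscillation rather than being removed in one step, which
is the behaviour the two bullet points above describe.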

NB all these error bound calculations assume that every server in the 
chain is well behaved and isn't, for example, losing interrupts, or
hitting the control loop end stop.

> expression, while others, including NTP, estimate the first two terms."

Whilst NTP estimates R, it doesn't estimate T, but rather UTC + T, i.e.
the estimate of the T value is in the local clock value, not the offset
value relative to the immediate upstream server.

> Based on it, should i believe that offset from "ntpq -c rl 0" is enough

No.  It can be in error by almost a second.  I really need to spend some
time researching this to answer it properly, but the worst possible error
for a functioning NTP system is about one (half?) second, because that's
when it considers the root distance too high.  As root delay decreases,
the error due to the upstream R and D terms will eventually become
significant.
Especially whilst root delay is still the dominant term, an analysis of the
actual network environment may well allow one to constrain the error bounds
much more tightly (e.g. a lightly loaded link over modems might have a 
round trip time of 300ms but an error of 2ms, so the error is much better
than would be predicted by the 150ms half round trip time.  The same
configuration except for using ADSL may have a much smaller round trip time,
but a larger error, because of the asymmetry).
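
The asymmetry point can be made concrete.  If the outbound path takes
`out` seconds and the return path `back`, the standard NTP offset
calculation is wrong by (out - back)/2.  The traffic splits below are
assumptions chosen to match the post's modem and ADSL figures:

```python
def offset_error(out, back):
    """Offset error induced by path asymmetry, in seconds."""
    return (out - back) / 2.0

# Near-symmetric modem link: 300 ms RTT split 152 ms / 148 ms.
print(offset_error(0.152, 0.148))  # 0.002 -- far under the 150 ms bound

# Asymmetric ADSL link: 40 ms RTT split 30 ms / 10 ms.
print(offset_error(0.030, 0.010))  # 0.010 -- worse despite the smaller RTT
```

The half-round-trip bound only tells you the worst case; the actual
error is set by how lopsided the two path delays are.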

Note also that because the NTP control loop is underdamped, the worst case
error might actually be worse than simply removing the whole measured
offset every time, even though the probable error will be much better.

[ This feels like it belonged on an existing thread, but was posted as
the start of a new thread. ]

_______________________________________________
questions mailing list
[email protected]
https://lists.ntp.isc.org/mailman/listinfo/questions
