In article <[EMAIL PROTECTED]>, jay <[EMAIL PROTECTED]> wrote:

> I see 'offset' values varies from around -100 ~ -5.
They should vary both sides of zero. If they don't, you either have a clock that exceeds the ~500 ppm maximum systematic frequency error (or, as neither machine has true time, maybe one is +250 and the other -250), or you are losing clock interrupts (probably on the server, in this case).

> If offset values are nonzero, then it means the time between two
> computers are not the same at the last time of the synch, right?

No. It means that there is a difference between the estimated value of the local clock and the estimated value of the server's; both times are subject to measurement error. Once things have stabilised, and ignoring any systematic errors (e.g. asymmetric network delays), the local clock setting will tend to average out these measurement errors and have a variation from true time that is much smaller than the instantaneously measured offset.

> Let's assume that I saved time sequences from both computers.
> Can I say that the server time 'to' is really same as the client time
> 'to-4.095'. (From the above case... assuming that I saved offset

No.

> periodically)
> Or, should I consider delay as well???

Delay is one of the terms used in determining the error bounds.

> I found 'k9' on the web which makes sure that the clock on PC is
> synchronised.

It uses SNTP, the poor relation of the full NTP protocol.

> If I install k9, then it fixes the error(offset) real-time??? Or, am I
> completely wrong?

It applies the full measurement error to each sample (i.e. it adds the measurement error for the server to the time-setting error for the client), and it allows the clocks to wander off between samples at the same rate as if it hadn't been there. Full NTP, on the other hand, smooths the measurement errors to get a much better idea of the true time, and also estimates and corrects for the drift between measurements, so the time is generally better at all times.

Most newcomers seem to think that offset is the figure of merit, but the real figure of merit is only partially contained in the information that you quote above. The jitter, which is not good for such a low delay, is part of the true figure of merit. The other part is the error bound set by the delay and the total time since a real reference clock was read. If you use the wrong figure of merit, you can end up choosing a bad solution because it minimises the wrong parameter. You don't, of course, have true time here, so the reference clock is the software clock on the server machine.

Also note that laptops tend to make bad timekeepers, as their CPU clocks continually vary in frequency as a result of power management.
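To make the offset/delay relationship concrete, here is a minimal sketch of the standard (S)NTP on-wire arithmetic; the timestamp values are invented for illustration and have nothing to do with your particular setup:

    # Minimal sketch of the (S)NTP on-wire calculation.
    # T1 = client transmit, T2 = server receive,
    # T3 = server transmit, T4 = client receive (all in seconds).
    def ntp_sample(t1, t2, t3, t4):
        offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated clock difference
        delay  = (t4 - t1) - (t3 - t2)           # round-trip network delay
        # A single sample only pins the true offset down to
        # offset +/- delay/2 (plus whatever error the server itself has),
        # which is why delay enters the error bound.
        max_error = delay / 2.0
        return offset, delay, max_error

    # Invented example timestamps:
    offset, delay, max_error = ntp_sample(100.000, 100.0105, 100.0106, 100.023)
    print(f"offset={offset*1000:.3f} ms, delay={delay*1000:.3f} ms, "
          f"bound=+/-{max_error*1000:.3f} ms")

The point is that one sample only tells you the offset to within about half the round-trip delay, which is why delay (and the time since the last good measurement) belongs in the figure of merit, not offset alone.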
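And to see why a one-shot SNTP correction is noisier than a disciplined NTP clock, a toy simulation helps. The real ntpd clock filter and discipline loop are considerably more sophisticated than a plain average (they prefer low-delay samples and correct the frequency drift as well), so treat this purely as an illustration of the smoothing effect:

    import random, math

    random.seed(1)
    TRUE_OFFSET = -0.004          # pretend the client really is 4 ms slow
    JITTER      = 0.002           # 2 ms of measurement noise (made up)

    def measure():
        # One offset sample = true offset + network/measurement noise.
        return TRUE_OFFSET + random.gauss(0, JITTER)

    def rms(errors):
        return math.sqrt(sum(e * e for e in errors) / len(errors))

    trials = 1000
    single = [measure() - TRUE_OFFSET for _ in range(trials)]
    # Crude stand-in for NTP's filtering: combine 8 samples per estimate.
    combined = [sum(measure() for _ in range(8)) / 8 - TRUE_OFFSET
                for _ in range(trials)]

    print(f"RMS error, one SNTP-style sample : {rms(single)*1000:.2f} ms")
    print(f"RMS error, 8 samples combined    : {rms(combined)*1000:.2f} ms")

On top of that, full NTP estimates and corrects the drift rate between samples, which SNTP-style tools do not, so the advantage in practice is larger than this toy example suggests.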
