Guys, is there a misunderstanding here? The time available to all system components ultimately comes from an atomic variable maintained by the kernel, usually in seconds and nanoseconds. The time and frequency of this variable are disciplined by NTP, either via another server or by a local reference clock. In NTP terms, the precision is defined as the time required to read the clock, usually via gettimeofday() or a similar system call. On modern systems the precision is typically less than one microsecond. Note that the resolution of the NTP timestamp itself is 232 picoseconds, or about 0.2 nanoseconds.
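To make the two numbers concrete, here is a small sketch (my own illustration, not anything from the NTP distribution) that estimates the precision in the sense above, i.e. how long it takes to read the system clock, and compares it with the granularity of an NTP timestamp, whose 32-bit fraction field gives 2^-32 s of resolution:

```python
import time

def clock_read_time(samples=100000):
    """Median interval between consecutive reads of the system clock,
    in seconds -- a rough stand-in for NTP's notion of 'precision'."""
    deltas = []
    prev = time.clock_gettime(time.CLOCK_REALTIME)
    for _ in range(samples):
        now = time.clock_gettime(time.CLOCK_REALTIME)
        if now > prev:                 # keep only ticks where the clock advanced
            deltas.append(now - prev)
        prev = now
    deltas.sort()
    return deltas[len(deltas) // 2]

# An NTP timestamp carries a 32-bit fraction of a second, so its
# granularity is 2**-32 s, about 232.8 picoseconds.
ntp_granularity = 2.0 ** -32

print("clock read time : %.3e s" % clock_read_time())
print("NTP granularity : %.3e s (%.1f ps)" % (ntp_granularity,
                                              ntp_granularity * 1e12))
```

On typical hardware the first number comes out well under a microsecond, which is the point being made: the wire format can represent time far more finely than any machine can read it.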
You ask about the accuracy of an NTP time server. This depends on a number of factors: machine-room temperature variations, network delay variations, etc. With a GPS receiver and a pulse-per-second (PPS) signal, typical accuracies are in the low tens of microseconds. Clients sharing a common Ethernet with such a server typically show errors of less than 100 microseconds. Conventional wisdom suggests better than 1 ms for campus networks, better than 10 ms with fast Internet connections, and better than 50 ms on all but really broken paths. Much more information is available at www.ntp.org.

Dave

J de Boyne Pollard wrote:
> FC> Furthermore, even synchronized time is not synchronized
> FC> to the precision of NFS timestamps. So not only is it not
> FC> realistic to assume "precisly synchronized time" (relative
> FC> to timestamp granularity), it is impossible.
>
> SR> How accurate are network time servers?
>
> Are you asking how _accurate_ they are or how _precise_ they are?

_______________________________________________
questions mailing list
[email protected]
https://lists.ntp.org/mailman/listinfo/questions
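The network-delay dependence mentioned above comes out of the standard NTP on-wire calculation (RFC 5905), which estimates a client's clock offset and the round-trip delay from the four timestamps of one request/response exchange. A sketch, with made-up timestamp values for illustration:

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP on-wire calculation (RFC 5905).
    t1: client transmit, t2: server receive,
    t3: server transmit, t4: client receive (all in seconds)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated client clock error
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    return offset, delay

# Made-up exchange: the client's clock is 5 ms ahead of the server's,
# and the round trip takes 20 ms split evenly between the two directions.
offset, delay = ntp_offset_delay(100.000, 100.005, 100.006, 100.021)
# offset is about -0.005 s, delay about 0.020 s
```

Note that the offset estimate assumes the outbound and return delays are equal; any asymmetry between them shows up directly as error, which is why the quoted accuracies degrade as paths get longer and more variable.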
