In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] (Bryan Henderson) wrote:
> for which there is a code assigned is described as, "uncalibrated
> local clock used as a primary reference for a subnet without external
> means of synchronization."

The thread was actually about NTP, which, while it still has this
function, is clearly designed on the assumption that all reference
clocks provide true time. RFC 1305 doesn't seem to have a local clock
concept, and I think its inclusion in later versions is more pragmatic
than an indication of desirability. There are other protocols that are
designed to select amongst free-running clocks.

> The protocol also gives you a means of specifying precision so that an
> SNTP server could say, "It's UTC 2006-08-17 13:34:01 give or take 2048
> seconds" if it wanted.

Precision doesn't have that meaning. It is a measure of how accurately
you can read the clock on the server, either because the clock is low
resolution or because of the time it takes to physically read a
high-resolution clock. This is consistent with the SNTP and NTP
specifications.

The value in the packet that indicates the uncertainty in the time due
to drift is root dispersion. Root distance, in this case, in effect
gives the error due to the initial setting of the clock.

Note that the actual RFC 2030 document requires dispersion and delay to
be set to zero and defines them in terms that don't make a special case
of an uncalibrated local clock. However, I think my interpretation is
more in line with the intended meaning of these fields. I presume the
intention of using zero is to indicate that the fields are invalid.

I have a feeling that the W32Time protocol reflects these fields
unchanged, so, for an unsynchronised client, the result tends to
indicate poor-quality time.
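
To make the distinction concrete, here is a minimal sketch of how a
client might decode the quality fields from a reply packet, going by
the field layout in section 4 of RFC 2030. The function name and the
Python framing are my own, purely for illustration, and not taken from
any actual NTP implementation:

    import struct

    def decode_sntp_quality(packet: bytes) -> dict:
        """Decode clock-quality fields from a 48-byte SNTP/NTPv3 reply."""
        if len(packet) < 48:
            raise ValueError("SNTP packet must be at least 48 bytes")

        # Precision: signed 8-bit exponent; the server's clock can be
        # read to roughly 2**precision seconds. Resolution, not accuracy.
        precision_exp = struct.unpack("!b", packet[3:4])[0]

        # Root delay: signed 32-bit fixed point, fraction at bit 16;
        # total round-trip delay to the primary reference source.
        root_delay = struct.unpack("!i", packet[4:8])[0] / 2**16

        # Root dispersion: unsigned 32-bit fixed point, fraction at
        # bit 16; the accumulated error bound relative to the primary
        # reference, which is where drift-related uncertainty shows up.
        root_dispersion = struct.unpack("!I", packet[8:12])[0] / 2**16

        return {
            "stratum": packet[1],
            "precision_s": 2.0 ** precision_exp,
            "root_delay_s": root_delay,
            "root_dispersion_s": root_dispersion,
        }

On a server whose reference is an uncalibrated local clock, an
implementation following RFC 2030 to the letter would report
root_delay_s and root_dispersion_s as zero here, which, as noted above,
is presumably meant to mark the fields as invalid rather than to claim
perfect time.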
