On Fri, Jun 23, 2017 at 04:49:49PM -0400, Chris Perl wrote:
> What I'm trying to understand is why on machine_b, I consistently see
> a "Root delay" and "Root dispersion" of 15us in the output of `chronyc
> ntpdata'. It doesn't vary; it's always 15us.
> I've verified that the packets themselves flowing from server to
> client have a "Root Delay" and a "Root dispersion" of 0 (via tcpdump),
> so I'm guessing this must be getting calculated on the client, but I
> can't figure out where or how.
The root delay and dispersion fields printed by the ntpdata command
are the values from the received packet. They should be the same as
printed by tcpdump. Can you post tcpdump -v -x output?
The reason why they are always 15 microseconds is that the fields have
a 32-bit fixed-point format with 16 fractional bits, i.e. a resolution
of about 15 microseconds, and chronyd as a server rounds them up. So,
even if it calculates its delay and dispersion as 1 microsecond, they
will still be rounded up to 15 microseconds. It's a limitation of the
NTPv4 protocol. I'd like to improve this in NTPv5 when the NTP WG
starts working on a new version.
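To illustrate the rounding (a sketch of the arithmetic, not chronyd's
actual code): the wire format has 16 fractional bits, so one least
significant bit is 1/65536 s, roughly 15.26 microseconds, and any
non-zero value rounded up lands on at least that.

```python
import math

NTP_FRAC_BITS = 16  # NTP "short format": 16.16 fixed point

def to_ntp_short(seconds, round_up=True):
    """Convert seconds to the 32-bit NTP short format (16.16 fixed point)."""
    units = seconds * (1 << NTP_FRAC_BITS)
    return math.ceil(units) if round_up else int(units)

def from_ntp_short(value):
    """Convert an NTP short-format value back to seconds."""
    return value / (1 << NTP_FRAC_BITS)

# A server-side dispersion of 1 microsecond still encodes as 1 LSB
# when rounded up, which the client decodes as ~15.26 microseconds:
wire = to_ntp_short(1e-6)
print(wire, from_ntp_short(wire) * 1e6)
```

With rounding up, every non-zero delay or dispersion below one LSB is
reported as the same ~15 microsecond value, which matches what you see
in `chronyc ntpdata'.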
If you wanted to get a more accurate root distance on the client, you
could set the delay of the SHM refclock on the server to 0 and add
half of the delay to the precision instead. I think it might have a
small effect on the timekeeping performance though.
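As a hypothetical chrony.conf fragment (the 0.0002 delay is purely
illustrative, substitute your measured value), that would mean
changing something like:

```
refclock SHM 0 delay 0.0002
```

to zero delay with half of it folded into the precision option:

```
refclock SHM 0 delay 0.0 precision 0.0001
```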