My test server, aided by rrdtool, has narrowed my pool of potential remote NTP servers down to 17 healthy candidates with consistent latency, low jitter, and similar theories on the one true tick. That gives me four to hand to each of my peered internal servers, plus an extra to toss onto one of them at random.
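
For concreteness, here's roughly what one of the four internal boxes would end up with (the hostnames are invented placeholders, not my actual picks):

    # ntp.conf on one internal server: four vetted remote sources
    server ntp1.example.net iburst
    server ntp2.example.net iburst
    server ntp3.example.net iburst
    server ntp4.example.net iburst

    # symmetric peering with the other three internal servers
    peer ntp-b.corp.example.com
    peer ntp-c.corp.example.com
    peer ntp-d.corp.example.com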

One of my internal servers dials NIST (ACTS) periodically. With the server set it has, ntpd never raises the polling interval above the minimum (12, roughly 68 minutes) and never selects the refclock as a time source. The jitter on it seems rather high as well. I was under the impression that if I configured both NIST numbers, the refclock driver could deduce the line latency automatically (it always reads 0 in the ntpq billboard) and would take multiple samples per call to reduce jitter. Should I be using 'burst' on that server statement (is that even valid for a refclock)? I'd like to see ntpd stretch the poll interval up to the max if it can; calling NIST every hour seems a bit excessive.
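
For reference, this is the shape of the config I'm working with (the phone syntax is my reading of the type 18 driver docs, so treat it as a sketch; double-check the numbers against NIST's current listing before dialing):

    # NIST dial-up refclock (ACTS, driver type 18)
    server 127.127.18.0 minpoll 12 maxpoll 17
    # both NIST numbers; my understanding is they're tried in turn
    phone atdt13034944774 atdt13034944775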

Also, since that server has access to a reference clock, would I be better served configuring only my peers on it and no external IP servers, putting those on my three other internal servers instead?
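
That is, something like this on the refclock box (peer hostnames invented as above):

    # refclock box: ACTS plus internal peers only, no external servers
    server 127.127.18.0 minpoll 12 maxpoll 17
    peer ntp-b.corp.example.com
    peer ntp-c.corp.example.com
    peer ntp-d.corp.example.com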

I've also seen burst with extended poll intervals recommended for remote time sources in a few places. If a given remote time source is configured on only one of my servers, is using burst with a maxpoll of 11 considered abusive? Should I bump minpoll as well if I do?
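
In other words, something along these lines (hostname invented):

    # remote source configured on only this one server
    server tick.example.org burst maxpoll 11
    # or, if bumping minpoll is the polite version:
    # server tick.example.org burst minpoll 10 maxpoll 11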

Joshua Coombs