David,
If you are using the kernel discipline, be advised it is not designed to
handle offsets greater than 0.5 s. Your question about the control law
is answered in the briefings on the NTP project page. Technically, the
control law is a hybrid, adaptive-parameter, phase/frequency feedback
loop, which of course doesn't tell you much. Control theorists would
recognize it as a somewhat modified second-order phase-locked loop.
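For intuition, here is a minimal sketch of how such a second-order phase/frequency feedback loop slews out an initial offset. The fixed gains `kp`/`ki` and the 64 s poll interval are illustrative assumptions, not ntpd's actual adaptive parameters:

```python
# Minimal sketch of a second-order phase/frequency feedback loop
# (the general family the ntpd discipline belongs to). Gains and
# poll interval are illustrative, not ntpd's adaptive values.

def discipline(offset=2.0, freq_error=50e-6, dt=64.0,
               kp=0.05, ki=5e-4, steps=2000):
    """Slew an initial phase offset (seconds) to zero despite a
    constant clock frequency error (s/s). Returns the final offset
    and the frequency correction the loop has learned."""
    freq_corr = 0.0                      # integral (frequency) branch state
    for _ in range(steps):
        freq_corr += ki * offset         # frequency branch integrates phase
        phase_corr = kp * offset         # phase branch is proportional
        # each poll interval the clock drifts by its residual
        # frequency error, minus the corrections the loop applies
        offset += (freq_error - freq_corr) * dt - phase_corr
    return offset, freq_corr

final_offset, learned = discipline()
```

After enough iterations the phase offset is driven to zero and the integrator converges on the clock's true frequency error, which is exactly the value ntpd persists in the drift file.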
Folks who can't stand to be jerked should first mumble ntptime -f 0.
Then remove the ntp.drift file and put "disable kernel" in the
configuration file.
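Spelled out as commands, that sequence might look like the following. The paths are common defaults and an assumption (your distribution may keep the drift file elsewhere); run this with ntpd stopped, and note that ntptime needs root:

```shell
ntptime -f 0                  # zero the kernel's frequency correction
rm -f /etc/ntp.drift          # discard the stored frequency estimate
# then add this line to ntp.conf before restarting ntpd:
#   disable kernel
```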
Dave
David T. Ashley wrote:
Thanks to everyone who responded. I do appreciate all of the good advice
about not using Stratum 1 servers during development, etc.
But the replies did not zero in on the key question, which is a control-law
question. When ntpd starts and the server's time differs from the true time by
some offset (say, 2 seconds), what "control law" does it apply to reconverge
the server's time with the true time?
Again, when I start ntpd with /var/lib/ntp/drift not present, it always
works. I've noticed that ntpd waits about an hour and 15 minutes before using
the measured time difference to adjust the "frequency" of the kernel timing
parameters toward convergence. It is only when the drift file exists that ntpd
doesn't seem to try to converge the server's time with the true time.
I've adjusted my startup to always delete /var/lib/ntp/drift, and this works
fine. But I'd like to understand the control law involved.
By "control law" I mean the discrete time transfer function that influences
the stability and long-term convergence characteristics of the system.
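In transfer-function terms, a generic second-order discrete-time loop of this kind can be written with phase offset $x_n$, frequency correction $f_n$, poll interval $T$, and fixed gains $K_p$, $K_i$ (illustrative only; not necessarily ntpd's exact, adaptive law):

```latex
x_{n+1} = (1 - K_p)\,x_n - T\,f_n , \qquad
f_{n+1} = f_n + K_i\,x_n
```

The closed-loop characteristic polynomial is $z^2 - (2 - K_p)\,z + (1 - K_p + K_i T)$, and stability requires both roots to lie inside the unit circle, which is the kind of long-term convergence property I'm asking about.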
I appreciate all of the replies. But none of them seem centered on the core
issue, which is the control law.
Thanks, Dave.
_______________________________________________
questions mailing list
[email protected]
https://lists.ntp.isc.org/mailman/listinfo/questions