On Sep 17, 2010, at 10:58 AM, Daniel Havey wrote:
> Hmmm, I'm not sure that I believe you guys ;^)  

So you've said before, and I've certainly gotten the impression that you would 
prefer to make your own mistakes rather than heed advice about best practices.

> This is a wireless emulator on a wired testbed, and the packets record a 
> start of transmission time on one computer, and then a Start of Reception 
> time (SoR) on another computer.  If the clocks have different times then the 
> calculation of noise caused by other packets will get screwed up because the 
> receiving computer will either stay in RxPending too long, or not long enough.
> 
> I think that the slewing behavior is worse than the ntpdate behavior of 
> suddenly changing the time, because the time will remain wrong for a longer 
> period of time.

Running ntpdate -b causes the clock to be forcibly reset after exchanging 8 NTP 
packets to try to estimate and take into account round-trip time.  However, that 
limited set of measurements is fairly susceptible to network delays from a 
momentary traffic peak, routing latency, or other causes, and the -b flag 
invokes settimeofday() rather than the more graceful correction of the clock 
via adjtime(), which ntpd, or even ntpdate without the -b flag, would use.

Running ntpdate every second, or 3 times every five seconds, would involve 
~20,000 packets per machine per hour, compared with the half-dozen or so needed 
by ntpd over the same interval.  I can't imagine why anyone would prefer to 
generate nearly four orders of magnitude more network traffic in order to keep 
significantly worse time than you would by simply running ntpd with its 
default config.

As someone else just noted, the traffic volume generated by that script would 
be considered abusive to public NTP servers.  If it truly was recommended by 
some hardware manufacturer, whoever it was is simply not qualified to give 
advice about keeping good network time.

The approach they've recommended is unlikely to keep clocks synchronized closer 
than on the order of tens of milliseconds, with 20-50ms jumps very likely 
happening every few seconds.  Running ntpd even with a single network source is 
likely to achieve millisecond-level synchronization without abrupt changes, and 
with a bit more work it can provide ~1 ms to sub-millisecond accuracy across 
fleets of hundreds of machines.

Feel free to measure both approaches yourself and compare...

Regards,
-- 
-Chuck

_______________________________________________
questions mailing list
[email protected]
http://lists.ntp.org/listinfo/questions