If he is using broadcast mode, what earthly difference does it make how often he broadcasts? After 10:02:25:57 of uptime, NTPD on my time server has used only 00:04:25 of CPU time. A broadcast costs NTPD literally less than a microsecond, and it reaches the clients in less than a millisecond. My server broadcasts the time to my home network every 16 seconds. It is also connected to 9 nearby stratum 2 time servers with a minpoll of 4 and a maxpoll of 5 (i.e., 16 to 32 seconds). Unless this list counts, no one has ever criticized me for querying a time server too frequently.

It is completely unrealistic to query a time server every 1024 seconds (poll = 10) and expect accurate time through a WAN whose delay varies continually and unpredictably between about 40 ms and about 200 ms. Where the normal delay is closer to 40 ms, one reading through a delay of 180 ms can throw the time off for hours, given that offset is significantly and negatively correlated with delay.

For example, one of my favorite time servers is clock02.sctn01.burst.net (or clock01). This guy often uses ntp.coi.pw.edu.pl, at a technical university in Warsaw, Poland, as a time server. When he connects to that server, his proffered time is about 17 ms less than the consensus of the other 8 stratum 2 servers. Much the same happens when he uses a Microsoft time server located in Redmond, WA.

Until and unless Internet congestion abates, and until and unless NTPD figures out why delay and offset are highly negatively correlated, a poll interval of 32 to 64 seconds is, in my humble opinion, the only way to maintain accurate time.
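For reference, a server-side ntp.conf along the lines described above might look like the sketch below. The hostnames and broadcast address are placeholders, not my actual configuration; `broadcast ... minpoll 4` sets the broadcast interval to 2^4 = 16 seconds.

```
# Sketch only -- hostnames and addresses are placeholders.
# Several nearby stratum 2 servers, polled every 16-32 seconds:
server ntp1.example.net iburst minpoll 4 maxpoll 5
server ntp2.example.net iburst minpoll 4 maxpoll 5
# ... more stratum 2 servers ...

# Broadcast time to the local subnet; minpoll 4 => 2^4 = 16 s interval.
broadcast 192.168.1.255 minpoll 4

driftfile /var/lib/ntp/ntp.drift
```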
Do correct me if I am wrong.

Charles Elliott

> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of unruh
> Sent: Tuesday, July 30, 2013 3:31 PM
> To: [email protected]
> Subject: Re: [ntp:questions] ntpq -p command query
>
> On 2013-07-30, Biswajit Panigrahi <[email protected]> wrote:
> > Hi,
> >
> > I have configured the NTP server and NTP client on two machines.
> >
> > Both are communicating properly. I would like to test, when the
> > connectivity between those two goes down, after how much time the
> > "reach" column in the ntpq -p output becomes zero.
> >
> > For that I stopped the NTP server and executed ntpq -p in the
> > client's console.
> >
> > The reach value still keeps increasing to 377, then gradually
> > decreases to zero. The time it takes to come to zero is almost 20
> > minutes.
> >
> > Can we reduce the time gap?
>
> Why? Why do you care how rapidly the reach value goes to 0?
> ntpd only queries the server occasionally (the default is 2^6 sec,
> about 1 min, on startup, and 2^10 sec, about 20 min, after things
> have stabilized). Each time one of those attempts fails, one bit is
> lost from the reach register, so it will go to zero after about 3
> hrs. That is the way that ntpd works. Now you can, if you wish, have
> the client query more often. If you own the server, that is fine. If
> you do not own the server, that is considered very bad manners, and
> the server may refuse to serve you anymore.
>
> A shorter poll interval means that the offset is smaller but the
> rate correction is more inaccurate, meaning that if your system goes
> down the clock will become inaccurate more rapidly. Since on a local
> net, keeping the times within a few tens of microseconds is very
> doable, it is not clear what you want.
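The reach behavior unruh describes is an 8-bit shift register, displayed in octal (hence 377 when full): each poll shifts it left one bit, and a received reply sets the low bit. A quick Python sketch of that logic:

```python
def update_reach(reach, reply_received):
    """Shift the 8-bit reach register left; set the low bit on a reply."""
    reach = (reach << 1) & 0xFF
    if reply_received:
        reach |= 1
    return reach

# Start fully reachable (377 octal) and simulate the server going silent.
reach = 0o377
polls = 0
while reach != 0:
    reach = update_reach(reach, False)
    polls += 1
print(polls)  # -> 8: eight consecutive failed polls drive reach to zero
```

Eight failed polls at a 128- or 256-second poll interval is roughly 17 to 34 minutes, which is consistent with the ~20 minutes the original poster observed.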
> > > >
> > > > Please find the ntp.conf in the client:
> > > >
> > > > server 10.16.48.19 key 1
> > > > restrict 10.16.48.19 mask 255.255.255.255 nomodify notrap noquery
> > > > restrict 127.0.0.1
> > > > broadcastclient novolley
> > > > broadcastdelay 0
> > > > keys /var/ntp/keys
> > > > trustedkey 1
> > > > logfile /var/log/ntp/ntpd.log
> > > > driftfile /var/log/ntp/ntp.drift
> > > > statsdir /var/log/ntp/
> > > > statistics loopstats peerstats
> > > > filegen loopstats file loopstats type day enable
> > > > filegen peerstats file peerstats type day enable
> > > >
> > > > Any suggestion will be really appreciated.
>
> Stop worrying.
>
> > > > Regards,
> > > > Biswajit
>
> _______________________________________________
> questions mailing list
> [email protected]
> http://lists.ntp.org/listinfo/questions
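Since the quoted client config already enables peerstats logging (`statistics peerstats` / `filegen peerstats`), the delay-offset correlation claimed above can be checked directly from those logs. A rough sketch, assuming the standard peerstats field layout (MJD, seconds past midnight, peer address, status, offset, delay, dispersion, jitter):

```python
import math

def delay_offset_correlation(lines):
    """Pearson correlation between delay (6th field) and offset (5th field)
    in ntpd peerstats records. Returns 0.0 when there is too little data."""
    offsets, delays = [], []
    for line in lines:
        fields = line.split()
        if len(fields) < 6:
            continue  # skip malformed records
        offsets.append(float(fields[4]))  # offset in seconds
        delays.append(float(fields[5]))   # round-trip delay in seconds
    n = len(offsets)
    if n < 2:
        return 0.0
    mo, md = sum(offsets) / n, sum(delays) / n
    cov = sum((o - mo) * (d - md) for o, d in zip(offsets, delays))
    so = math.sqrt(sum((o - mo) ** 2 for o in offsets))
    sd = math.sqrt(sum((d - md) ** 2 for d in delays))
    if so == 0 or sd == 0:
        return 0.0
    return cov / (so * sd)
```

A strongly negative value (say, below -0.5) for a given peer would support the claim that high-delay samples pull the offset down.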
