I don't have any comparisons between syncing to a stratum 2 vs. a stratum 1 
server. I am very happy with a two-level hierarchy - GPS-based NTP appliances 
at stratum 1 and all the servers and network gear at stratum 2.

In my GE LAN environment, with 64-second polling, the servers claim offsets 
in the 10 us range.  

This is just my 2 cents:

I would go with multiple commercial NTP servers (e.g. Symmetricom, Spectracom, 
etc.).  Install at least two GPS antennas on the roof; you could also split 
each GPS signal out to different NTP servers for equipment resiliency and for 
additional servers that can connect to your isolated zones.  

Configure all of your servers and network equipment to use the same NTP servers 
in their zone.  Don't use your network routers and switches as time servers.

NTP server appliances can support 2-4k NTP requests per second per interface.  
If you have 10k servers and hard-code the polling interval (maxpoll) to, say, 
64 seconds, your load is only about 156 requests/sec on average.  No big deal.  
But if you found that the requests were bunching up, you still have options: 
distribute the load across the NTP server farm, or maybe a larger maxpoll is 
OK for you.   
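
(The arithmetic: 10,000 clients each sending one request every 64 seconds is 
10,000 / 64, or roughly 156 requests/sec for the whole fleet; split across, 
say, three appliances that is about 52 requests/sec each, and at ntpd's 
default maxpoll of 1024 seconds the same fleet would generate only about 10 
requests/sec.)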

I find that it is a LOT easier to manage a small number of stratum 1 appliances 
and a common server config than a multi-level hierarchy with rules for the 
server admins to follow about which servers to put in the config file.  Those 
configs always get hosed up, and then you have to audit them and fix the ones 
in error.  If you put the servers in DNS and have everyone use the exact same 
config, you will minimize the chances that NTP will be configured wrong.
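
As a rough illustration (the hostnames are made up - substitute whatever DNS 
names you give the appliances in each zone), the common client config could be 
as small as:

    # /etc/ntp.conf - identical on every server in the zone
    # ntp1/ntp2/ntp3 are the DNS names of the stratum 1 appliances (hypothetical)
    server ntp1.example.com iburst minpoll 6 maxpoll 6   # 2^6 = 64 sec polling
    server ntp2.example.com iburst minpoll 6 maxpoll 6
    server ntp3.example.com iburst minpoll 6 maxpoll 6

    # serve time if asked, but refuse remote config, peering and status queries
    restrict default kod nomodify notrap nopeer noquery
    restrict 127.0.0.1

    driftfile /var/lib/ntp/ntp.drift

Since the names live in DNS, repointing a zone at a different appliance is just 
a record change; nobody has to touch the config files.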

No matter what you do, you will likely get a few problem servers that have a 
hard time maintaining your desired offset, but you will know that it is not 
related to something in the hierarchy.
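
For spotting those, ntpq on the box in question is usually enough.  The output 
below is only illustrative (made-up addresses and numbers), but on a quiet GE 
LAN against stratum 1 appliances the offset and jitter columns (milliseconds) 
should sit well under a millisecond:

    $ ntpq -pn
         remote           refid      st t when poll reach   delay   offset  jitter
    ==============================================================================
    *10.0.0.11       .GPS.            1 u   23   64  377    0.210    0.005   0.012
    +10.0.0.12       .GPS.            1 u   41   64  377    0.198   -0.011   0.015

A box that can't hold that is usually fighting its own hardware or load, not 
the time source.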

We had to get special NTP client software for any Windows servers that needed 
accurate timing.  Regular desktops hit the domain controller (which points to 
the stratum 1 servers) and are probably in the 10+ ms range, but that is due 
to the "quality" of the standard Windows timekeeping and not because they are 
going through a stratum 2 server.

Good luck.


David

-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On Behalf 
Of Andy Yates
Sent: Thursday, May 21, 2009 7:13 PM
To: [email protected]
Subject: [ntp:questions] Query about NTP accuracy

Does anybody have any figures that show the effect on accuracy of an
NTP v3 client using a stratum 1 server rather than a stratum 2 or 3
server? It's all in a GE LAN based scenario: commercial stratum 1
servers connected to GPS, and the stratum 2 and 3 servers are typically
dedicated Linux boxes.

The reason is that I would rather scale by adding strata - it's a very
big data center with thousands of clients and it has several "zones" that
are isolated. However, some opinion is suggesting we run IRIG-B between
the GPS receiver and a bunch of stratum 1 servers and have clients access
these directly. That is much more expensive, and any increase in accuracy
as experienced by the clients may be negligible.

However, I'm being pressed to supply an SLA for accuracy. My argument is
that although you can get your stratum 1 server to synchronize to within
microseconds of UTC, as soon as the client uses NTP v3 over the LAN,
even a GE LAN, the accuracy degrades, and putting well-designed,
well-specified strata between the boxes is not going to degrade accuracy
enough to warrant purchasing many stratum 1 appliances.

Thoughts?

Regards
Andy

_______________________________________________
questions mailing list
[email protected]
https://lists.ntp.org/mailman/listinfo/questions