In <[EMAIL PROTECTED]> Brad Knowles <[EMAIL PROTECTED]> writes:

> At 10:06 AM -0500 2005-09-11, wayne wrote:
>
>>  Actually, network topology is only an approximation of what we want
>>  because different links have different latencies and jitter.
>
>       Topology is the closest approximation of what we want before we 
> start actually measuring latencies and jitter, because topology 
> directly maps to the number of routers, bridges, switches, etc... 

Well, what you were talking about doing was actually measuring stuff
via the TTL.  Using either ICMP ECHO or NTP packets would be more
productive and no more costly for our purposes, although it would
still require measuring stuff.  As you pointed out in a different
post, TTL counts don't account for all the network topology factors
due to ATM clouds and such.


So, once you are talking about measuring stuff, I think it would be
best to measure the relevant stuff: the latency and jitter you
actually see on NTP exchanges with each candidate server.
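To make that concrete, here is a rough sketch (mine, not anything in
ntpd) of what "measure the relevant stuff" could look like: fire a few
minimal SNTP requests at each candidate and rank by round-trip delay
and jitter, rather than inferring closeness from TTL hop counts. The
sample count, timeout, and the delay-then-jitter ranking are all my
assumptions.

```python
# Sketch only: probe candidate pool servers with minimal SNTP
# (mode 3) requests and rank them by measured round-trip delay
# and jitter, instead of inferring closeness from TTL hop counts.
import socket
import statistics
import time

def probe(server, samples=4, timeout=1.0):
    """Return (median_delay, jitter) in seconds, or None on failure."""
    delays = []
    for _ in range(samples):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            # 48-byte SNTP client request: LI=0, VN=3, Mode=3
            t0 = time.monotonic()
            sock.sendto(b'\x1b' + 47 * b'\x00', (server, 123))
            sock.recvfrom(48)
            delays.append(time.monotonic() - t0)
        except OSError:
            pass  # timeout or unreachable; just collect fewer samples
        finally:
            sock.close()
    if len(delays) < 2:
        return None
    return statistics.median(delays), statistics.pstdev(delays)

def rank(results):
    """results: list of (server, delay, jitter) tuples, lower is
    better; sort by delay, with jitter as the tie-breaker."""
    return sorted(results, key=lambda r: (r[1], r[2]))
```

A client would probe each candidate and keep the head of `rank()`'s
output; real ntpd of course does far more careful filtering than this
(the clock filter and selection algorithms), but the principle is the
same.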


You were also talking about using IP multicast.  There are many good
reasons why IP multicast has never taken off.  Unfortunately, I can't
discuss all the reasons why because I'm still under an NDA with Tibco
and their Rendezvous group.  Let's just say that doing multicast is
hard, and people are willing to pay a lot of money to those who can
make it work.


>>  If you want to put a bunch of work into configuring your NTP clients,
>>  you should get the complete NTP pool list, send NTP packets to each of
>>  them and see what kind of latency/jitter you get, and use the best.
>
>       Actually, one of the things we're looking at is implementing a 
> "servers" directive.  This won't make it into 4.2.1-RELEASE, and 
> probably won't make 4.2.2-RELEASE, but might make 4.2.3-RELEASE.
>
>       The idea being that you take all of the IP returned by the DNS 
> for a given label, and sets up unicast client/server relationships 
> with *all* of them.  It would then automatically sort through the 
> list and eliminate the lower-quality servers and make use of the 
> higher-quality servers.  Of course, ntpd would still only make use of 
> the top ten servers that it knows about, regardless of their source.

A "servers" directive might well be very useful.

I'm not sure it would be a good idea to use *all* of the IP addresses
returned, as that would be a pretty big increase in server load in
the short run, and many clients appear not to stick around for long.
Maybe start with 4 or 5 and gradually replace the worst ones, keeping
track of which ones weren't so bad in case the new ones are worse.

Even then, it would still be useful to start with a group that has
some chance of being more local than the random pool server.
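The "start small, churn slowly" idea above could work roughly like
this (again a hypothetical sketch of mine, not proposed ntpd code; the
pool size, scoring, and rollback rule are my assumptions):

```python
# Sketch: keep a small working set of servers and, each round,
# replace only the single worst performer.  Past scores are kept
# so a previously-demoted server can come back if it was actually
# better than the current worst.
def churn(working, scores, candidates, history):
    """working: current server list; scores: server -> measured delay
    (lower is better); candidates: untried servers; history: scores
    of servers tried and dropped earlier.  Returns new working set."""
    worst = max(working, key=lambda s: scores[s])
    replacement = None
    # Prefer bringing back a known server that beat the current worst.
    for server, old_score in sorted(history.items(), key=lambda kv: kv[1]):
        if server not in working and old_score < scores[worst]:
            replacement = server
            break
    # Otherwise try a fresh candidate from the pool.
    if replacement is None and candidates:
        replacement = candidates.pop()
    # Remember how the demoted server did, for possible rollback.
    history[worst] = scores[worst]
    if replacement is None:
        return working
    return [s for s in working if s != worst] + [replacement]
```

The point of the `history` dict is exactly the rollback case mentioned
above: if a newly tried server turns out worse than one we dropped,
the dropped one is the first candidate to come back.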


Ask has also talked about using specialized DNS servers with
geographic information in them.  Again, this is obviously not perfect,
but it is going to be pretty common for a DNS server at a given
location on earth to serve DNS clients that are both geographically
close by and close by on the network.  So, if the small number of NTP
pool domain name servers use specialized software to return a
different set of NTP servers that are "close by" to the IP address of
the name server requesting the pool.ntp.org A records, it could still
be a win.

Yes, geography is only a first-order approximation of the network
connections, but it is still better than random.


-wayne
_______________________________________________
timekeepers mailing list
[email protected]
https://fortytwo.ch/mailman/cgi-bin/listinfo/timekeepers
