On 27 Dec, 2012, at 08:05 , Chris Albertson <[email protected]> wrote:
>> You do not need to use something like the Clock-Block to build a very good
>> NTP server, but if you want to build the *ultimate* server it is part of
>> the mix.
>
> Yes, this is true. The server can be "very good", meaning that if it
> were better, the clients it serves could not "know" the difference. A
> simple analogy: if a wall clock moved its hands with millisecond
> precision, it would not serve its clients (human eyeballs) any better
> if it moved them with nanosecond precision, because human perception is
> measured in ms, not ns. Same with the time server: it communicates with
> its clients over a network that has some uncertainty in the delay, and
> ultra-precision is lost. So nanosecond-level timekeeping in the server
> is not required. You can do microsecond-level timekeeping with the
> standard TTL can on most motherboards. However, this list is for
> "nuts", and you might think it is fun to try to do 1000 times better
> timekeeping than is needed; in that case you will need some kind of
> specialized clock hardware.
I don't think I buy this. It takes about 70 milliseconds for a signal
transmitted from a GPS satellite to be received on the ground, but we don't
use that fact to argue that sub-70 ms timing from GPS is impossible. If you
have a network of high-bandwidth routers and switches doing forwarding in
hardware, carrying no traffic other than the packets you are timing (I have
access to lab setups that meet this description), you can observe packet
delivery times that are stable at well under the microsecond level even
though the total time required to deliver a packet is much larger.

If you add competing traffic, as in real-life networks, the packet-to-packet
variability becomes much worse, but this is sample noise that can be
addressed by taking larger numbers of samples and filtering based on the
expected statistics of that noise. That is, the level of noise affecting
each individual sample entering the filter does not alone predict the noise
level of the result coming out; the latter also depends on the number of
samples and the quality of the filter's model of the noise.

Note that I often see claims of time synchronization with PTP at the 10 ns
level or better. Since this level of synchronization is usually achieved by
the brute-force method of measuring transit times across every network
device on the path from source to destination, I have no doubt that what
NTP can do will necessarily be worse than this, but I know of no basis that
would predict whether NTP's "worse" is necessarily 10,000x worse or can be
just 10x worse. Knowing that would require actually trying it, to measure
what can be done.
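A toy simulation can illustrate the filtering point: even when each
individual exchange carries milliseconds of queueing noise, selecting the
exchange with the smallest round-trip delay (the idea behind NTP's clock
filter) yields an offset estimate far better than any single sample
suggests. This is only a sketch; the exponential queueing-delay model and
the 5 ms / 2 ms parameters are made-up assumptions, not measurements:

```python
import random

random.seed(42)

TRUE_OFFSET = 0.0  # client and server clocks in sync; we estimate this
BASE_DELAY = 5e-3  # assumed 5 ms minimum one-way path delay

def one_exchange():
    """One NTP-style round trip. Outbound and return paths each pick up
    random queueing delay; the offset estimate is corrupted by half the
    asymmetry between the two."""
    out_noise = random.expovariate(1 / 2e-3)  # ~2 ms mean queueing delay
    ret_noise = random.expovariate(1 / 2e-3)
    delay = 2 * BASE_DELAY + out_noise + ret_noise       # round-trip delay
    offset = TRUE_OFFSET + (out_noise - ret_noise) / 2   # estimation error
    return offset, delay

def min_delay_filter(n):
    """Take n exchanges and keep the one with the smallest round-trip
    delay: least queueing, hence least asymmetry error."""
    samples = [one_exchange() for _ in range(n)]
    return min(samples, key=lambda s: s[1])[0]

# Average absolute offset error over many trials, unfiltered vs. filtered.
TRIALS = 200
err_raw = sum(abs(min_delay_filter(1)) for _ in range(TRIALS)) / TRIALS
err_filtered = sum(abs(min_delay_filter(64)) for _ in range(TRIALS)) / TRIALS
print(f"mean error, 1 sample:   {err_raw:.6f} s")
print(f"mean error, best of 64: {err_filtered:.6f} s")
```

The filtered error comes out well below the single-sample error, which is
the point: the noise on each sample entering the filter does not by itself
bound the noise of what comes out.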
What is certain, however, is that if you want to measure this at the levels
that might be possible, you probably want nanosecond-level clock hardware
both in the server, so that it can produce time of this quality, and in the
clients, so that you can measure how well they do directly rather than
having the NTP implementation grade its own homework. I don't think this is
a waste of time at all. The best case is that the ability to measure at
this level would lead to an understanding of what it would take to transfer
time with NTP at this level, but even the worst case is that one would be
able to support one's assertions about what can't usefully be done with
data, and that's not bad either. If no one tries, then no one will ever
know.

Dennis Ferguson

_______________________________________________
time-nuts mailing list -- [email protected]
To unsubscribe, go to
https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
and follow the instructions there.
