Re: [ntp:questions] no drift-file on 2008 R2 vps and the time diff. is getting bigger and bigger?
In article <5414139e.60...@gmx.at>, gooly <go...@gmx.at> wrote:

> Hi, I just installed NTP (Meinberg, once on Win7, once on a VPS running
> 2008 R2). On my Win7 machine I see the drift file in
> C:\Program Files (x86)\NTP\etc, but on the VPS there is no drift file,
> and after starting (and several restarts) the time difference keeps
> getting bigger and bigger. What's going wrong? NTP is running; I can
> see it in the Task Manager. The conf file (without comments):
>
>   restrict default nomodify notrap nopeer noquery
>   restrict 127.0.0.1
>   restrict -6 ::1
>   driftfile C:\Program Files (x86)\NTP\etc\ntp.drift
>   server 0.de.pool.ntp.org iburst
>   server 1.de.pool.ntp.org iburst
>   server 2.de.pool.ntp.org iburst
>   server 1.nl.pool.ntp.org iburst
>   server 2.uk.pool.ntp.org iburst
>   # Use specific NTP servers
>   server 'ts2.aco.net' iburst
>   server 'ts1.univie.ac.at' iburst
>   server '0.at.pool.ntp.org' iburst
>   server '1.at.pool.ntp.org' iburst
>   server ntp1.m-online.net iburst
>   server ptbtime1.ptb.de iburst
>   server 0.de.pool.ntp.org iburst
>   server 1.de.pool.ntp.org iburst
>   server ntps1-0.eecsit.tu-berlin.de iburst
>   server time.fu-berlin.de iburst
>   server ntp.probe-networks.de iburst
>   server zeit.fu-berlin.de iburst
>   # End of generated ntp.conf --- Please edit this to suite your needs

If the NTP daemon is running yet the local clock drifts ~linearly, a common cause is that the daemon does not have sufficient privilege to adjust the local clock, so the clock-adjust requests are being silently ignored by the operating-system kernel.

Joe Gwinn

___
questions mailing list
questions@lists.ntp.org
http://lists.ntp.org/listinfo/questions
Re: [ntp:questions] no drift-file on 2008 R2 vps and the time diff. is getting bigger and bigger?
In article <541459ff.8000...@gmx.at>, gooly <go...@gmx.at> wrote:

> On 13.09.2014 16:09, Joe Gwinn wrote:
>> In article <5414139e.60...@gmx.at>, gooly <go...@gmx.at> wrote:
>>> Hi, I just installed NTP (Meinberg, once on Win7, once on a VPS
>>> running 2008 R2). On my Win7 machine I see the drift file, but on the
>>> VPS there is no drift file, and after starting (and several restarts)
>>> the time difference keeps getting bigger and bigger. [snip]
>>
>> If the NTP daemon is running yet the local clock drifts ~linearly, a
>> common cause is that the daemon does not have sufficient privilege to
>> adjust the local clock, so the clock-adjust requests are being
>> silently ignored by the operating-system kernel.
>
> OK, but 1) I have tried the Windows user SYSTEM, with the same result:
> no time change; and 2) the time user is the NTP default user "ntp",
> which (I guess) by default has all the privileges it needs.

Never mind the details of how startup was supposedly achieved; one can simply ask the operating system to tell us the priority and scheduling class under which the daemon is running. (I don't recall the specific command, but someone will chime in.) This is the quickest way to narrow the possibilities.

> And this would not explain why ntp.drift is not created?

As others have pointed out, if NTP is not succeeding, it need not generate a drift file.

Joe Gwinn
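The ~linear drift described above can be quantified from a series of measured offsets: a least-squares line fit gives the frequency error in parts per million, which is essentially the number ntpd writes to the drift file once it is actually allowed to discipline the clock. A minimal sketch, using synthetic offsets rather than real ntpd output:

```python
# Sketch (synthetic data, not real ntpd output): estimate the clock's
# frequency error in ppm from a series of measured time offsets, as one
# would observe on a machine whose clock drifts ~linearly because ntpd
# cannot adjust it. The 50 ppm figure is an invented example value.
import numpy as np

rng = np.random.default_rng(0)               # reproducible synthetic noise
t = np.arange(0, 3600, 64.0)                 # one hour of 64 s polls
true_ppm = 50.0                              # assumed frequency error
offsets = true_ppm * 1e-6 * t + rng.normal(0, 1e-4, t.size)  # seconds

slope, intercept = np.polyfit(t, offsets, 1) # least-squares line fit
print(f"estimated drift: {slope * 1e6:.1f} ppm")  # approximately 50.0 ppm
```

If the fitted slope is large and steady, the clock is free-running, which is consistent with the privilege problem described above.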
Re: [ntp:questions] ntp-4.2.6p5 on Win 7 x64
In article <84lv9b-i5v@ubuntu-server-1.py.meinberg.de>, Martin Burnicki <martin.burni...@meinberg.de> wrote:

> Nick wrote:
>> Brian, thanks for your useful reply...
>> On Fri, 18 Jul 2014 02:03:48 +, Brian Inglis wrote:
>>> Windows is using the MM timer as its high-precision time source, not
>>> the PM timer, HPET, or TSC.
>
> Huh, I didn't know (and I doubt) that NTP uses the MM timer as a time
> source. Years ago there was a problem with NTP under Windows: the
> interpolated system time used internally by ntpd got messed up when
> some *other* program set the Windows MM timer to its highest
> resolution, e.g. to play some multimedia content. As a workaround I
> submitted a patch to ntpd which lets the NTP service itself set the MM
> timer resolution high when it starts, so there are no further effects
> on the interpolated time when some other application also does so. The
> workaround comes into effect if the -M parameter is given on the
> command line, and this is the default case for installations from the
> Meinberg setup program.

Back in the day, video (and later audio) drivers were a culprit in messing realtime up as well. The drivers performed a hardware bus lock of some kind during transfers, in addition to setting a very high priority, and this trumped anyone else setting high priority. I never knew the exact details, but it was one of many reasons why one does not use Windows for realtime, unless one's idea of realtime is *very* relaxed.

Although Windows has improved over the years, its latency still isn't predictable enough for realtime of any stringency. Reliability is also an issue: the rule is that if one cannot tolerate a forced reboot every few months or so, don't use Windows. I know that many Windows systems don't fall over nearly that often, but some do, often for no known reason, and that's enough. Only worst-case behavior matters.

A typical architecture in my world is to use Linux or an RTOS like VxWorks for the RT core, talking via UDP exchange over Ethernet to a Windows box housing the GUI. Humans don't notice the timing burps, and can reboot the GUI box as needed.

Joe Gwinn
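The RT-core/GUI split described above rests on UDP's fire-and-forget property: the realtime side never blocks on the GUI box, so the GUI can be down or rebooting without disturbing the RT core's timing. A minimal sketch on the loopback interface (the addresses and the JSON message format are invented for illustration):

```python
# Sketch of the RT-core <-> GUI split: the realtime side pushes status
# datagrams over UDP and never blocks waiting for the GUI. The message
# format and addresses here are invented for illustration only.
import socket
import json

# "GUI box" endpoint: bind an ephemeral loopback port for the demo;
# a real deployment would use a fixed address on a separate LAN.
gui = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
gui.bind(("127.0.0.1", 0))
gui.settimeout(1.0)
gui_addr = gui.getsockname()

# "RT core" side: sendto returns immediately whether or not anyone is
# listening, so a dead GUI box cannot stall the realtime loop.
rt = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rt.sendto(json.dumps({"seq": 1, "status": "ok"}).encode(), gui_addr)

data, addr = gui.recvfrom(2048)
msg = json.loads(data)
print(msg["status"])  # ok
```

The design choice is deliberate: a lost datagram costs one stale status display, whereas a blocking TCP connection to a rebooting GUI box could stall the RT side.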
Re: [ntp:questions] Can NTP sync within 1ms
In article <ljf8q0$mt2$3...@dont-email.me>, William Unruh <un...@invalid.ca> wrote:

> On 2014-04-26, Joe Gwinn <joegw...@comcast.net> wrote:
>> In article
>> <8188ba2b01fb534a99c03d79c62ce1d80982f...@uusnwe3a.global.utcmail.com>,
>> Montgomery, Peter BIS <peter.montgom...@fs.utc.com> wrote:
>>> I am new to NTP, but I have a quick question that I need to answer
>>> soon. I would like to know whether NTP can sync between a client and
>>> a server within 1 ms if the client and server are Linux applications
>>> on a simple local network (less than 10 nodes).
>>
>> Not reliably, for a million reasons. A better rule is 10 milliseconds,
>> and even that requires work and care.
>
> Well, since I have 8 machines that reliably sync from one GPS-PPS-
> driven machine (all using chrony) and they get time reliability of
> about 10 microseconds, your experience seems a bit different from mine.
> And how did you determine that you were only getting 10 ms? As I said,
> you cannot conclude that by looking at the offsets reported by ntpd.

I've managed 7 microseconds RMS in a sterile lab setup, but I would hardly tell people that they will achieve anything like that in an unconstrained and complex network. We know next to nothing about the OP's network et al., so I quoted a worst case. With added data, we may be able to tighten the estimate. Or widen it, if there turns out to be something unfortunate in the design.

>> The most reliable approach is an IRIG network.
>
> Or put a GPS PPS receiver onto each machine.

Yes, but IRIG is simpler and cheaper, and the OP asked for synchronized time, and did not mention any need for accurate time. For all we know, the OP has no access to the sky.

>> But you need to describe your problem and constraints before people
>> can give better answers.
>
> Agreed.

And this is the key.

Joe Gwinn
Re: [ntp:questions] Can NTP sync within 1ms
In article <8188ba2b01fb534a99c03d79c62ce1d80982f...@uusnwe3a.global.utcmail.com>, Montgomery, Peter BIS <peter.montgom...@fs.utc.com> wrote:

> I am new to NTP, but I have a quick question that I need to answer
> soon. I would like to know whether NTP can sync between a client and a
> server within 1 ms if the client and server are Linux applications on a
> simple local network (less than 10 nodes).

Not reliably, for a million reasons. A better rule is 10 milliseconds, and even that requires work and care. The most reliable approach is an IRIG network. But you need to describe your problem and constraints before people can give better answers.

Joe Gwinn
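Why LAN sync is limited comes down to NTP's on-wire calculation: from the four timestamps of one exchange (T1 client send, T2 server receive, T3 server send, T4 client receive) it computes offset and round-trip delay, and the offset is exact only if the outbound and return path delays are equal. A worked sketch of the standard formulas, with invented numbers:

```python
# Standard NTP on-wire calculation from the four timestamps of one
# client-server exchange. All values are synthetic, in seconds.
def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0  # estimated client clock error
    delay = (t4 - t1) - (t3 - t2)           # round-trip network delay
    return offset, delay

# Scenario: client clock runs 5 ms ahead of the server; the LAN adds a
# symmetric 400 us each way, and the server takes 100 us to reply.
t1 = 100.000000          # client send      (client clock, 5 ms fast)
t2 = 100.000400 - 0.005  # server receive   (server clock)
t3 = 100.000500 - 0.005  # server send      (server clock)
t4 = 100.000900          # client receive   (client clock)

offset, delay = ntp_offset_delay(t1, t2, t3, t4)
print(round(offset * 1e3, 3), "ms offset;", round(delay * 1e6, 1), "us delay")
# -5.0 ms offset; 800.0 us delay
```

Any *asymmetry* between the two one-way delays goes straight into the offset estimate at half its value, which is one of the "million reasons" above.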
Re: [ntp:questions] Asymmetric Delay and NTP
In article <lh9fe1$plc$2...@dont-email.me>, William Unruh <un...@invalid.ca> wrote:

> On 2014-03-30, Joe Gwinn <joegw...@comcast.net> wrote:
>> Magnus,
>> In article <53375aba.5070...@rubidium.dyndns.org>, Magnus Danielson
>> <mag...@rubidium.dyndns.org> wrote:
>>> On 24/03/14 14:38, Joe Gwinn wrote:
>>> [snip]
>>> [MD] The *one* thing you can figure out with more measurements is how
>>> non-zero-mean noise such as network traffic contributes to asymmetry.
>>> You can make pretty good approximations of that contribution.
>>> However, if there is an underlying asymmetry in static delay sources,
>>> it won't disclose itself with more measurements of the same set of
>>> measurements.
>>
>> [JG] Yes. One way to think of it is to describe the delay asymmetry as
>> a random process having a mean and a standard deviation. One can
>> estimate the standard deviation, but not the mean, from received
>> timestamp packets.
>
> IF there is another source to compare it with. If you only have one
> source of time, you cannot differentiate the fluctuations in the
> asymmetry from the fluctuations in the remote or local clock. With more
> than one source you can, but as you say, you cannot discover the mean.
> While for short-term fluctuations in the asymmetry you could probably
> assume that they are the path, not the clocks, for longer-term ones you
> cannot.

We generally assume that the clocks are good, which is to say that they don't jump around nearly as rapidly as the delay asymmetry. So one can tease clock offset from standard deviation of delay asymmetry by their spectra.

Joe Gwinn
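The "standard deviation but not the mean" point above can be made concrete with a Monte-Carlo sketch: give the two path directions different mean delays and the two-way offset estimate acquires a bias of half the mean asymmetry that no amount of averaging removes, while the spread of the estimates remains observable. All the delay numbers below are invented:

```python
# Monte-Carlo illustration: with asymmetric mean path delays, the
# two-way offset estimate is biased by half the mean asymmetry, and
# averaging more volleys does not remove the bias. Numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                                   # number of volleys
true_offset = 0.0                             # the clocks actually agree
d_ab = rng.exponential(200e-6, n) + 300e-6    # forward delay, mean ~500 us
d_ba = rng.exponential(200e-6, n) + 100e-6    # reverse delay, mean ~300 us

# Two-way formula reduces to (d_ab - d_ba)/2 when the clocks agree.
apparent = (d_ab - d_ba) / 2.0 - true_offset

print(f"mean apparent offset: {apparent.mean()*1e6:.0f} us  (pure bias)")
print(f"std of apparent offset: {apparent.std()*1e6:.0f} us (estimable)")
```

The mean converges to half the delay-mean difference (~100 us here), i.e. to the unobservable bias, while the standard deviation is exactly the quantity one *can* estimate from received timestamps.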
Re: [ntp:questions] IEEE 1588 (PTP) at the nanosecond level - followup
In article <lh1kir$vie$1...@dont-email.me>, William Unruh <un...@invalid.ca> wrote:

> On 2014-03-27, Joe Gwinn <joegw...@comcast.net> wrote:
>> Well, it worked, at least partially. One group backed off to the point
>> of depending on PTP for a few microseconds of sync error, versus a few
>> tens to a hundred nanoseconds. This should work.
>
> It sounded and sounds like they really had no idea what they needed and
> just pulled figures out of the air.

No, they knew very well what they needed for one proposed approach, which had many advantages. But it turned out that 1588 isn't there yet, so a different approach was chosen. What they didn't know was the details of time.

Joe Gwinn
[ntp:questions] IEEE 1588 (PTP) at the nanosecond level - followup
Well, it worked, at least partially. One group backed off to the point of depending on PTP for a few microseconds of sync error, versus a few tens to a hundred nanoseconds. This should work. As for the sub-microsecond sync error, maybe someday. For the other group, the hope probably still lives. I'll ask in a few weeks.

Anyway, thanks to everybody for all the help.

Joe Gwinn
Re: [ntp:questions] Asymmetric Delay and NTP
Magnus,

In article <532fa47b.7060...@rubidium.dyndns.org>, Magnus Danielson <mag...@rubidium.dyndns.org> wrote:

> Joe,
> On 23/03/14 23:20, Joe Gwinn wrote:
>> Magnus,
>> In article <532e45db.5000...@rubidium.dyndns.org>, Magnus Danielson
>> <mag...@rubidium.dyndns.org> wrote:
>>> Joe,
>>> On 21/03/14 17:04, Joe Gwinn wrote:
>>> [snip]
>> It is interesting. I've now read it reasonably closely. The basic
>> approach is to express each packet flight as a one-line equation (a
>> row) in a linear-system matrix equation, where the system matrix (the
>> A in the traditional y = Ax + b formulation, b being zero in the
>> absence of noise) is 4 columns wide by a variable number of rows long
>> (one row per packet flight), and to show that one column of A can
>> always be computed from the two other columns that describe who is
>> added to and subtracted from whom. In other words, these three columns
>> are linearly dependent on one another. The fourth column contains
>> measured data. This dependency means that A is always rank-deficient,
>> no matter how many packets (including infinitely many) and no matter
>> the order, so the linear system cannot be solved.
>
> It is just another formulation of the same equations I provided. For
> each added link, one unknown and one measure is added. For each added
> node, one unknown is added.

True, but there is more. Let's come back to that.

> As you do more measures, you will add information about variations in
> the delays and time differences between the nodes, but you will not
> disclose the basic offsets.

Also true.

>> The advantage of the matrix formulation is that one can then appeal to
>> the vast body of knowledge about matrices and linear systems. It's not
>> that one cannot prove it without the matrices; it's that the proof is
>> immediate with them - less work. And the issue was to prove that no
>> such system could work.
>
> As much as I like the matrix formulation, it isn't giving you much more
> in this case than a handy notation. The trouble is that beyond the
> properties of the noise, there is no information leakage about the
> static time errors and asymmetries. You end up having free variables.

Yes. You correctly noted the mathematical equivalence of the two approaches, and I agree. My point was that the matrix approach is less work to get to the desired proof, because by formulating it as a linear system in matrices one immediately inherits lots of properties and proofs.

> The problem is that the unknowns and the relationships build up at an
> uneven rate, and the observations only relate to two unknowns. The
> only trustworthy fact we get is the sum of the delays, but no real hint
> about its distribution. If you do more observations along the same
> paths, you can do some statistics, but you won't get an unbiased result
> without adding a priori knowledge one way or another. Formulate it as
> you wish, but as you add more observations, those will be reduced by
> their linear properties to existing equations plus noise. You need to
> add observations which do not fully reduce in order for your equation
> system to grow to such a size that you can solve it.

Yes, this is a good statement of the consequences of the proof.

> Show me how you achieve it, and I'll listen.

I don't understand the challenge. There is no dispute.

>> The "no matter the order" part comes from the property of linear
>> systems that permuting the rows and/or columns has no effect, so long
>> as one is self-consistent. So far, I have not come up with a
>> refutation of this approach. Nor have the automatic-control folk -
>> this proof was first published in 2004 into a community that knows
>> their linear systems, and one would think that someone would have
>> published a critique by now. The key mathematical issue is whether
>> there are message-exchange patterns that cannot be described by a
>> matrix of the assumed pattern. If not, the proof is complete. If yes,
>> more work is required. So far, I have not come up with a
>> counter-example. It takes only one to refute the proof.
>
> It is only by cheating that you can overcome the limits of the system.

>> Is GPS cheating? That's our usual answer, but GPS isn't always
>> available or possible.
>
> If you are trying to solve it within a network, it is. You can convert
> your additional GPS observations into a priori knowledge, and once you
> have done enough of those, you can solve it completely. The estimated
> variables had better stay static, though, or you have to start over
> again.

GPS is the usual answer, but isn't always available or useful. Recall that the original question was random asymmetry due to asymmetric background traffic in a PTP network. If the network is controllable, a lab experiment is to simply turn the background traffic off and see how much the clocks change with respect to one another. But this tells one how much trouble one is in; it does not solve the problem. The solution will be found in better choice
Re: [ntp:questions] Asymmetric Delay and NTP
Magnus,

In article <532e45db.5000...@rubidium.dyndns.org>, Magnus Danielson <mag...@rubidium.dyndns.org> wrote:

> Joe,
> On 21/03/14 17:04, Joe Gwinn wrote:
> [snip]
>> Will see if I can find Dave's reference.
>>
>> I hit pay dirt yesterday, while searching for data on outliers in 1588
>> systems. Dave's reference may well be in the references of the
>> following article:
>>
>>   Fundamental Limits on Synchronizing Clocks Over Networks, Nikolaos
>>   M. Freris, Scott R. Graham, and P. R. Kumar, IEEE Trans. on
>>   Automatic Control, v.56, n.6, June 2011, pages 1352-1364.
>
> Sounds like an interesting article. Always interesting to see different
> peoples' views of fundamental limits.

It is interesting. I've now read it reasonably closely. The basic approach is to express each packet flight as a one-line equation (a row) in a linear-system matrix equation, where the system matrix (the A in the traditional y = Ax + b formulation, b being zero in the absence of noise) is 4 columns wide by a variable number of rows long (one row per packet flight), and to show that one column of A can always be computed from the two other columns that describe who is added to and subtracted from whom. In other words, these three columns are linearly dependent on one another. The fourth column contains measured data. This dependency means that A is always rank-deficient, no matter how many packets (including infinitely many) and no matter the order, so the linear system cannot be solved.

> It is just another formulation of the same equations I provided. For
> each added link, one unknown and one measure is added. For each added
> node, one unknown is added.

True, but there is more.

> As you do more measures, you will add information about variations in
> the delays and time differences between the nodes, but you will not
> disclose the basic offsets.

Also true. The advantage of the matrix formulation is that one can then appeal to the vast body of knowledge about matrices and linear systems. It's not that one cannot prove it without the matrices; it's that the proof is immediate with them - less work. And the issue was to prove that no such system could work.

> To achieve that, you would need to bring correct time to one node and
> then observe that offset. Once measured, the system increases. As you
> do this for all nodes, calibration can be performed and the system
> fully resolved.

The "no matter the order" part comes from the property of linear systems that permuting the rows and/or columns has no effect, so long as one is self-consistent. So far, I have not come up with a refutation of this approach. Nor have the automatic-control folk - this proof was first published in 2004 into a community that knows their linear systems, and one would think that someone would have published a critique by now. The key mathematical issue is whether there are message-exchange patterns that cannot be described by a matrix of the assumed pattern. If not, the proof is complete. If yes, more work is required. So far, I have not come up with a counter-example. It takes only one to refute the proof.

> It is only by cheating that you can overcome the limits of the system.

Is GPS cheating? That's our usual answer, but GPS isn't always available or possible.

> Yes.

In closed networks, the biggest cause of asymmetry I've found is interference between NTP traffic and heavy background traffic in the operating-system kernels of the hosts running application code. Another big hitter was background backups via NFS (Network File System). The network switches were not the problem. What greatly helps is to have one LAN for the heavy applications traffic and a different LAN for NTP and the like, forcing different paths through the OS kernel.

> If you can get your NIC to hardware-timestamp your NTP, you will clean
> things up a lot.

True, but these were never much available, and the rise of PTP may obviate the need.

> Today PTP has made it available also for NTP. Hardware-timestamped NTP
> also exists and is commercially available.

Although, if one goes to the trouble to make a NIC PTP-capable, it wouldn't be so hard to have it recognize and timestamp passing NTP packets as well. The hard part would be figuring out how to transfer this timestamp data from collection in the NIC to point of use in the NTP daemon, and standardizing the answer.

> The Linux kernel has such support. NTPD already has some support for
> such NICs included.

All true. But I'm reluctant to recommend a solution that lacks a common standard and/or has fewer than three credible vendors supporting that standard. I have no doubt that these things will come to pass, but we are not there just yet.

Joe Gwinn
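The rank-deficiency argument discussed above can be checked numerically. The sketch below is a simplified two-node rendering of the construction (not the exact 4-column matrix of the Freris/Graham/Kumar paper): write each one-way packet flight as a row over the unknowns, and the coefficient matrix never reaches full rank no matter how many flights are stacked.

```python
# Two-node version of the rank argument. Unknowns x = [theta, d_AB, d_BA]:
# an A->B flight observes  theta + d_AB  (row [ 1, 1, 0]);
# a  B->A flight observes -theta + d_BA  (row [-1, 0, 1]).
# Stacking any number of such rows never yields rank 3, so the offset
# and the two one-way delays cannot be separated, only their sums.
import numpy as np

rows = []
for _ in range(1000):              # 1000 volleys -> 2000 equations
    rows.append([1.0, 1.0, 0.0])   # A -> B flight
    rows.append([-1.0, 0.0, 1.0])  # B -> A flight
A = np.array(rows)

print(np.linalg.matrix_rank(A))    # 2, with 3 unknowns: rank-deficient
```

Adding more volleys only repeats rows already in the span, which is the matrix restatement of Magnus's point that new observations "reduce to existing equations plus noise."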
Re: [ntp:questions] IEEE 1588 (PTP) at the nanosecond level?
Magnus,

In article <532e42c8.6080...@rubidium.dyndns.org>, Magnus Danielson <mag...@rubidium.dyndns.org> wrote:

> Joe,
> On 21/03/14 16:17, Joe Gwinn wrote:
>> Magnus,
>>
>> Thus, another fairly severe environment. I have a personal war story
>> from 1992: At an Air Traffic Control center in Canada, one 19-inch
>> cabinet had the green (safety ground) and white (power neutral) cables
>> transposed. This caused 2.3 Vrms at 180 Hz to appear between the
>> VMEbus ground and the cabinet shell, with enough oomph to cause a
>> small spark when the oscilloscope probe's grounding clip was connected
>> to that VMEbus ground, thus causing the system (and my heart) to
>> crash. If left connected, the ground clip became warm. And how can
>> ground generate a spark, even a small one? Fixing the grounds dropped
>> the offset to around ten millivolts. The 180 Hz arose because the
>> power supplies were single-phase capacitor-input, driven from the legs
>> of three-phase prime power.
>
> Power neutral isn't really neutral when it takes a lot of beating.
> Similarly, a grounding wire isn't doing much grounding as frequency
> goes up.

Yes, but the issue was that by transposing the two grounds, they forced the return current to take the long way around. The fact that the offset was at 180 Hz didn't help, for sure, but I don't know that that's high enough for frequency to be a major cause. Plain old resistance in the path from the antenna face back to the main power panel (where the two grounds are intentionally connected together) probably suffices.

>> That fails economically - might as well stick to IRIG.
>
> Indeed. Doing the 1 us level might be possible; going lower than that
> will cause you more and more grey hairs one way or another.

>> Well, now, this could be an advantage -- my hair is already gray, and
>> more could be better.
>
> Well, you may have younger colleagues who fail to have this advantage.
> I knew you would make the comment. :)

Not to worry -- time will make them gray.

>> There is a truism in the standards world that it takes three major
>> releases (versions) of a standard for it to achieve maturity. PTP is
>> at version 2, so one more to go.
>
> I'd say it depends on for what application. The trouble is when the
> assumed applications increase at a quicker rate than the standard
> adapts to handle them.

>> It does, but having the market grow faster than the standards cycle
>> can be the mark of success.
>
> To some degree. Being perceived to be a solution isn't the same as it
> being a solution.

Yes, but it's a better problem to have than the converse.

>> By the way, development of the third revision of 1588 started in 2013.
>> I joined what purported to be their reflector, but now that you
>> mention it I haven't gotten any traffic -- something must be wrong. I
>> will need to enquire.
>
> They formally had their first session at the ISPCS in Lemgo.

OK. I'm sure that my not getting any traffic is a local problem. I'll query the WG chairman.

Joe Gwinn
Re: [ntp:questions] IEEE 1588 (PTP) at the nanosecond level?
Magnus,

In article <532b5621.1040...@rubidium.dyndns.org>, Magnus Danielson <mag...@rubidium.dyndns.org> wrote:

> Hi Joe,
> On 20/03/14 01:53, Joe Gwinn wrote:
>> In article <5328ad2...@rubidium.dyndns.org>, Magnus Danielson
>> <mag...@rubidium.dyndns.org> wrote:
>>> On 18/03/14 01:24, Joe Gwinn wrote:
>>>> I've used IRIG-B004 DCLS before, for cables two meters long within a
>>>> cabinet. Worked well. How well do they handle 100-meter cables, in
>>>> areas where the concept of ground can be elusive?
>>>
>>> The rising edge of the 100 Hz is your time reference; the falling
>>> edges are your information. Proper signal conditioning and cabling
>>> should not be a problem given proper drivers and receivers. IRIG-B004
>>> DCLS also travels nicely over optical connections, and grounding
>>> issues will be less of a problem. Known to work well in power
>>> substations, so there can be off-the-shelf products if you look for
>>> them.
>>
>> That's a pretty severe environment.
>
> I thought it would get your attention.

It did at that. I should give more context: On ships at full steam, there can be a steady seven volts rms or so at power frequency (and harmonics) between bow and stern, which will cause large currents to flow in the shield. This is well below the frequency at which inside and outside shield currents become decoupled due to skin effect, so the full voltage drop in the shield may be seen on the center conductor. We use optical links a lot, and triax some. One can also make RF boxes largely immune with a DC-block capacitor in series with the center conductor.

Thus, another fairly severe environment. I have a personal war story from 1992: At an Air Traffic Control center in Canada, one 19-inch cabinet had the green (safety ground) and white (power neutral) cables transposed. This caused 2.3 Vrms at 180 Hz to appear between the VMEbus ground and the cabinet shell, with enough oomph to cause a small spark when the oscilloscope probe's grounding clip was connected to that VMEbus ground, thus causing the system (and my heart) to crash. If left connected, the ground clip became warm. And how can ground generate a spark, even a small one? Fixing the grounds dropped the offset to around ten millivolts. The 180 Hz arose because the power supplies were single-phase capacitor-input, driven from the legs of three-phase prime power.

> Maybe, depends on your needs. Consider doing a separate network for
> PTP. That approach has been used in systems where you want to make sure
> it works.

That fails economically - might as well stick to IRIG.

> Indeed. Doing the 1 us level might be possible; going lower than that
> will cause you more and more grey hairs one way or another.

Well, now, this could be an advantage -- my hair is already gray, and more could be better.

>> This is my fear and instinct. But people read the adverts and will
>> continue to ask. And some customers will demand. So, I'm digging
>> deeper. Are there any good places to start?
>
> You asked here, it's not the worst place to start. :)

To be sure.

There is a truism in the standards world that it takes three major releases (versions) of a standard for it to achieve maturity. PTP is at version 2, so one more to go.

> I'd say it depends on for what application. The trouble is when the
> assumed applications increase at a quicker rate than the standard
> adapts to handle them.

It does, but having the market grow faster than the standards cycle can be the mark of success.

By the way, development of the third revision of 1588 started in 2013. I joined what purported to be their reflector, but now that you mention it I haven't gotten any traffic -- something must be wrong. I will need to enquire.

Joe Gwinn
Re: [ntp:questions] Asymmetric Delay and NTP
Magnus, In article 532b5ab9.4070...@rubidium.dyndns.org, Magnus Danielson mag...@rubidium.dyndns.org wrote: Joe, On 19/03/14 11:55, Joe Gwinn wrote: In article 5328aaa6.70...@rubidium.dyndns.org, Magnus Danielson mag...@rubidium.dyndns.org wrote: On 18/03/14 01:36, Joe Gwinn wrote: In article 5327757e.5040...@rubidium.dyndns.org, Magnus Danielson mag...@rubidium.dyndns.org wrote: Is that formal enough for you? It may be. This I did know, and would seem to suffice, but I recall a triumphant comment from Dr. Mills in one of his documentation pieces. Which I cannot recall well enough to find. It may be the above analysis that was being referred to, or something else. I can't recall. The above I came up with myself some 10 years ago or so. When I awoke the day after writing the above, I saw two problems with the above analysis. First is that with added message-exchange volleys, one does not get added variables and equations, one instead gets repeats of the equations one already has. If there is no noise, the added volleys convey no new information. If there is noise, multiple volleys allows one to average random noise out. True. What does happen over time is: 1) Clocks drift away from each other due to systematics and noises 2) The path delay shifts, sometimes because of physical distance shifts, but also due to shift of day and season. These require continuous tracking to handle Yes. Second is that what is proven is that a specific message-exchange protocol cannot work, not that there is no possible protocol that can work. The above analysis only assumes a way to measure some form of signal. The same equations is valid for TWTFTT as for NTP, PTP or whatever uses the two-way time-transfer. What will differ is they way they convey the information and the noise-sources they see. 
It's certainly true that the two-way time transfer (including bouncing-packet) approach works well, and is widely used, but all are sensitive to asymmetry, and the overarching question was if this limitation is fundamental and thus unavoidable. As discussed later, it appears that the limitation is fundamental. Will see if I can find Dave's reference. I hit pay dirt yesterday, while searching for data on outliers in 1588 systems. Dave's reference may well be in the references of the following article. Fundamental Limits on Synchronizing Clocks Over Networks, Nikolaos M. Freris, Scott R. Graham, and P. R. Kumar, IEEE Trans on Automatic Control, v.56, n.6, June 2011, pages 1352-1364. Sounds like an interesting article. Always interesting to see different peoples' view of fundamental limits. It is interesting. I've now read it reasonably closely. The basic approach is to express each packet flight in a one-line equation (a row) in a linear-system matrix equation, where the system matrix (the A in the traditional y=Ax+b formulation, where b is zero in the absence of noise), where A is 4 columns wide by a variable number of rows long (one row to a packet flight), and show that one column of A can always be computed from the two other columns that describe who is added and subtracted from who. In other words, these three columns are linearly dependent on one another. The forth column contains measured data. This dependency means that A is always rank-deficient no matter how many packets (including infinity) and no matter the order, so the linear system cannot be solved. The no matter the order part comes from the property of linear systems that permuting the rows and/or columns has no effect, so long as one is self-consistent. So far, I have not come up with a refutation of this approach. 
Nor have the automatic control folk - this proof was first published in 2004 into a community that knows their linear systems, and one would think that someone would have published a critique by now. The key mathematical issue is whether there are message exchange patterns that cannot be described by a matrix of the assumed pattern. If not, the proof is complete. If yes, more work is required. So far, I have not come up with a counter-example. It takes only one to refute the proof. I also took the next step, which is to treat d_AB and d_BA as random variables with differing means and variances (due to interference from asymmetrical background traffic), and trace this to the effect on clock sync. It isn't pretty on anything like a nanosecond scale. The required level of isolation between PTP traffic and background traffic is quite stringent. It's even worse when you get into packet networks, as the delays contain noise sources of variable mean and variable deviation, besides being asymmetrical. NTP combats some of that, but doesn't get deep enough due to too low packet rate. PTP may do it, but it's not in the standard so it will be proprietary algorithms. The PTP standard is a protocol framework. ITU have
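The rank-deficiency argument can be sanity-checked numerically. A minimal sketch of my own, using the simpler three-unknown form from the asymmetry thread (offset plus two one-way delays) rather than the paper's exact four-column matrix: adding volleys only repeats existing rows, so the columns stay linearly dependent and the system never becomes solvable.

```python
# Unknowns x = [theta, d_AB, d_BA], where theta = T_B - T_A.
# An A->B flight measures t_AB =  theta + d_AB  -> row [ 1, 1, 0]
# A  B->A flight measures t_BA = -theta + d_BA  -> row [-1, 0, 1]
rows = []
for _ in range(10_000):            # adding volleys only repeats rows
    rows += [[1, 1, 0], [-1, 0, 1]]

# In every row, column 0 equals column 1 minus column 2, so the three
# columns are linearly dependent: the matrix can never reach full rank,
# and more packets never make the system solvable.
dependent = all(r[0] == r[1] - r[2] for r in rows)
print(dependent)  # True
```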
Re: [ntp:questions] IEEE 1588 (PTP) at the nanosecond level?
In article yvqdnqukc7qjb7fonz2dnuvz_vudn...@megapath.net, Hal Murray hal-use...@ip-64-139-1-69.sjc.megapath.net wrote: In article 190320142025178186%joegw...@comcast.net, Joe Gwinn joegw...@comcast.net writes: The original issue was to be able to drop IRIG support in honor of PTP via the ethernet infrastructure we always need. How stable is your local clock? These systems are tied to GPS with Rubidium local oscillators. The end nodes have crystal oscillators, some temperature compensated, and very few being oven stabilized. What sort of accuracy do you need? Varies. In some systems, it's a hard limit -- absolute error cannot exceed one microsecond. In others, it's that we are trying to synchronize independent subsystems to a fraction of a microsecond with great reliability, which leads us to focus on the probability distribution of errors - the distribution tails matter. How much traffic will be on the ethernet you want to use? Varies as well. The timing-related traffic is negligible. The application traffic is significant, is a mix of big (1.5 Kbyte) and small (64 byte) packets, but still a gigabit enet link is loafing. The big issue is as always timing-related packets being buffeted by the application traffic, because we care about outliers, not just means and medians. In many cases, this is solved by having a dedicated LAN for timing-related traffic, but this isn't always possible. Joe Gwinn ___ questions mailing list questions@lists.ntp.org http://lists.ntp.org/listinfo/questions
Re: [ntp:questions] Asymmetric Delay and NTP
In article lgcoq8$b4j$1...@dont-email.me, E-Mail Sent to this address will be added to the BlackLists Null@BlackList.Anitech-Systems.invalid wrote: Joe Gwinn wrote: What greatly helps is to have a LAN for the heavy applications traffic, and a different LAN for NTP and the like, forcing different paths in the OS kernel to be taken. If you see a difference, I would have thought it more likely to be different usage of the IP stacks than anything significantly different happening in the kernel? Yes, the protocol stacks are where in the kernel the problem most likely happens. If NTP is important enough, create a NTP VLAN set to a higher priority than all other traffic, to isolate NTP traffic from the effects of other network traffic? I'm not so sure that the kernel implements such priorities all the way through. We opted for two physically distinct LANs, each with their own NIC card, plugged into different processors of the enterprise server class machine that needed the time. Joe Gwinn
Re: [ntp:questions] IEEE 1588 (PTP) at the nanosecond level?
In article i3glva-5vm@ubuntu-server-1.py.meinberg.de, Martin Burnicki martin.burni...@meinberg.de wrote: Joe Gwinn wrote: I've used IRIG-B004 DCLS before, for cables two meters long within a cabinet. Worked well. How well do they handle 100 meter cables, in areas where the concept of ground can be elusive? You could use fiber optics to transfer an IRIG DCLS signal. Now that you mention it, I do recall that option. However, if you want highest accuracy you need to take care how much delay is inserted by the transceivers, and the length of the connection. PTP has the advantage that constant delays can be measured and compensated automatically. In any case it's a good thing if you can measure the accuracy, e.g. compare a 1 PPS slope generated by a time client to a 1 PPS slope generated by the device providing the time. This helps to find out if the time transfer suffers from uncompensated delays (IRIG) or asymmetries (PTP). One can measure the delay and command the IRIG receiver card to compensate. The original issue was to be able to drop IRIG support in honor of PTP via the ethernet infrastructure we always need. Joe Gwinn
Re: [ntp:questions] IEEE 1588 (PTP) at the nanosecond level?
In article 5328ad2...@rubidium.dyndns.org, Magnus Danielson mag...@rubidium.dyndns.org wrote: On 18/03/14 01:24, Joe Gwinn wrote: In article 532778bf.50...@rubidium.dyndns.org, Magnus Danielson mag...@rubidium.dyndns.org wrote: On 17/03/14 13:50, Joe Gwinn wrote: In article lg61s4$ong$3...@dont-email.me, William Unruh un...@invalid.ca wrote: On 2014-03-16, Joe Gwinn joegw...@comcast.net wrote: I keep seeing claims that Precision Time Protocol (IEEE 1588-2008) can achieve sub-microsecond to nanosecond-level synchronization over ethernet (with the right hardware to be sure). I've been reading IEEE 1588-2008, and they do talk of one nanosecond, but that's the standard, and an aspirational paper is not practical hardware running in a realistic system. 1ns is silly. However 10s of ns are possible. It is achieved by Radio Astronomy networks with special hardware (but usually post facto) IEEE 1588-2008 does say one nanosecond, in section 1.1 Scope. I interpret it as aspirational - one generally makes a hardware standard somewhat bigger and better than current practice, so the standard won't be too soon outgrown. IEEE standards time out in five years, unless revised or reaffirmed. I've seen some papers reporting tens to hundreds of nanoseconds average sync error, but for datasets that might have 100 points, and even then there are many outliers. I'm getting PTP questions on this from hopeful system designers. These systems already run NTP, and achieve millisecond level sync errors. Uh, perhaps show them the achievement of microsecond level sync errors? That is already a factor of 1000 better than they achieve. I forgot to mention a key point. We also have IRIG hardware, which does provide microsecond level sync errors. The hope is to eliminate the IRIG hardware by using the ethernet network that we must have anyway. IRIG-B004 DCLS can provide really good performance if you let it.
To get *good* PTP performance, comparable to your IRIG-B, prepare to do a lot of testing to find the right Ethernet switches, and then replace them all. Redoing the IRIG properly starts to look cheap and straightforward. I've used IRIG-B004 DCLS before, for cables two meters long within a cabinet. Worked well. How well do they handle 100 meter cables, in areas where the concept of ground can be elusive? The rising edge of the 100 Hz is your time reference, the falling edges are your information. Proper signal conditioning and cabling should not be a problem given proper drivers and receivers. IRIG-B004 DCLS also travels nicely over optical connections, and grounding issues will be less of a problem. Known to work well in power sub-stations, so there can be off the shelf products if you look for them. That's a pretty severe environment. I should give more context: On ships at full steam, there can be a steady seven volts rms or so at power frequency (and harmonics) between bow and stern, which will cause large currents to flow in the shield. This is well below the frequency at which inside and outside shield currents become decoupled due to skin effect, so the full voltage drop in the shield may be seen on the center conductor. We use optical links a lot, and triax some. One can also make RF boxes largely immune with a DC-block capacitor in series with the center conductor. This is for proposed new systems, so there are no switches to replace. In response to questions from hopeful engineers, I had already made the point about the need for serious testing, with asymmetrical loads a factor larger than the real system will sustain. I'm not sure they are convinced of the need. Anyway, the hope is that PTP will be simpler and cheaper than having multiple IRIG systems, assuming that one starts from scratch. Maybe, depends on your needs. Consider doing a separate network for PTP. That approach has been used in systems where you want to make sure it works.
That fails economically - might as well stick to IRIG. One of the key problems is getting the packets onto the network (delays within the ethernet card); special hardware on the cards which timestamps the sending and receiving of packets on both ends could do better. But it also depends on the routers and switches between the two systems. Yes. My question is basically a query about the current state of the art [in PTP]. The state of the art is not yet standard and not yet off the shelf products, if you want to call it PTP. This is my fear and instinct. But people read the adverts and will continue to ask. And some customers will demand. So, I'm digging deeper. Are there any good places to start? You asked here, it's not the worst place to start. :) To be sure. There is a truism in the standards world, that it takes three major releases (versions) of a standard for it to achieve maturity
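On the "rising edge is your time reference, falling edges are your information" point: IRIG-B DCLS is pulse-width coded at 100 pulses per second, with nominal high times of 2 ms for a binary 0, 5 ms for a binary 1, and 8 ms for a position marker. A minimal classifier sketch (the midpoint thresholds are my own illustrative choice, not taken from the IRIG 200 standard):

```python
def classify_pulse(width_ms: float) -> str:
    """Classify one IRIG-B DCLS pulse by its measured high time.

    Nominal widths: 2 ms = '0', 5 ms = '1', 8 ms = position marker 'P'.
    The 3.5/6.5 ms thresholds are illustrative midpoints, not standard values.
    """
    if width_ms < 3.5:
        return "0"
    if width_ms < 6.5:
        return "1"
    return "P"

# Example: a short run of measured pulse widths in milliseconds
print([classify_pulse(w) for w in (2.1, 5.0, 1.9, 7.9)])  # ['0', '1', '0', 'P']
```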
Re: [ntp:questions] IEEE 1588 (PTP) at the nanosecond level?
In article lg61s4$ong$3...@dont-email.me, William Unruh un...@invalid.ca wrote: On 2014-03-16, Joe Gwinn joegw...@comcast.net wrote: I keep seeing claims that Precision Time Protocol (IEEE 1588-2008) can achieve sub-microsecond to nanosecond-level synchronization over ethernet (with the right hardware to be sure). I've been reading IEEE 1588-2008, and they do talk of one nanosecond, but that's the standard, and an aspirational paper is not practical hardware running in a realistic system. 1ns is silly. However 10s of ns are possible. It is achieved by Radio Astronomy networks with special hardware (but usually post facto) IEEE 1588-2008 does say one nanosecond, in section 1.1 Scope. I interpret it as aspirational - one generally makes a hardware standard somewhat bigger and better than current practice, so the standard won't be too soon outgrown. IEEE standards time out in five years, unless revised or reaffirmed. I've seen some papers reporting tens to hundreds of nanoseconds average sync error, but for datasets that might have 100 points, and even then there are many outliers. I'm getting PTP questions on this from hopeful system designers. These systems already run NTP, and achieve millisecond level sync errors. Uh, perhaps show them the achievement of microsecond level sync errors? That is already a factor of 1000 better than they achieve. I forgot to mention a key point. We also have IRIG hardware, which does provide microsecond level sync errors. The hope is to eliminate the IRIG hardware by using the ethernet network that we must have anyway. One of the key problems is getting the packets onto the network (delays within the ethernet card); special hardware on the cards which timestamps the sending and receiving of packets on both ends could do better. But it also depends on the routers and switches between the two systems. Yes. My question is basically a query about the current state of the art.
Joe Gwinn
Re: [ntp:questions] IEEE 1588 (PTP) at the nanosecond level?
In article cakyj6kanol-pbm8d+kfcoceya6yi0chwphwgalxab8gowcg...@mail.gmail.com, Paul tik-...@bodosom.net wrote: On Mon, Mar 17, 2014 at 8:50 AM, Joe Gwinn joegw...@comcast.net wrote: Yes. My question is basically a query about the current state of the art. Some NTP offsets (output may look funny if formatted) clock1 looking at clock2 and clock3 (a Raspberry Pi). This suggests it can be as good as your IRIG system.
Gig ethernet to Gig ethernet, ~22 days:
N Min Max Median Avg Stddev
244059 -0.098741335 0.019727433 9.586e-06 4.814598e-06 0.00038621792
Gig to Fast, ~10 days:
N Min Max Median Avg Stddev
112254 -0.000516264 0.000453913 1.127e-06 6.8736914e-06 4.2248166e-05
People are also lusting after sub-microsecond sync. It isn't enough to have good averages; the excursions also matter - the 19.7 millisecond max would be a killer. I've gotten NTPv3 to sync two slow Solaris boxes over 10 Mbit thicknet to about 7 microseconds rms, but it was in an isolated lab setup. The slightest random load in the Solaris boxes would have gotten us back to millisecond scale. But my question is about the state of the art in PTP systems, not systems in general. Joe Gwinn
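Summary statistics like the ones quoted can be pulled straight from ntpd's loopstats files. A sketch, assuming the standard loopstats field layout (MJD, seconds past midnight, clock offset in seconds, frequency in ppm, jitter, wander, poll) - worth checking against your ntpd version's monitoring documentation; the sample lines below are invented:

```python
import statistics

def offset_stats(loopstats_lines):
    """Summarize clock offsets from ntpd loopstats records.

    Assumes the usual loopstats layout, where the third whitespace-
    separated field is the clock offset in seconds.
    """
    offsets = [float(line.split()[2]) for line in loopstats_lines if line.strip()]
    return {
        "n": len(offsets),
        "min": min(offsets),
        "max": max(offsets),
        "median": statistics.median(offsets),
        "avg": statistics.mean(offsets),
        "stddev": statistics.stdev(offsets),
    }

# Invented sample records, for illustration only
sample = [
    "56733 3600.123  0.000012 -1.234 0.000004 0.001 6",
    "56733 3664.123 -0.000003 -1.235 0.000004 0.001 6",
    "56733 3728.123  0.000007 -1.233 0.000005 0.001 6",
]
print(offset_stats(sample))
```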
Re: [ntp:questions] IEEE 1588 (PTP) at the nanosecond level?
In article 532778bf.50...@rubidium.dyndns.org, Magnus Danielson mag...@rubidium.dyndns.org wrote: On 17/03/14 13:50, Joe Gwinn wrote: In article lg61s4$ong$3...@dont-email.me, William Unruh un...@invalid.ca wrote: On 2014-03-16, Joe Gwinn joegw...@comcast.net wrote: I keep seeing claims that Precision Time Protocol (IEEE 1588-2008) can achieve sub-microsecond to nanosecond-level synchronization over ethernet (with the right hardware to be sure). I've been reading IEEE 1588-2008, and they do talk of one nanosecond, but that's the standard, and an aspirational paper is not practical hardware running in a realistic system. 1ns is silly. However 10s of ns are possible. It is achieved by Radio Astronomy networks with special hardware (but usually post facto) IEEE 1588-2008 does say one nanosecond, in section 1.1 Scope. I interpret it as aspirational - one generally makes a hardware standard somewhat bigger and better than current practice, so the standard won't be too soon outgrown. IEEE standards time out in five years, unless revised or reaffirmed. I've seen some papers reporting tens to hundreds of nanoseconds average sync error, but for datasets that might have 100 points, and even then there are many outliers. I'm getting PTP questions on this from hopeful system designers. These systems already run NTP, and achieve millisecond level sync errors. Uh, perhaps show them the achievement of microsecond level sync errors? That is already a factor of 1000 better than they achieve. I forgot to mention a key point. We also have IRIG hardware, which does provide microsecond level sync errors. The hope is to eliminate the IRIG hardware by using the ethernet network that we must have anyway. IRIG-B004 DCLS can provide really good performance if you let it. To get *good* PTP performance, comparable to your IRIG-B, prepare to do a lot of testing to find the right Ethernet switches, and then replace them all. Redoing the IRIG properly starts to look cheap and straightforward.
I've used IRIG-B004 DCLS before, for cables two meters long within a cabinet. Worked well. How well do they handle 100 meter cables, in areas where the concept of ground can be elusive? This is for proposed new systems, so there are no switches to replace. In response to questions from hopeful engineers, I had already made the point about the need for serious testing, with asymmetrical loads a factor larger than the real system will sustain. I'm not sure they are convinced of the need. Anyway, the hope is that PTP will be simpler and cheaper than having multiple IRIG systems, assuming that one starts from scratch. One of the key problems is getting the packets onto the network (delays within the ethernet card); special hardware on the cards which timestamps the sending and receiving of packets on both ends could do better. But it also depends on the routers and switches between the two systems. Yes. My question is basically a query about the current state of the art [in PTP]. The state of the art is not yet standard and not yet off the shelf products, if you want to call it PTP. This is my fear and instinct. But people read the adverts and will continue to ask. And some customers will demand. So, I'm digging deeper. Are there any good places to start? Thanks, Joe Gwinn
Re: [ntp:questions] Asymmetric Delay and NTP
In article 5327757e.5040...@rubidium.dyndns.org, Magnus Danielson mag...@rubidium.dyndns.org wrote: Joe, On 16/03/14 23:16, Joe Gwinn wrote: I recall seeing something from Dr. Mills saying that a formal proof had been found showing that no packet-exchange protocol (like NTP) could tell delay asymmetry from clock offset. Can anyone provide a reference to this proof? It's relatively simple. You have two nodes (A and B) and a link in each direction (A-B and B-A). You have three unknowns, the time-difference between the nodes (T_B - T_A), the delay from node A to B (d_AB) and the delay from node B to A (d_BA). You make two observations of the pseudo-range from node A to node B (t_AB) and from node B to node A (t_BA). These are made by the source announcing its time and the receiver time-stamping in its own time when it occurs.

t_AB = T_B - T_A + d_AB
t_BA = T_A - T_B + d_BA

We thus have three unknowns and two equations. You can't solve that. For each link you add, you add one observation and one unknown. For each node and two links you add, you add three unknowns and two observations. You can't win this game. There are things you can do. Let's take our observations and add them, then we get

RTT = t_AB + t_BA = (T_B - T_A) + d_AB + (T_A - T_B) + d_BA = d_AB + d_BA

Now, that is useful. If we diff them we get

ΔT = t_AB - t_BA = (T_B - T_A) + d_AB - (T_A - T_B) - d_BA = 2(T_B - T_A) + d_AB - d_BA

TE = ΔT / 2 = T_B - T_A + (d_AB - d_BA)/2

So, diffing them gives the time-difference, plus half the asymmetric delay. If we assume that the delay is symmetric, then we can use these measures to compute the time-difference between the clocks, and if there is an asymmetry, it *will* show up as a bias in the slave clock. The way to come around this is to either avoid asymmetries like the plague, find a means to estimate them (PTPv2) or calibrate them away. Is that formal enough for you? It may be. This I did know, and would seem to suffice, but I recall a triumphant comment from Dr. Mills in one of his documentation pieces. Which I cannot recall well enough to find. It may be the above analysis that was being referred to, or something else. I also took the next step, which is to treat d_AB and d_BA as random variables with differing means and variances (due to interference from asymmetrical background traffic), and trace this to the effect on clock sync. It isn't pretty on anything like a nanosecond scale. The required level of isolation between PTP traffic and background traffic is quite stringent. Joe Gwinn
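Magnus's algebra is easy to sanity-check numerically. A small sketch (all values invented) showing that the round trip recovers d_AB + d_BA exactly while the offset estimate absorbs half of any asymmetry:

```python
# Ground truth (invented values, in seconds)
offset = 0.004               # theta = T_B - T_A
d_ab, d_ba = 0.010, 0.016    # asymmetric one-way path delays

# The two pseudo-range observations from the text
t_ab = offset + d_ab         # t_AB = T_B - T_A + d_AB
t_ba = -offset + d_ba        # t_BA = T_A - T_B + d_BA

rtt = t_ab + t_ba                  # = d_AB + d_BA; the offset cancels
est_offset = (t_ab - t_ba) / 2     # = offset + (d_AB - d_BA)/2

print(round(rtt, 6))                    # 0.026
print(round(est_offset - offset, 6))    # -0.003 = (d_AB - d_BA)/2 bias
```

The 3 ms bias here is exactly half the 6 ms delay asymmetry, which is the "bias in the slave clock" the post describes.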
[ntp:questions] Asymmetric Delay and NTP
I recall seeing something from Dr. Mills saying that a formal proof had been found showing that no packet-exchange protocol (like NTP) could tell delay asymmetry from clock offset. Can anyone provide a reference to this proof? Thanks, Joe Gwinn
[ntp:questions] IEEE 1588 (PTP) at the nanosecond level?
I keep seeing claims that Precision Time Protocol (IEEE 1588-2008) can achieve sub-microsecond to nanosecond-level synchronization over ethernet (with the right hardware to be sure). I've been reading IEEE 1588-2008, and they do talk of one nanosecond, but that's the standard, and an aspirational paper is not practical hardware running in a realistic system. I've seen some papers reporting tens to hundreds of nanoseconds average sync error, but for datasets that might have 100 points, and even then there are many outliers. I'm getting PTP questions on this from hopeful system designers. These systems already run NTP, and achieve millisecond level sync errors. Anyway, how much truth is there to all this? Are there any papers I should read? Thanks, Joe Gwinn
Re: [ntp:questions] NTPD silently not tracking
In article kvre7q$j89$1...@dont-email.me, E-Mail Sent to this address will be added to the BlackLists Null@BlackList.Anitech-Systems.invalid wrote: Magnus Danielson wrote: BlackLists wrote: What ntpdc commands did you issue, and what results did you get? Unfortunately no. I got the call after the fact, but lack of remote login due to time error would prohibit me from doing anything anyway. The server needed to be operational rather than optimized for NTP debugging. What ntpdc commands did the other people issue? Kinda hard to try and duplicate / troubleshoot with no real info, except it's broke. Have you tried a newer version of NTP? No, I listed the affected version as packaged by Debian. Supposing there is a real issue, perhaps it has already been fixed in a more recent version. It has 2 stratum 1 and 3 stratum 2 unicast servers configured. NTP wise this machine is a client with 5 configured servers. The problem was that it was way off time with no apparent indication, which is wrong. Don't use Undisciplined Local Clock 127.127.1.0 It can run away all by itself, and there is nothing wrong with that. If that is an issue change to orphan. Provide all information necessary to duplicate / troubleshoot the issue? ntp.conf, (obfuscate as necessary); ntpdc commands issued to monitor the server. Without those, I don't think anyone can hope to guess if there really is an issue, or even begin to troubleshoot the issue if it does exist. I've experienced total failure to discipline the clock twice, each with its own cause. The first was that the sysadmin forgot to give the NTP daemon sufficient privilege, so the OS kernel silently ignored the commands to tick a little faster/slower. The second was when we had a version of Solaris that had a known kernel bug. In both cases, the symptom is that local time steadily diverged from that of the timeserver, with nary a complaint from NTP.
Joe Gwinn
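The silently-diverging failure mode described above is easy to flag from logged offsets: fit a line to (time, offset) samples, and a consistently large slope means the clock is drifting instead of being steered. A sketch of my own, exercised with synthetic data (the 10 ppm alarm threshold is an illustrative choice):

```python
def drift_rate(samples):
    """Least-squares slope of (time_s, offset_s) samples.

    The slope is in seconds per second, i.e. the apparent frequency
    error of the local clock relative to the reference.
    """
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_o = sum(o for _, o in samples) / n
    num = sum((t - mean_t) * (o - mean_o) for t, o in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return num / den

# Synthetic data: offset grows at 50 ppm, as if the kernel were
# silently ignoring the daemon's clock adjustments
samples = [(t, 50e-6 * t) for t in range(0, 3600, 60)]
rate = drift_rate(samples)
print(rate > 10e-6)  # True: flag anything worse than ~10 ppm as "not steering"
```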
Re: [ntp:questions] irig-b LEVELS
In article 6c1fe0e5a70c9640b597f869a899c017033c0...@tmpexch.non-stop.com.au, Mark C. Stephens ma...@non-stop.com.au wrote: I have a Datum 9390 and the Thin client I want to feed the IRIG signal into only has microphone level input. Looking at the signal on a scope, unterminated it's 33 V peak to peak, with about a 20 V peak-to-peak 1 kHz carrier into an unterminated load. Levels go down to about 1.2 V into a 100K load. Can anyone tell me if there is a standard level (preferably in dB) for an IRIG-B signal out of a receiver? There is a wide amplitude range, to account for attenuation and loading from multiple receivers. Find a copy of the IRIG standard. Start here: http://en.wikipedia.org/wiki/IRIG_timecode. Joe Gwinn
Re: [ntp:questions] NTP with GPS and RTC
In article T8Tet.8216$zh4.3...@newsfe13.iad, unruh un...@invalid.ca wrote: On 2013-04-27, David Taylor david-tay...@blueyonder.co.uk.invalid wrote: On 27/04/2013 09:13, David Woolley wrote: [] If the time error is low and bounded the frequency error does not necessarily have a low error bound. Simply polling fast and using a short loop time constant can give you a low error bound on the time, if the network delays are stable and the source is good. Thanks, David. I just checked a power-down reboot of a Windows PC (for replacement of a faulty card) with a PPS reference source, and the jitter was down to its normal value within 20 minutes, and the offset within 15 minutes. It was much more difficult to judge the time for the frequency to stabilise, because there is so much variation with temperature - perhaps it took 2 to 2.5 hours. Certainly not the 10 hours Bill has often stated. Another warm PC which had a reboot this morning showed a 12 minute jitter recovery time, 3 minutes for the offset to recover (about 50 microseconds typical) and you couldn't see any step in the frequency graph. Try altering the ntpd drift file (e.g. representing say 2 days of outage, not 10 seconds; or what used to happen is that Linux's boot recalibration of the clock would jump by 10s of PPM on each reboot). Subtract 20 from that value and then start it again. See www.theory.physics.ubc.ca/chrony/chrony.conf Bad link. Try http://www.theory.physics.ubc.ca/chrony/chrony.html. Joe Gwinn for some test of ntp (towards the bottom of the page). Note that I was asking for microsecond accuracy, and ntpd fixes things at a rate of about a factor of 2 in an hour and a half -- depending on the polling. If the polling is at 10 (which I agree it is not on startup), ntp uses measurements at a rate of about one every 2-3 hours. And on each measurement there is only a small adjustment in the rate. That is much longer than 10 hours to recover from a change.
ntpd, despite Mills' constant protestations that he is totally uninterested in the speed with which ntpd approaches equilibrium, has done a few things in the past 5-10 years to speed up its startup operation. And its 128 ms jumps in time (infinite drift correction) do quite a bit to hide its slow convergence rate.
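For scale, the 20 ppm drift-file perturbation suggested above accumulates time error quickly if uncorrected; the arithmetic:

```python
# Time error accumulated by an uncorrected frequency offset
ppm = 20.0                             # the perturbation suggested above
seconds_per_hour = 3600
err = ppm * 1e-6 * seconds_per_hour    # seconds of error per hour
print(round(err * 1000, 3))            # 72.0 -> 72 ms of error per hour
```

So an instrument that only halves the residual error every hour and a half, as described, takes many hours to get such an error back under the microsecond level.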
Re: [ntp:questions] help
In article slrnkjcdk1.vot.koste...@stasis.kostecke.net, Steve Kostecke koste...@ntp.org wrote: On 2013-03-04, 1900116857 1900116...@qq.com wrote: My OS is Ubuntu 12.04 amd64 NTP 4.2.6p3 is packaged for Ubuntu precise: http://packages.ubuntu.com/precise/ntp It is available for installation from the Ubuntu package repositories. You should be able to see whether or not the ntp package is installed using the following command: $ dpkg -l ntp You may want to check and see if your system already has the ntp package provided by your OS installed. I downloaded the NTP 4.2.6p5 package and installed it with the following commands: configure make make install Installation seems successful. No error is reported. But there are still some other questions. First: when I typed in services ntpd start, the OS shows unrecognized service The NTP Reference Implementation source code releases, which are linked from www.ntp.org/downloads.html and support.ntp.org/download, do not install any initialization scripts as these are OS specific. In general you are better off installing, and using, ntp from your OS package management system. , but udp port 123 is active. Before this, I tried to run 'sudo ntpd', and succeeded. 'sudo ntpd' starts the NTP daemon (assuming that it is in your search path). The netstat output you included in your original article shows that ntpd was running. But did the daemon have sufficient privilege to control the local clock? I've seen this kind of problem when the startup script didn't give the daemon the ability to steer the clock - everything looks normal, except the time diverges. Joe Gwinn
Re: [ntp:questions] multiple instances of NTP on different interfaces
In article CAD678-DQ-nMVJP5EPsb+0i699S_VrDsB2yzNkE4c=Btv=ny...@mail.gmail.com, Abu Abdullah falcon.sh...@gmail.com wrote: option to disable adjusting the system clock? I believe there is, but that instance would become a pure server. The time that ntpd serves is always that in the local system clock. I would appreciate it if you can provide it so at least I can get rid of these warnings. As someone already said, you need to explain the overall goal, not the particular step that you think might achieve it. We have a requirement for NTP service for two different networks: public (not important, can have outages), private (important). We are trying to have a separate process for each network in case high load comes from the public domain (or for any security issue). We will have more control on the public NTP where we can set the resources for it at the OS level. In addition, at any point of time we can migrate the private NTP to a dedicated machine (currently we have only one machine) once the hardware is not capable of handling both. In this case we will not have to change the NTP IPs in the clients' configurations (private). Be aware that if the hope is that the private network be immune to hacking from the public network, or immune to leakage of information from private to public, there cannot be a computer common to both networks. There are hardware solutions to this dilemma, specifically GPS receivers with built-in isolated NTP servers, each server with its own dedicated ethernet port. Joe Gwinn
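For the two-instance setup, a sketch of what the public-facing instance's ntp.conf might look like. As I read the ntpd documentation, `disable ntp` opens the clock-discipline feedback loop (the "don't adjust the clock" option mentioned above, making that instance a pure server), and the `interface` directive restricts which addresses the daemon listens on; both should be checked against your ntpd version's docs, and the addresses here are placeholders:

```
# ntp.conf sketch for the public-facing instance (addresses are placeholders)

# Serve time, but never steer the shared system clock --
# leave clock discipline to the private instance
disable ntp

# Listen only on the public side
interface ignore all
interface listen 203.0.113.5   # public address
interface listen 127.0.0.1     # keep localhost for ntpq monitoring
```

The private instance would mirror this with its private address and without `disable ntp`, so exactly one daemon disciplines the clock.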
Re: [ntp:questions] What exactly does Maximum Distance Exceded mean?
At 10:38 PM -0400 3/15/09, Danny Mayer wrote: Joseph Gwinn wrote: What's the story for IBM's AIX? It builds on AIX too. It builds on most Unix systems though maybe not on some of the oldest O/S versions. Great. I don't know why IBM doesn't have NTPv4 on their AIX boxes, as IBM offers NTPv4 on their Red Hat Linux boxes, but I can guess that NTPv4 is what Red Hat happens to provide. Joe
Re: [ntp:questions] What exactly does Maximum Distance Exceded mean?
At 11:19 PM -0400 3/15/09, Danny Mayer wrote: Joseph Gwinn wrote: The FAQ has to be the place for such explanations. I'm not sure if this qualifies as an FAQ as I don't recall that it has come up before. FAQ stands for Frequently Asked Questions. RAQ then? Rarely Asked Questions Seriously, I can't believe that I'm the only person in history to be perplexed by these status codes, and those little three-word summaries are a bit telegraphic. Joe Gwinn You aren't the only one. These questions have been asked before by a number of people. In fact I had to look at this at one point when I was getting these codes. Of course I just looked at the source code and never looked for documentation. I will tell you that this is a combination of bits so it's not just a number. Each bit represents a test code that failed so you have quite a bit to look at. I do know how the status code is structured, and wrote a Mathematica program to automate the decoding. (I use Mathematica to generate the co-plots of loopstats and peerstats data, collect statistics, et al.) What I didn't know was that the definitions of the code bits had changed between v3 and v4. I'll have to dig into the old documentation and see if this code was affected. There is little chance that I will have the time to read enough NTP source code to make sense of it, sufficient to be able to come to reliable conclusions. I'm a system engineer, and time is one part of many in the system. More generally, it's hopeless to expect the world's sysadmins to read NTP code (or any other kind of code). They just don't have the time, and are responsible for far too many different kinds of box for it to be practical. But a major part of making something reliable in practice is making it possible for a harried sysadmin to nonetheless get it right. (I'm not a sysadmin, but work with many sysadmins. They spend lots of time fighting fires, and are of necessity jacks of all trades, masters of none.) 
Silently mutating code definitions sounds like a blunder to me. NTP is used on tens to hundreds of millions of computers worldwide. There will never be a pure v4 world; in fact, there will still be v3 around when v5 is being introduced. So, if new kinds of status are needed, invent new codes to suit, but do not change the meanings of the codes that are already widely used. In other words, do not undermine your existing base.

The Internet folk had the same issue with IPv6. They concluded that IPv4 was too deeply embedded to ever eliminate, and that there was never going to be a flag day when a worldwide changeover would happen. Thus IPv4 and IPv6 had to coexist and interoperate forever, and IPv6 was designed to support this.

Joe

___
questions mailing list
questions@lists.ntp.org
https://lists.ntp.org/mailman/listinfo/questions
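[Editorial aside: the bit structure Danny describes can be illustrated with a short decoder. This is a sketch, assuming the ntpq peer-status-word layout as commonly documented for NTPv4 (5 peer status bits, a 3-bit selection code, a 4-bit event counter, and a 4-bit event code); the field and flag names below are illustrative assumptions, not taken from the thread.]

```python
# Sketch of an ntpq-style peer status word decoder.
# Assumed field layout (NTPv4 ntpq conventions as I read them):
#   bits 15-11  peer status flags
#   bits 10-8   selection (tally) code
#   bits  7-4   event counter
#   bits  3-0   most recent event code

STATUS_FLAGS = {          # flag names are assumptions for illustration
    0x10: "configured",
    0x08: "auth_enabled",
    0x04: "authentic",
    0x02: "reachable",
    0x01: "broadcast",
}

def decode_peer_status(word: int) -> dict:
    """Split a 16-bit peer status word into its component fields."""
    status = (word >> 11) & 0x1F
    return {
        "status_bits": status,
        "status_flags": [name for bit, name in STATUS_FLAGS.items()
                         if status & bit],
        "select": (word >> 8) & 0x07,
        "event_count": (word >> 4) & 0x0F,
        "event_code": word & 0x0F,
    }

if __name__ == "__main__":
    # The status word discussed later in this thread:
    print(decode_peer_status(0x9514))
```

Under this layout, 0x9514 splits into status flags configured+reachable, selection code 5, event counter 1, and event code 4; what those numbers *mean* is exactly the version-dependent question the thread is about.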
Re: [ntp:questions] What exactly does Maximum Distance Exceeded mean?
Status code values fixed.

At 10:47 PM -0400 3/15/09, Danny Mayer wrote:
> Joseph Gwinn wrote:
>> Hmm. OK, but I think that we've kind of run off the rails. Let me
>> summarize:
>>
>> 1. Sun Microsystems' current behavior is not the issue, as I'm loading
>> old software from an old CD onto old computer hardware, hardware that
>> cannot support a newer version of Solaris than v9. One of these old
>> Solaris boxes did work with NTPv3 running an even older version of
>> Solaris, with no 9514 codes, deepening the mystery.
>
> The trouble here is that those codes are *very* likely to have changed
> between V3 and V4, since there was a large rewrite between the two.
> That's why looking at the source code is necessary to get you the help
> you need.

As discussed in my other reply, mutating codes is a blunder. It's a good-news, bad-news thing. The good news is that NTP has succeeded on an unimagined scale. The bad news is that because of that scale, one must be *very* respectful of NTP's existing base, and it can be constraining.

> The fact that this obsolete system can most likely support NTPv4 is
> worth investigating, though.
>
>> 2. I think that what's happening is that I'm doing something dumb, and
>> I bet that there is no real difference in how NTPv3 or NTPv4 would
>> react to this faux pas, whatever it turns out to be. Nor is source code
>> research needed or requested.
>>
>> 3. The original question was how to interpret a specific status code,
>> 9514. I read the explanation in the documentation, but became no wiser
>> for it. Thus my question.
>
> Which is why you need to look at the source code. Documentation isn't
> always clear or definitive, but the source code will tell you.

It simply cannot be required to read source code to get the definitions of status codes, even if the documentation has to give one definition per NTP version. NTP is used on hundreds of millions of computers. Are we expecting that every time someone gets an unexpected code they either have to read the source code, or pay someone to read it for them?
I'm sorry, but that cannot work. If there isn't an NTP FAQ entry on this, there probably should be.

>> Our sysadmins were flummoxed by the cloud of 9514 codes, and they are
>> far too busy to undertake a research project. (The deeper problem is
>> that some managers believe that NTP is plug and play, which isn't
>> quite true.)
>
> Mostly it is, but there are always mysteries like this.

Yes.

Joe
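[Editorial aside: the thread's complaint that code meanings mutated between versions can be made concrete for the 9514 word. A minimal sketch, assuming the selection field occupies bits 8-10 of the peer status word; the two label tables below are a paraphrase of v3-era and v4-era ntpq decode documentation as I recall it, and may not match any particular release exactly, so treat them as illustrative only.]

```python
# Illustrative only: selection-field labels as roughly given in old
# (v3-era) and newer (v4-era) ntpq decode documentation. Verify against
# the documentation shipped with your own NTP version.

SELECT_V3 = {   # v3-era glosses (paraphrased, assumed)
    0: "rejected",
    1: "passed sanity checks",
    2: "passed correctness checks",
    3: "passed candidate checks",
    4: "passed outlyer checks",
    5: "current synchronization source; max distance exceeded",
    6: "current synchronization source; max distance ok",
}

SELECT_V4 = {   # v4-era tally labels (assumed)
    0: "reject",
    1: "falsetick",
    2: "excess",
    3: "outlier",
    4: "candidate",
    5: "backup",
    6: "sys.peer",
    7: "pps.peer",
}

def select_field(word: int) -> int:
    """Extract the 3-bit selection field from a 16-bit peer status word."""
    return (word >> 8) & 0x07

sel = select_field(0x9514)   # the status word asked about in the thread
print(sel, "| v3-era:", SELECT_V3[sel], "| v4-era:", SELECT_V4[sel])
```

The selection field of 0x9514 is 5 either way; only the documented meaning differs between eras, which is precisely why decoding a status word without knowing the daemon's version can mislead.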