[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-16 Thread Bob kb8tq
Hi

On most “normal” home connections to the internet, your trip 
time “up” to something will be much longer than your trip time
“back” from the same destination. Part of this is simply a reflection
of download speed being much better than upload speed. However
as noted in other posts, there is a *lot* more to it. 

Bob
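The asymmetry Bob describes maps directly onto the NTP on-wire math: the client can only measure round-trip delay and splits it evenly, so any up/down difference biases the offset estimate by half the difference. A minimal sketch (illustrative numbers, not measurements from this thread):

```python
# NTP on-wire calculation: client sends at t1, server receives at t2,
# server replies at t3, client receives at t4 (all in seconds).
def ntp_offset_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # assumes symmetric paths
    delay = (t4 - t1) - (t3 - t2)            # round trip minus server turnaround
    return offset, delay

# Perfectly synchronized clocks, but an asymmetric home connection:
# 40 ms "up" (slow upload) and 10 ms "back" (fast download).
up, down = 0.040, 0.010
t1 = 100.0
t2 = t1 + up          # server clock equals client clock in this example
t3 = t2 + 0.000001    # 1 us server turnaround
t4 = t3 + down

offset, delay = ntp_offset_delay(t1, t2, t3, t4)
# The estimated offset is biased by (up - down)/2 = +15 ms even though
# the two clocks actually agree.
print(round(offset * 1000, 3), round(delay * 1000, 3))
```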

> On Dec 15, 2021, at 11:43 AM, Adam Space  wrote:
> 
> Good idea. Doing so reveals the expected outcome from the wedge plot:
> variable forward path delay, shifted in the positive direction, and a
> pretty stable negative path delay. Is this the norm for consumer grade
> connection? It seems to be for me.
> 
> On Wed, Dec 15, 2021 at 10:53 AM Magnus Danielson via time-nuts <
> time-nuts@lists.febo.com> wrote:
> 
>> Hi,
>> 
>> Expect network routes to be more dispersed these days, as it is needed.
>> 
>> While the wedge plot is a classic for NTP, it may be interesting to plot
>> forward and backward path histograms independently.
>> 
>> Cheers,
>> Magnus
>> 
>> On 2021-12-15 16:25, Adam Space wrote:
>>> Yeah I think it is localized. Network paths have been quite variable for
>>> me. Every once in a while I start getting massive delays from the NIST
>>> servers to my system, resulting in results like yours.
>>> 
>>> Interestingly though, time-e-g was one of the only servers that didn't
>> have
>>> this problem for me. This is a recent wedge plot for it. seems to be
>>> working fine for me now, just with a variable outgoing delay causing
>>> positive offsets, which seems to be more of a problem with my connection
>>> than anything else.
>>> 
>>> On Tue, Dec 14, 2021 at 9:04 PM K5ROE Mike  wrote:
>>> 
 On 12/14/21 5:23 PM, Hal Murray wrote:
>> Out of curiosity, since you monitor NIST Gaithersburg, if you were to average
>> over the offsets for a whole month, what kind of value would you get? Surely
>> it is close to zero but I am curious how close. Within 1ms?
> It depends.  Mostly on the routing between you and NIST.  If you are closer,
> the routing is more likely to be symmetric.
> 
> From my experience, routing is generally stable on the scale of months.  There
> are short (hours) changes when a fiber gets cut or a router gets busted.
> There are long term changes as people add fibers and/or change business deals.
> 
> There are some cases where a stable routing will produce 2 answers: x% of the
> packets will be slightly faster/slower than most of them.  I think what's
> going on is that the routers are doing load sharing on multiple paths, hashing
> on the address+port.  Or something like that.  So it's a roll of the dice
> which path you get.
> 
> 
> 
> I'm in California.
> 
> NIST has NTP servers at 3 locations in the Boulder CO area: NIST, WWV, and
> Univ of Colorado.  (Google maps says WWV is 60 miles north of Boulder.  Univ
> of Colorado is a few miles from NIST.)
> 
> From a cheap cloud server (Digital Ocean) in San Francisco, the RTT to NIST
> is 31.5 ms, to WWV is 32.1 ms, to Univ of Colorado is 54.5 ms.  The time
> offsets are about 1 ms for NIST and WWV and 12 ms for Univ of Colorado.
> 
> From my home (AT&T via Sonic), 30 miles south of San Francisco, the RTTs are
> 61 ms for NIST and WWV and 81-82 for Univ of Colorado.  Offsets are 6-7 ms
> for NIST and WWV and 4-5 ms in the other direction for Univ of Colorado.

Might be a localized routing phenomenon.  Using my Verizon connection from
Northern Virginia the results are awful for time-e-g.nist.gov:
 
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
-192.168.1.219   68.69.221.61     2 u   56   64  377    0.400   -0.290   0.035
*192.168.1.224   .PPS.            1 u    1   16  377    0.184    0.087   0.017
-129.6.15.26     .NIST.           1 u   32   64  377   93.087  -37.940   7.867

However from my AWS machine in Oregon:

MS Name/IP address  Stratum Poll Reach LastRx Last sample
===============================================================================
^- 152.63.13.177         3    6   377    63  -2011us[-2011us] +/-  128ms
^+ 209.182.153.6         2    7   377    65   -959us[ -959us] +/-   86ms
^- 64.139.66.105         3    6   377   128  -5838us[-5838us] +/-  134ms
^+ 129.6.15.26           1    6   377    64  -2075us[-2075us] +/-   37ms
^* 173.66.105.50         1    8   377   438   -448us[ -870us] +/-   38ms
 
 
 -mike
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an email to time-nuts-le...@lists.febo.com
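Hal's load-sharing observation above (stable routing producing two delay populations, selected by a hash of address+port) can be sketched as follows; the hash function and delay values are hypothetical, since real routers hash vendor-specific header fields:

```python
# Toy ECMP-style load sharing: the router hashes the flow tuple and picks
# one of several parallel paths. A given (address, port) pair always
# hashes to the same path, so each flow sees a stable delay, but different
# flows can see consistently different delays.
import hashlib

PATH_DELAYS_MS = [31.5, 32.1]  # two parallel paths, slightly different

def pick_path(src_ip, src_port, dst_ip, dst_port, proto="udp"):
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return h % len(PATH_DELAYS_MS)

# Same flow -> same path every time (stable routing for one NTP client).
flow = ("203.0.113.5", 123, "129.6.15.26", 123)
assert all(pick_path(*flow) == pick_path(*flow) for _ in range(100))

# Across many source ports, traffic splits roughly evenly between paths.
counts = [0, 0]
for port in range(1024, 2024):
    counts[pick_path("203.0.113.5", port, "129.6.15.26", 123)] += 1
print(counts)  # roughly even split between the two paths
```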

[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-16 Thread Magnus Danielson via time-nuts
In modern IP/MPLS routing, the forward and backward paths are individually
routed, and become re-routed to spread traffic load or to overcome loss of
links. Analyzing the two directions separately makes much more sense. You can
draw the same conclusion from the wedge plot of TE vs RTT, but it may be less
clear, so shift the plot form to match the problem.
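Shifting from the wedge plot to per-direction histograms can be sketched like this; the timestamps and delay distributions are synthetic, and true one-way delays require an independently disciplined clock on both ends:

```python
# Sketch: split NTP exchanges into forward (t2 - t1) and backward
# (t4 - t3) one-way delays and histogram them separately. This only
# yields true one-way delays if both clocks are independently
# disciplined (e.g. GNSS on both ends); otherwise the clock offset
# contaminates both directions.
import random
from collections import Counter

random.seed(1)

def fake_exchange():
    # Hypothetical consumer link: noisy long "up", stable short "back".
    fwd = 0.030 + random.expovariate(1 / 0.005)   # queuing tail upstream
    back = 0.010 + random.gauss(0, 0.0002)
    t1 = 0.0
    t2 = t3 = t1 + fwd
    t4 = t3 + back
    return t1, t2, t3, t4

def one_way_histograms(samples, bin_ms=2):
    fwd, back = Counter(), Counter()
    for t1, t2, t3, t4 in samples:
        fwd[int((t2 - t1) * 1000 / bin_ms) * bin_ms] += 1
        back[int((t4 - t3) * 1000 / bin_ms) * bin_ms] += 1
    return fwd, back

fwd, back = one_way_histograms([fake_exchange() for _ in range(1000)])
print(sorted(fwd.items())[:5])   # dispersed forward bins, in ms
print(sorted(back.items())[:5])  # tight backward bins near 10 ms
```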


At ITSF 2021, GMV gave a presentation where they ran Chrony-based NTP
between two RPis, each with good GNSS timing, across two FTTH accesses.
They showed significant noise and asymmetry. In parallel they were using
different equipment that was able to chew its way into the noise and see
other variations.


Cheers,
Magnus


[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-16 Thread Adam Space
Good idea. Doing so reveals the expected outcome from the wedge plot:
variable forward path delay, shifted in the positive direction, and a
pretty stable negative path delay. Is this the norm for consumer grade
connection? It seems to be for me.

On Wed, Dec 15, 2021 at 10:53 AM Magnus Danielson via time-nuts <
time-nuts@lists.febo.com> wrote:

> Hi,
>
> Expect network routes to be more dispersed these days, as it is needed.
>
> While the wedge plot is a classic for NTP, it may be interesting to plot
> forward and backward path histograms independently.
>
> Cheers,
> Magnus
>
> On 2021-12-15 16:25, Adam Space wrote:
> > Yeah I think it is localized. Network paths have been quite variable for
> > me. Every once in a while I start getting massive delays from the NIST
> > servers to my system, resulting in results like yours.
> >
> > Interestingly though, time-e-g was one of the only servers that didn't
> have
> > this problem for me. This is a recent wedge plot for it. seems to be
> > working fine for me now, just with a variable outgoing delay causing
> > positive offsets, which seems to be more of a problem with my connection
> > than anything else.
> >
> > On Tue, Dec 14, 2021 at 9:04 PM K5ROE Mike  wrote:
> >
> >> On 12/14/21 5:23 PM, Hal Murray wrote:
> >>>> Out of curiosity, since you monitor NIST Gaithersburg, if you were to average
> >>>> over the offsets for a whole month, what kind of value would you get? Surely
> >>>> it is close to zero but I am curious how close. Within 1ms?
> >>> It depends.  Mostly on the routing between you and NIST.  If you are closer,
> >>> the routing is more likely to be symmetric.
> >>>
> >>> From my experience, routing is generally stable on the scale of months.  There
> >>> are short (hours) changes when a fiber gets cut or a router gets busted.
> >>> There are long term changes as people add fibers and/or change business deals.
> >>>
> >>> There are some cases where a stable routing will produce 2 answers: x% of the
> >>> packets will be slightly faster/slower than most of them.  I think what's
> >>> going on is that the routers are doing load sharing on multiple paths, hashing
> >>> on the address+port.  Or something like that.  So it's a roll of the dice
> >>> which path you get.
> >>>
> >>> 
> >>>
> >>> I'm in California.
> >>>
> >>> NIST has NTP servers at 3 locations in the Boulder CO area: NIST, WWV, and
> >>> Univ of Colorado.  (Google maps says WWV is 60 miles north of Boulder.  Univ
> >>> of Colorado is a few miles from NIST.)
> >>>
> >>> From a cheap cloud server (Digital Ocean) in San Francisco, the RTT to NIST
> >>> is 31.5 ms, to WWV is 32.1 ms, to Univ of Colorado is 54.5 ms.  The time
> >>> offsets are about 1 ms for NIST and WWV and 12 ms for Univ of Colorado.
> >>>
> >>> From my home (AT&T via Sonic), 30 miles south of San Francisco, the RTTs are
> >>> 61 ms for NIST and WWV and 81-82 for Univ of Colorado.  Offsets are 6-7 ms
> >>> for NIST and WWV and 4-5 ms in the other direction for Univ of Colorado.
> >>>
> >>>
> >> Might be a localized routing phenomenon.  Using my Verizon connection from
> >> Northern Virginia the results are awful for time-e-g.nist.gov:
> >>
> >>      remote           refid      st t when poll reach   delay   offset  jitter
> >> ==============================================================================
> >> -192.168.1.219   68.69.221.61     2 u   56   64  377    0.400   -0.290   0.035
> >> *192.168.1.224   .PPS.            1 u    1   16  377    0.184    0.087   0.017
> >> -129.6.15.26     .NIST.           1 u   32   64  377   93.087  -37.940   7.867
> >>
> >> However from my AWS machine in Oregon:
> >>
> >> MS Name/IP address  Stratum Poll Reach LastRx Last sample
> >> ===============================================================================
> >> ^- 152.63.13.177         3    6   377    63  -2011us[-2011us] +/-  128ms
> >> ^+ 209.182.153.6         2    7   377    65   -959us[ -959us] +/-   86ms
> >> ^- 64.139.66.105         3    6   377   128  -5838us[-5838us] +/-  134ms
> >> ^+ 129.6.15.26           1    6   377    64  -2075us[-2075us] +/-   37ms
> >> ^* 173.66.105.50         1    8   377   438   -448us[ -870us] +/-   38ms
> >>
> >>
> >> -mike

[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-15 Thread Bob kb8tq
Hi

One really big thing that has changed is the number of folks doing this
sort of thing via a DSL or cable modem with > 20 ms of asymmetry.
You will not find many NTP papers studying that sort of network connection.

Bob

> On Dec 15, 2021, at 11:30 AM, Lux, Jim  wrote:
> 
> On 12/15/21 7:53 AM, Magnus Danielson via time-nuts wrote:
>> Hi,
>> 
>> Expect network routes to be more dispersed these days, as it is needed.
>> 
>> While the wedge plot is a classic for NTP, it may be interesting to plot 
>> forward and backward path histograms independently.
>> 
>> Cheers,
>> Magnus 
> 
> 
> I assume someone, somewhere has run some recent tests and maybe published 
> them. All those plots and behaviors from the early days of NTP might have 
> significantly changed, due to the plethora of new kinds of network routes.  
> Two things strike me as being "very different" from, say, 10-20 years ago - 
> 20 years ago, most routers were "store and forward" - the entire packet would 
> be received, and then decoded, and sent onward.  These days, many routers 
> start sending the packet to the destination before the entire packet has been 
> received.  To do S would take too much memory with multi Gbps speeds and 
> long packets.  I recall being at a conference at least 10 years ago where 
> they were talking about the sophistication required in 10G routers - cut 
> through routing, adaptive equalization, etc.
> 
> The other thing that has changed is a modern diversity of kinds of networks. 
> 20 years ago, it was basically wired connections of some kind with 
> concentrators/deconcentrators/switches/routers - all of which have moderately 
> well defined latency and statistics.
> 
> Now, though, there's a lot of over the air (cell phones, WISP, 5,6,7G 
> nanocells injected surreptitiously - at least my neighbor claims that's what 
> they're doing).  The latency on a WiFi connection, in a busy environment - 
> It's 8PM, and all the neighbors are streaming "The Wheel of Time" 
> (appropriately, for time-nuts) - varies wildly over a short time. (I will say 
> that WiFi latency improves dramatically during a power failure in a 
> residential neighborhood when you have backup power, and your neighbors do 
> not)
> 
> Imagine NTP running over Starlink, especially when there are multi hop 
> crosslinks between satellites.  At 7 km/s orbital velocity, the range is 
> changing as much as 21 microseconds/second to a "stationary" observer.  Now 
> consider two satellites in different orbital planes. The dynamics of the 
> latency get quite complex.
> 


[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-15 Thread Lux, Jim

On 12/15/21 7:53 AM, Magnus Danielson via time-nuts wrote:
> Hi,
> 
> Expect network routes to be more dispersed these days, as it is needed.
> 
> While the wedge plot is a classic for NTP, it may be interesting to
> plot forward and backward path histograms independently.
> 
> Cheers,
> Magnus



I assume someone, somewhere has run some recent tests and maybe 
published them. All those plots and behaviors from the early days of NTP 
might have significantly changed, due to the plethora of new kinds of 
network routes.  Two things strike me as being "very different" from, 
say, 10-20 years ago - 20 years ago, most routers were "store and 
forward" - the entire packet would be received, and then decoded, and 
sent onward.  These days, many routers start sending the packet to the 
destination before the entire packet has been received.  To do S&F would 
take too much memory with multi Gbps speeds and long packets.  I recall 
being at a conference at least 10 years ago where they were talking 
about the sophistication required in 10G routers - cut through routing, 
adaptive equalization, etc.


The other thing that has changed is a modern diversity of kinds of 
networks. 20 years ago, it was basically wired connections of some kind 
with concentrators/deconcentrators/switches/routers - all of which have 
moderately well defined latency and statistics.


Now, though, there's a lot of over the air (cell phones, WISP, 5,6,7G 
nanocells injected surreptitiously - at least my neighbor claims that's 
what they're doing).  The latency on a WiFi connection, in a busy 
environment - It's 8PM, and all the neighbors are streaming "The Wheel 
of Time" (appropriately, for time-nuts) - varies wildly over a short 
time. (I will say that WiFi latency improves dramatically during a power 
failure in a residential neighborhood when you have backup power, and 
your neighbors do not)


Imagine NTP running over Starlink, especially when there are multi hop 
crosslinks between satellites.  At 7 km/s orbital velocity, the range is 
changing as much as 21 microseconds/second to a "stationary" observer.  
Now consider two satellites in different orbital planes. The dynamics of 
the latency get quite complex.
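As a sanity check on the Starlink numbers: the delay-rate scale comes straight from v/c; the quoted ~21 microseconds/second corresponds to the velocity component along the line of sight, while the full orbital velocity gives the upper bound:

```python
# Order-of-magnitude check: how fast does one-way delay change when the
# range to a LEO satellite changes at roughly orbital velocity?
C = 299_792_458.0      # speed of light, m/s
v_orbital = 7_000.0    # m/s, typical LEO orbital speed

delay_rate = v_orbital / C  # seconds of delay change per second of flight
print(f"{delay_rate * 1e6:.1f} us/s")  # about 23 us/s at full orbital speed
```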




[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-15 Thread Magnus Danielson via time-nuts

Hi,

Expect network routes to be more dispersed these days, as it is needed.

While the wedge plot is a classic for NTP, it may be interesting to plot 
forward and backward path histograms independently.


Cheers,
Magnus



[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-15 Thread Adam Space
Yeah I think it is localized. Network paths have been quite variable for
me. Every once in a while I start getting massive delays from the NIST
servers to my system, resulting in results like yours.

Interestingly though, time-e-g was one of the only servers that didn't have
this problem for me. This is a recent wedge plot for it. seems to be
working fine for me now, just with a variable outgoing delay causing
positive offsets, which seems to be more of a problem with my connection
than anything else.

On Tue, Dec 14, 2021 at 9:04 PM K5ROE Mike  wrote:

> On 12/14/21 5:23 PM, Hal Murray wrote:
> >
> >> Out of curiosity, since you monitor NIST Gaithersburg, if you were to average
> >> over the offsets for a whole month, what kind of value would you get? Surely
> >> it is close to zero but I am curious how close. Within 1ms?
> >
> > It depends.  Mostly on the routing between you and NIST.  If you are closer,
> > the routing is more likely to be symmetric.
> >
> > From my experience, routing is generally stable on the scale of months.  There
> > are short (hours) changes when a fiber gets cut or a router gets busted.
> > There are long term changes as people add fibers and/or change business deals.
> >
> > There are some cases where a stable routing will produce 2 answers: x% of the
> > packets will be slightly faster/slower than most of them.  I think what's
> > going on is that the routers are doing load sharing on multiple paths, hashing
> > on the address+port.  Or something like that.  So it's a roll of the dice
> > which path you get.
> >
> > 
> >
> > I'm in California.
> >
> > NIST has NTP servers at 3 locations in the Boulder CO area: NIST, WWV, and
> > Univ of Colorado.  (Google maps says WWV is 60 miles north of Boulder.  Univ
> > of Colorado is a few miles from NIST.)
> >
> > From a cheap cloud server (Digital Ocean) in San Francisco, the RTT to NIST
> > is 31.5 ms, to WWV is 32.1 ms, to Univ of Colorado is 54.5 ms.  The time
> > offsets are about 1 ms for NIST and WWV and 12 ms for Univ of Colorado.
> >
> > From my home (AT&T via Sonic), 30 miles south of San Francisco, the RTTs are
> > 61 ms for NIST and WWV and 81-82 for Univ of Colorado.  Offsets are 6-7 ms
> > for NIST and WWV and 4-5 ms in the other direction for Univ of Colorado.
> >
> >
>
> Might be a localized routing phenomenon.  Using my Verizon connection from
> Northern Virginia the results are awful for time-e-g.nist.gov:
>
>      remote           refid      st t when poll reach   delay   offset  jitter
> ==============================================================================
> -192.168.1.219   68.69.221.61     2 u   56   64  377    0.400   -0.290   0.035
> *192.168.1.224   .PPS.            1 u    1   16  377    0.184    0.087   0.017
> -129.6.15.26     .NIST.           1 u   32   64  377   93.087  -37.940   7.867
>
> However from my AWS machine in Oregon:
>
> MS Name/IP address  Stratum Poll Reach LastRx Last sample
> ===============================================================================
> ^- 152.63.13.177         3    6   377    63  -2011us[-2011us] +/-  128ms
> ^+ 209.182.153.6         2    7   377    65   -959us[ -959us] +/-   86ms
> ^- 64.139.66.105         3    6   377   128  -5838us[-5838us] +/-  134ms
> ^+ 129.6.15.26           1    6   377    64  -2075us[-2075us] +/-   37ms
> ^* 173.66.105.50         1    8   377   438   -448us[ -870us] +/-   38ms
>
>
> -mike

[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-15 Thread Avamander
> The hardware ones are done by the network PHY and need 1588 support in
> the driver, which is not common.

PTP HW timestamping is thankfully a bit more common than NTP timestamping. From
that you can deduce that it's unlikely you'll get HW timestamping in an NTP
context.

However, newer Chrony has support for encapsulating NTP in PTP to utilize the
HW timestamping support for PTP. Thought I'd mention.
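For reference, the Chrony feature mentioned here (NTP over PTP, added around chrony 4.2) is enabled roughly as below. This is a sketch from memory: directive names should be checked against chrony.conf(5), and the server name is a placeholder.

```
# /etc/chrony.conf sketch (chrony >= 4.2, experimental NTP-over-PTP):
ptpport 319                      # carry NTP messages inside PTP packets
server ntp.example.net port 319  # hypothetical server that also enables ptpport
hwtimestamp *                    # NIC hardware timestamping where supported
```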

On Wed, Dec 15, 2021 at 4:03 AM Trent Piepho  wrote:

> On Tue, Dec 14, 2021 at 2:21 PM Hal Murray  wrote:
> >
> > > I've seen cards (ethtool) that support several time options - what
> are  they
> > > and how do I use them?
> >
> > I'm not sure which options you are referring to.
>
> Probably the flags from ethtool -T output:
>
> Time stamping parameters for eth0:
> Capabilities:
> hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE)
> software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
> hardware-receive  (SOF_TIMESTAMPING_RX_HARDWARE)
> software-receive  (SOF_TIMESTAMPING_RX_SOFTWARE)
> software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
> hardware-raw-clock(SOF_TIMESTAMPING_RAW_HARDWARE)
>
> The software ones are normally always present and are what you describe
> with the kernel's timestamping.  The hardware ones are done by the network
> PHY and need 1588 support in the driver, which is not common.  I don't
> think any of the RPis support it.
>
> >
> > You can't have boxes on the internet update packets if you are
> interested in
> > security.
>
> That seems too restrictive.  Consider that TLS doesn't include the
> TCP/IP header, which can be modified by IP fragmentation, and that is
> still considered secure.
>
> I think one could design a protocol such that each appended timestamp
> is signed and includes a digest of all timestamps before it, so that
> while one does not trust every timestamp in the chain, it can be
> trusted that each timestamp was generated by the entity that said it
> generated it, and that any timestamps generated by a trusted entity
> were not later modified by an untrusted one.


[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-15 Thread Hal Murray
> results are awful for time-e-g.nist.gov

That box is busted/sick.  The IPv6 address is horrible.

If you want to discuss reasonably-normal operations, pick another system.

--

Nothing is ever totally useless.
It can always serve as a bad example.


-- 
These are my opinions.  I hate spam.




[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread Trent Piepho
On Tue, Dec 14, 2021 at 2:21 PM Hal Murray  wrote:
>
> > I've seen cards (ethtool) that support several time options - what are  they
> > and how do I use them?
>
> I'm not sure which options you are referring to.

Probably the flags from ethtool -T output:

Time stamping parameters for eth0:
Capabilities:
hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE)
software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
hardware-receive  (SOF_TIMESTAMPING_RX_HARDWARE)
software-receive  (SOF_TIMESTAMPING_RX_SOFTWARE)
software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE)

The software ones are normally always present and are what you describe
with the kernel's timestamping.  The hardware ones are done by the network
PHY and need 1588 support in the driver, which is not common.  I don't
think any of the RPis support it.
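As a rough illustration (my own sketch, not any poster's tooling), the capability flags in that `ethtool -T` listing can be collected mechanically; the `parse_timestamping_caps` helper name is made up for this example:

```python
def parse_timestamping_caps(ethtool_output):
    """Collect the SOF_TIMESTAMPING_* capability flags from `ethtool -T` text."""
    caps = []
    for line in ethtool_output.splitlines():
        if "SOF_TIMESTAMPING_" in line:
            # the flag name sits inside parentheses, e.g. (SOF_TIMESTAMPING_TX_HARDWARE)
            caps.append(line.split("(")[1].split(")")[0].strip())
    return caps

sample = """Time stamping parameters for eth0:
Capabilities:
hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE)
software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)
hardware-receive  (SOF_TIMESTAMPING_RX_HARDWARE)
software-receive  (SOF_TIMESTAMPING_RX_SOFTWARE)
software-system-clock (SOF_TIMESTAMPING_SOFTWARE)
hardware-raw-clock (SOF_TIMESTAMPING_RAW_HARDWARE)"""

caps = parse_timestamping_caps(sample)
# PHY hardware timestamping is usable only if the HARDWARE flags show up
has_hw_rx = "SOF_TIMESTAMPING_RX_HARDWARE" in caps
```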

>
> You can't have boxes on the internet update packets if you are interested in
> security.

That seems too restrictive.  Consider that TLS doesn't include the
TCP/IP header, which can be modified by IP fragmentation, and that is
still considered secure.

I think one could design a protocol such that each appended timestamp
is signed and includes a digest of all timestamps before it, so that
while one does not trust every timestamp in the chain, one can trust
that each timestamp was generated by the entity that claims to have
generated it, and that any timestamps generated by a trusted entity
were not later modified by an untrusted one.
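That idea can be sketched concretely. This is only an illustration of the chaining, not an existing protocol: the helper names are mine, and HMAC with a per-hop key stands in for real public-key signatures.

```python
import hashlib
import hmac

def chain_digest(chain):
    """Order-sensitive digest over all entries appended so far."""
    h = hashlib.sha256()
    for ts, prev, sig in chain:
        h.update(repr((ts, prev, sig)).encode())
    return h.hexdigest()

def append_timestamp(chain, key, ts):
    """Each hop signs its timestamp together with a digest of everything before it."""
    prev = chain_digest(chain)
    sig = hmac.new(key, f"{ts}|{prev}".encode(), hashlib.sha256).hexdigest()
    chain.append((ts, prev, sig))

def verify_entry(chain, index, key):
    """An entry verifies only if its signature matches AND the history it
    signed over is the history actually present before it."""
    ts, prev, sig = chain[index]
    expected_prev = chain_digest(chain[:index])
    expected_sig = hmac.new(key, f"{ts}|{prev}".encode(), hashlib.sha256).hexdigest()
    return prev == expected_prev and hmac.compare_digest(sig, expected_sig)
```

The point of the embedded digest: if an untrusted hop later alters an earlier entry, every trusted entry appended after it stops verifying, because the digest it signed no longer matches the visible history.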


[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread K5ROE Mike

On 12/14/21 5:23 PM, Hal Murray wrote:



Out of curiosity, since you monitor NIST Gaithersburg, if you were to average
over the offsets for a whole month, what kind of value would you get? Surely
it is close to zero but I am curious how close. Within 1ms?


It depends.  Mostly on the routing between you and NIST.  If you are closer,
the routing is more likely to be symmetric.

From my experience, routing is generally stable on the scale of months.  There
are short (hours) changes when a fiber gets cut or a router gets busted.
There are long term changes as people add fibers and/or change business deals.

There are some cases where a stable routing will produce 2 answers: x% of the
packets will be slightly faster/slower than most of them.  I think what's
going on is that the routers are doing load sharing on multiple paths, hashing
on the address+port.  Or something like that.  So it's a roll of the dice
which path you get.



I'm in California.

NIST has NTP servers at 3 locations in the Boulder CO area: NIST, WWV, and
Univ of Colorado.  (Google maps says WWV is 60 miles north of Boulder.  Univ of
Colorado is a few miles from NIST.)

From a cheap cloud server (Digital Ocean) in San Francisco, the RTT to NIST is
31.5 ms, to WWV is 32.1 ms, to Univ of Colorado is 54.5 ms.  The time offsets
are about 1 ms for NIST and WWV and 12 ms for Univ of Colorado.

From my home (AT&T via Sonic), 30 miles south of San Francisco, the RTTs are
61 ms for NIST and WWV and 81-82 for Univ of Colorado.  Offsets are 6-7 ms for
NIST and WWV and 4-5 ms in the other direction for Univ of Colorado.




Might be a localized routing phenomenon.  Using my verizon connection from 
Northern Virginia the results are awful for time-e-g.nist.gov:


 remote   refid  st t when poll reach   delay   offset  jitter
==============================================================================
-192.168.1.219   68.69.221.61 2 u   56   64  3770.400   -0.290   0.035
*192.168.1.224   .PPS.1 u1   16  3770.1840.087   0.017
-129.6.15.26 .NIST.   1 u   32   64  377   93.087  -37.940   7.867

However from my AWS machine in Oregon:

MS Name/IP address Stratum Poll Reach LastRx Last sample
===
^- 152.63.13.177 3   6   37763  -2011us[-2011us] +/-  128ms
^+ 209.182.153.6 2   7   37765   -959us[ -959us] +/-   86ms
^- 64.139.66.105 3   6   377   128  -5838us[-5838us] +/-  134ms
^+ 129.6.15.26   1   6   37764  -2075us[-2075us] +/-   37ms
^* 173.66.105.50 1   8   377   438   -448us[ -870us] +/-   38ms


-mike


[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread Hal Murray


> Out of curiosity, since you monitor NIST Gaithersburg, if you were to average
> over the offsets for a whole month, what kind of value would you get? Surely
> it is close to zero but I am curious how close. Within 1ms? 

It depends.  Mostly on the routing between you and NIST.  If you are closer, 
the routing is more likely to be symmetric.

From my experience, routing is generally stable on the scale of months.  There 
are short (hours) changes when a fiber gets cut or a router gets busted.  
There are long term changes as people add fibers and/or change business deals.

There are some cases where a stable routing will produce 2 answers: x% of the 
packets will be slightly faster/slower than most of them.  I think what's 
going on is that the routers are doing load sharing on multiple paths, hashing 
on the address+port.  Or something like that.  So it's a roll of the dice 
which path you get.
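The load sharing Hal describes is usually per-flow ECMP: routers hash header fields so one flow sticks to one path, while a different source port may land on a different (faster or slower) path. A toy model of that, with made-up addresses and an illustrative path count:

```python
import hashlib

def ecmp_path(src, dst, sport, dport, n_paths):
    """Pick one of n equal-cost paths by hashing a flow key built from
    addresses and ports.  The same flow always hashes to the same path."""
    key = f"{src}:{sport}->{dst}:{dport}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big") % n_paths

# the same flow is stable across calls, so "the dice are rolled" only once
p1 = ecmp_path("198.51.100.7", "129.6.15.26", 40001, 123, 4)
p2 = ecmp_path("198.51.100.7", "129.6.15.26", 40001, 123, 4)
```

Real routers use vendor-specific hash functions over the 5-tuple rather than SHA-256, but the consequence is the same: which of the parallel paths your NTP packets take is fixed by the hash, not by path quality.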



I'm in California.

NIST has NTP servers at 3 locations in the Boulder CO area: NIST, WWV, and 
Univ of Colorado.  (Google maps says WWV is 60 miles north of Boulder.  Univ of 
Colorado is a few miles from NIST.)

From a cheap cloud server (Digital Ocean) in San Francisco, the RTT to NIST is 
31.5 ms, to WWV is 32.1 ms, to Univ of Colorado is 54.5 ms.  The time offsets 
are about 1 ms for NIST and WWV and 12 ms for Univ of Colorado.

From my home (AT&T via Sonic), 30 miles south of San Francisco, the RTTs are 
61 ms for NIST and WWV and 81-82 for Univ of Colorado.  Offsets are 6-7 ms for 
NIST and WWV and 4-5 ms in the other direction for Univ of Colorado.


-- 
These are my opinions.  I hate spam.




[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread Hal Murray
> Is it ALWAYS there?

No.  You have to ask for it with setsockopt.

> Also cmsg doesn't make it clear what these auxiliary headers might  actually
> be, so that doesn't really leave me able to use this?

The type of the cmsg block will be the same as the option you turned on with 
setsockopt.

If you want a code sample, look in ntpd/ntp_packetstamp.c from ntpsec.
  https://gitlab.com/NTPsec/ntpsec


> What about receive offload?

I would assume that would maintain the same API.

There is also PTP, IEEE-1588 (I think).
I haven't gone down that rathole.


> I've seen cards (ethtool) that support several time options - what are  they
> and how do I use them? 

I'm not sure which options you are referring to.

The usual tangle with timing packets is that the hardware defaults to batching 
interrupts to avoid using lots of CPU cycles getting into and back out of the 
interrupt routine.   coalesce is the buzzword.  That adds a delay between the 
packet arrival and the time the interrupt line goes active.

Other time related options may be referring to PTP support.  The general idea 
with PTP is to have hardware do the timestamping, AND for routers/switches to 
update a slot in the packet when the packet is waiting around on the box.

You can't have boxes on the internet update packets if you are interested in 
security.


-- 
These are my opinions.  I hate spam.




[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread Magnus Danielson via time-nuts

Hi,

On 2021-12-14 17:26, Steven Sommars wrote:

The Gaithersburg servers are accurate.  This plot shows Gaithersburg
time-e-g.nist.gov for the current month.
[image: image.png]
My monitoring client is located near Chicago and is Stratum-1 GPS sync'd.
Typical round-trip time to Gaithersburg is ~27 msec.
On 2021-12-07 a few monitoring polls saw RTT of ~100 msec.  This changes
the computed offset from
~0 msec to ~40 msec ((100-27)/2). Such transient increases are
often called  "popcorn spikes"
Many NTP clients including ntpd and chrony contain logic that identifies
and suppresses these outliers
Further, Gaithersburg is subject to fiber-cut outages and other planned &
unplanned network outages.
If you look carefully at the diagram, you can see a brief outage beginning
at about 2021-12-08 03:00 UTC.

I have other monitoring clients and can produce similar diagrams from other
locations.  A monitoring client
in Japan saw this for the same Gaithersburg server:
[image: image.png]
The delay spikes occur at different times and have different signs.  [The
2021-12-08 outage is still present]
See   http://leapsecond.com/ntp/NTP_Paper_Sommars_PTTI2017.pdf for a
discussion of why there are multiple
offset bands.  In the same paper there are examples of sustained high
delay, something that popcorn spike suppressors cannot eliminate.

Magnus summarized the situation.  Either asymmetric network delay or a
misbehaving NTP server can
cause the computed offset to be non-zero. The former is very common.
NTP servers, even stratum 1's
driven by GPS, are sometimes in error.


For sure. I've seen significant biases and jitter from bad servers. I 
just had to save the company from a worse situation, as the IT folks 
wanted to set up a new server in a virtualized machine. Having multiple 
proper machines in house, it only took a few fixes to set one up.


Also, I assume that NIST has monitoring; in fact, if you look back, I 
posted a link to an in-depth report from NIST on their current setup.


If actual offsets traceable to NIST's own machines are found, please 
report them to NIST; the contact would likely be Judah Levine.


Cheers,
Magnus


[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread alec

Okay so "man cmsg"

I've seen cards (ethtool) that support several time options - what are 
they and how do I use them?


Also cmsg doesn't make it clear what these auxiliary headers might 
actually be, so that doesn't really leave me able to use this?


Is it ALWAYS there? What about receive offload?

Alec

---


On 2021-12-14 20:57, Hal Murray wrote:

a...@unifiedmathematics.com said:
I've seen SO_TIMESTAMP and friends in ethtool but I have no idea what it is
or how it works, can you point me in the right direction please?

man 7 socket on a Linux box.  Then man recvmsg and man cmsg

There are several variations on various OSes.

The basic idea is that the OS input processing grabs a time stamp when the
packet arrives.  On Linux, that's done by the kernel thread that looks in the
packet headers to figure out which input queue the packet goes on.

The basic idea is that you get a time stamp when the packet arrives rather
than when the user program gets around to read/recv-ing it.



[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread Hal Murray


a...@unifiedmathematics.com said:
> I've seen SO_TIMESTAMP and friends in ethtool but I have no idea what it  is
> or how it works, can you point me in the right direction please? 

man 7 socket on a Linux box.  Then man recvmsg and man cmsg

There are several variations on various OSes.


The basic idea is that the OS input processing grabs a time stamp when the 
packet arrives.  On Linux, that's done by the kernel thread that looks in the 
packet headers to figure out which input queue the packet goes on.

The basic idea is that you get a time stamp when the packet arrives rather 
than when the user program gets around to read/recv-ing it.

-- 
These are my opinions.  I hate spam.


___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread Bob kb8tq
Hi

If you are on something like a cable modem (or similar), your network delays
will not average out to zero over any period of time. Sorting out delays on
“your end” from “their end” can be difficult. 

Bob

> On Dec 14, 2021, at 12:56 PM, Adam Space  wrote:
> 
> Thanks for your response and for your paper link. It is very interesting: I
> enjoyed looking it over and learned a lot. And you are definitely right, it
> is surely asymmetric network delay that is causing this problem.
> 
> Out of curiosity, since you monitor NIST Gaithersburg, if you were to
> average over the offsets for a whole month, what kind of value would you
> get? Surely it is close to zero but I am curious how close. Within 1ms?
> 
> Adam
> 
> On Tue, Dec 14, 2021 at 11:27 AM Steven Sommars 
> wrote:
> 
>> The Gaithersburg servers are accurate.  This plot shows Gaithersburg
>> time-e-g.nist.gov for the current month.
>> [image: image.png]
>> My monitoring client is located near Chicago and is Stratum-1 GPS sync'd.
>> Typical round-trip time to Gaithersburg is ~27 msec.
>> On 2021-12-07 a few monitoring polls saw RTT of ~100 msec.  This changes
>> the computed offset from
>> ~0 msec to ~40 msec ((100-27)/2). Such transient increases are
>> often called  "popcorn spikes"
>> Many NTP clients including ntpd and chrony contain logic that identifies
>> and suppresses these outliers
>> Further, Gaithersburg is subject to fiber-cut outages and other planned &
>> unplanned network outages.
>> If you look carefully at the diagram, you can see a brief outage beginning
>> at about 2021-12-08 03:00 UTC.
>> 
>> I have other monitoring clients and can produce similar diagrams from other
>> locations.  A monitoring client
>> in Japan saw this for the same Gaithersburg server:
>> [image: image.png]
>> The delay spikes occur at different times and have different signs.  [The
>> 2021-12-08 outage is still present]
>> See   http://leapsecond.com/ntp/NTP_Paper_Sommars_PTTI2017.pdf for a
>> discussion of why there are multiple
>> offset bands.  In the same paper there are examples of sustained high
>> delay, something that
>> popcorn spike suppressors cannot eliminate.
>> 
>> Magnus summarized the situation.  Either asymmetric network delay or a
>> misbehaving NTP server can
>> cause the computed offset to be non-zero. The former is very common.
>> NTP servers, even stratum 1's
>> driven by GPS, are sometimes in error.
>> 
>> 
>> Steve Sommars
>> 
>> 
>> 
>> 
>> On Tue, Dec 14, 2021 at 4:21 AM Magnus Danielson via time-nuts <
>> time-nuts@lists.febo.com> wrote:
>> 
>>> Hi,
>>> 
>>> On 2021-12-14 02:26, Adam Space wrote:
 I'm not sure if anyone else uses the NIST's NTP servers, but I've
>> noticed
 that the offsets I'm getting from Gaithersburg servers seem to be
 really far off, like 40-50 ms off. This is pretty odd since they
>> usually
 have a 2 - 3 ms accuracy at worst.
 
 It is interesting to think about what is going on here. NIST has a
>>> secondary
 time scale
 <
>>> 
>> https://www.nist.gov/pml/time-and-frequency-division/time-services/utcnist-time-scale/secondary-utcnist-time-scales-and
 
 at Gaithersburg, maintained by a couple of caesium clocks that are
 typically kept within 20ns of UTC(NIST), i.e. their primary time scale
>> in
 Boulder. They also host their remote time transfer calibration service
>>> and
 their Internet Time Service (i.e. NTP servers) out of Gaithersburg.
 
 It seems highly unlikely that their time scale there is that far off.
>> One
 thing that immediately comes to mind is asymmetric network delays
>> causing
 this. I do think this has to be the reason for the large discrepancy,
>> but
 even so, it is an impressive feat of asymmetric path delays. The
>> maximum
 error in offset from a client to server due to asymmetric network
>> delays
>>> is
 one half of the delay. (This corresponds to one path being
>> instantaneous
 and the other path taking the entire delay time). When I query their
 servers, I am getting about a 45ms offset, and a delay of around 100ms.
 This would mean the maximum error due to asymmetric path delays is
>> around
 50ms--and less even if we're being realistic (one of the delays can't
 literally be zero). Basically, for the offset error to be accounted for
 primarily by asymmetric network delays, the delays would have to be
>>> *very*
 asymmetric.
>>> 
>>> For the asymmetry to be 45 ms, the difference between forward and
>>> backward path would need to be 90 ms, since the time error will be the
>>> difference in delay divided by 2. The round trip time is instead the sum
>>> of the two delays.
>>> 
>>> Now, as you observe this between two clocks with a time difference, the
>>> time difference adds onto the TE, but does not show up in the RTT.
>>> 
>>> So, 90 ms difference would fit, but delays would be 95 ms and 5 ms +/- 5
>>> ms, since we can trust the RTT to be unbiased. Now we 

[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread alec
I've seen SO_TIMESTAMP and friends in ethtool but I have no idea what it 
is or how it works, can you point me in the right direction please?


Alec

---


On 2021-12-14 18:12, Steven Sommars wrote:

On Tue, Dec 14, 2021 at 11:40 AM Michael Rothwell 
wrote:

I see some high offsets on my local monitoring (synced to time.google.com).
Especially time-e-b on ipv6.



This observation is correct and is caused by queuing delay on the NIST
server.  NIST NTP mode 3 requests
are apparently time-stamped at application level, rather than using
SO_TIMESTAMP or related techniques.
As an exercise, compare the NTP (UDP port 123) and TIME (UDP port 37)
delays.

Steve Sommars



[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread Adam Space
Thanks for your response and for your paper link. It is very interesting: I
enjoyed looking it over and learned a lot. And you are definitely right, it
is surely asymmetric network delay that is causing this problem.

Out of curiosity, since you monitor NIST Gaithersburg, if you were to
average over the offsets for a whole month, what kind of value would you
get? Surely it is close to zero but I am curious how close. Within 1ms?

Adam

On Tue, Dec 14, 2021 at 11:27 AM Steven Sommars 
wrote:

> The Gaithersburg servers are accurate.  This plot shows Gaithersburg
> time-e-g.nist.gov for the current month.
> [image: image.png]
> My monitoring client is located near Chicago and is Stratum-1 GPS sync'd.
> Typical round-trip time to Gaithersburg is ~27 msec.
> On 2021-12-07 a few monitoring polls saw RTT of ~100 msec.  This changes
> the computed offset from
> ~0 msec to ~40 msec ((100-27)/2). Such transient increases are
> often called  "popcorn spikes"
> Many NTP clients including ntpd and chrony contain logic that identifies
> and suppresses these outliers
> Further, Gaithersburg is subject to fiber-cut outages and other planned &
> unplanned network outages.
> If you look carefully at the diagram, you can see a brief outage beginning
> at about 2021-12-08 03:00 UTC.
>
> I have other monitoring clients and can produce similar diagrams from other
> locations.  A monitoring client
> in Japan saw this for the same Gaithersburg server:
> [image: image.png]
> The delay spikes occur at different times and have different signs.  [The
> 2021-12-08 outage is still present]
> See   http://leapsecond.com/ntp/NTP_Paper_Sommars_PTTI2017.pdf for a
> discussion of why there are multiple
> offset bands.  In the same paper there are examples of sustained high
> delay, something that
> popcorn spike suppressors cannot eliminate.
>
> Magnus summarized the situation.  Either asymmetric network delay or a
> misbehaving NTP server can
> cause the computed offset to be non-zero. The former is very common.
> NTP servers, even stratum 1's
> driven by GPS, are sometimes in error.
>
>
> Steve Sommars
>
>
>
>
> On Tue, Dec 14, 2021 at 4:21 AM Magnus Danielson via time-nuts <
> time-nuts@lists.febo.com> wrote:
>
> > Hi,
> >
> > On 2021-12-14 02:26, Adam Space wrote:
> > > I'm not sure if anyone else uses the NIST's NTP servers, but I've
> noticed
> > > that the offsets I'm getting from Gaithersburg servers seem to be
> > > really far off, like 40-50 ms off. This is pretty odd since they
> usually
> > > have a 2 - 3 ms accuracy at worst.
> > >
> > > It is interesting to think about what is going on here. NIST has a
> > secondary
> > > time scale
> > > <
> >
> https://www.nist.gov/pml/time-and-frequency-division/time-services/utcnist-time-scale/secondary-utcnist-time-scales-and
> > >
> > > at Gaithersburg, maintained by a couple of caesium clocks that are
> > > typically kept within 20ns of UTC(NIST), i.e. their primary time scale
> in
> > > Boulder. They also host their remote time transfer calibration service
> > and
> > > their Internet Time Service (i.e. NTP servers) out of Gaithersburg.
> > >
> > > It seems highly unlikely that their time scale there is that far off.
> One
> > > thing that immediately comes to mind is asymmetric network delays
> causing
> > > this. I do think this has to be the reason for the large discrepancy,
> but
> > > even so, it is an impressive feat of asymmetric path delays. The
> maximum
> > > error in offset from a client to server due to asymmetric network
> delays
> > is
> > > one half of the delay. (This corresponds to one path being
> instantaneous
> > > and the other path taking the entire delay time). When I query their
> > > servers, I am getting about a 45ms offset, and a delay of around 100ms.
> > > This would mean the maximum error due to asymmetric path delays is
> around
> > > 50ms--and less even if we're being realistic (one of the delays can't
> > > literally be zero). Basically, for the offset error to be accounted for
> > > primarily by asymmetric network delays, the delays would have to be
> > *very*
> > > asymmetric.
> >
> > For the asymmetry to be 45 ms, the difference between forward and
> > backward path would need to be 90 ms, since the time error will be the
> > difference in delay divided by 2. The round trip time is instead the sum
> > of the two delays.
> >
> > Now, as you observe this between two clocks with a time difference, the
> > time difference adds onto the TE, but does not show up in the RTT.
> >
> > So, 90 ms difference would fit, but delays would be 95 ms and 5 ms +/- 5
> > ms, since we can trust the RTT to be unbiased. Now we come to what is
> > physical possible, and 5 ms is 1000 km fiber delay. You can calculate
> > yourself from your location the minimum distance and thus delay. In
> > practice fiber is pulled not as straight as one would wish. I use at
> > least square root of 2 as multiplier, but many agree that this is still
> > optimistic and it can 

[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread Steven Sommars
On Tue, Dec 14, 2021 at 11:40 AM Michael Rothwell 
wrote:

> I see some high offsets on my local monitoring (synced to time.google.com
> ).
> Especially time-e-b on ipv6.
>

This observation is correct and is caused by queuing delay on the NIST
server.  NIST NTP mode 3 requests
are apparently time-stamped at application level, rather than using
SO_TIMESTAMP or related techniques.
As an exercise, compare the NTP (UDP port 123) and TIME (UDP port 37)
delays.
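For the port-37 side of that comparison: an RFC 868 TIME reply is just a 32-bit big-endian count of seconds since 1900-01-01 UTC. A sketch of decoding one (the epoch constant is the standard 1900-to-1970 offset of 2,208,988,800 seconds):

```python
import struct

SECONDS_1900_TO_1970 = 2208988800  # offset between the 1900 and Unix epochs

def decode_time_protocol(payload):
    """Decode an RFC 868 TIME reply (port 37) into a Unix timestamp."""
    (secs_since_1900,) = struct.unpack("!I", payload[:4])
    return secs_since_1900 - SECONDS_1900_TO_1970

# a server replying 2208988800 means 1970-01-01 00:00:00 UTC
unix_secs = decode_time_protocol(struct.pack("!I", SECONDS_1900_TO_1970))
```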

Steve Sommars


[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread Steven Sommars
The Gaithersburg servers are accurate.  This plot shows Gaithersburg
time-e-g.nist.gov for the current month.
[image: image.png]
My monitoring client is located near Chicago and is Stratum-1 GPS sync'd.
Typical round-trip time to Gaithersburg is ~27 msec.
On 2021-12-07 a few monitoring polls saw RTT of ~100 msec.  This changes
the computed offset from
~0 msec to ~40 msec ((100-27)/2). Such transient increases are
often called  "popcorn spikes"
Many NTP clients including ntpd and chrony contain logic that identifies
and suppresses these outliers
Further, Gaithersburg is subject to fiber-cut outages and other planned &
unplanned network outages.
If you look carefully at the diagram, you can see a brief outage beginning
at about 2021-12-08 03:00 UTC.
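The popcorn-spike arithmetic above is easy to check: if all of the extra delay in an RTT spike lands on one path, the computed offset shifts by half the extra round trip. Using the figures from the monitoring above:

```python
# nominal and spiked round-trip times from the observations above, in seconds
base_rtt = 0.027
spike_rtt = 0.100

extra = spike_rtt - base_rtt          # 73 ms of extra round-trip delay
worst_case_offset_shift = extra / 2   # if the whole spike is on one direction
# ~36.5 ms, matching the ~40 ms spikes visible in the plot
```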

I have other monitoring clients and can produce similar diagrams from other
locations.  A monitoring client
in Japan saw this for the same Gaithersburg server:
[image: image.png]
The delay spikes occur at different times and have different signs.  [The
2021-12-08 outage is still present]
See   http://leapsecond.com/ntp/NTP_Paper_Sommars_PTTI2017.pdf for a
discussion of why there are multiple
offset bands.  In the same paper there are examples of sustained high
delay, something that
popcorn spike suppressors cannot eliminate.

Magnus summarized the situation.  Either asymmetric network delay or a
misbehaving NTP server can
cause the computed offset to be non-zero. The former is very common.
NTP servers, even stratum 1's
driven by GPS, are sometimes in error.


Steve Sommars




On Tue, Dec 14, 2021 at 4:21 AM Magnus Danielson via time-nuts <
time-nuts@lists.febo.com> wrote:

> Hi,
>
> On 2021-12-14 02:26, Adam Space wrote:
> > I'm not sure if anyone else uses the NIST's NTP servers, but I've noticed
> > that the offsets I'm getting from Gaithersburg servers seem to be
> > really far off, like 40-50 ms off. This is pretty odd since they usually
> > have a 2 - 3 ms accuracy at worst.
> >
> > It is interesting to think about what is going on here. NIST has a
> secondary
> > time scale
> > <
> https://www.nist.gov/pml/time-and-frequency-division/time-services/utcnist-time-scale/secondary-utcnist-time-scales-and
> >
> > at Gaithersburg, maintained by a couple of caesium clocks that are
> > typically kept within 20ns of UTC(NIST), i.e. their primary time scale in
> > Boulder. They also host their remote time transfer calibration service
> and
> > their Internet Time Service (i.e. NTP servers) out of Gaithersburg.
> >
> > It seems highly unlikely that their time scale there is that far off. One
> > thing that immediately comes to mind is asymmetric network delays causing
> > this. I do think this has to be the reason for the large discrepancy, but
> > even so, it is an impressive feat of asymmetric path delays. The maximum
> > error in offset from a client to server due to asymmetric network delays
> is
> > one half of the delay. (This corresponds to one path being instantaneous
> > and the other path taking the entire delay time). When I query their
> > servers, I am getting about a 45ms offset, and a delay of around 100ms.
> > This would mean the maximum error due to asymmetric path delays is around
> > 50ms--and less even if we're being realistic (one of the delays can't
> > literally be zero). Basically, for the offset error to be accounted for
> > primarily by asymmetric network delays, the delays would have to be
> *very*
> > asymmetric.
>
> For the asymmetry to be 45 ms, the difference between forward and
> backward path would need to be 90 ms, since the time error will be the
> difference in delay divided by 2. The round trip time is instead the sum
> of the two delays.
>
> Now, as you observe this between two clocks with a time difference, the
> time difference adds onto the TE, but does not show up in the RTT.
>
> So, 90 ms difference would fit, but delays would be 95 ms and 5 ms +/- 5
> ms, since we can trust the RTT to be unbiased. Now we come to what is
> physical possible, and 5 ms is 1000 km fiber delay. You can calculate
> yourself from your location the minimum distance and thus delay. In
> practice fiber is pulled not as straight as one would wish. I use at
> least square root of 2 as multiplier, but many agree that this is still
> optimistic and it can be far worse.
>
> What can cause such delay in a network? In IP/MPLS, the routing
> typically does not care about forward and backward direction being the
> same. Rather, they trim it to shed the load, i.e. Traffic Engineering.
> That means that for a pair of nodes in that network, traffic can be sent
> over a shorter path in one direction and longer in the other. In
> addition, buffer fill levels can be high on a path, meaning that you end
> up in the end of a buffer for each router hop due to traffic load. Delay
> is a means to throttle TCP down in rate. Random Early Discard (RED) is
> meant to spread that evenly between streams to cause throttle earlier
> than dropping packets due to full 

[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread Magnus Danielson via time-nuts

Hi,

On 2021-12-14 02:26, Adam Space wrote:

I'm not sure if anyone else uses the NIST's NTP servers, but I've noticed
that the offsets I'm getting from Gaithersburg servers seem to be
really far off, like 40-50 ms off. This is pretty odd since they usually
have a 2 - 3 ms accuracy at worst.

It is interesting to think about what is going on here. NIST has a secondary
time scale

at Gaithersburg, maintained by a couple of caesium clocks that are
typically kept within 20ns of UTC(NIST), i.e. their primary time scale in
Boulder. They also host their remote time transfer calibration service and
their Internet Time Service (i.e. NTP servers) out of Gaithersburg.

It seems highly unlikely that their time scale there is that far off. One
thing that immediately comes to mind is asymmetric network delays causing
this. I do think this has to be the reason for the large discrepancy, but
even so, it is an impressive feat of asymmetric path delays. The maximum
error in offset from a client to server due to asymmetric network delays is
one half of the delay. (This corresponds to one path being instantaneous
and the other path taking the entire delay time). When I query their
servers, I am getting about a 45ms offset, and a delay of around 100ms.
This would mean the maximum error due to asymmetric path delays is around
50ms--and less even if we're being realistic (one of the delays can't
literally be zero). Basically, for the offset error to be accounted for
primarily by asymmetric network delays, the delays would have to be *very*
asymmetric.


For the asymmetry to be 45 ms, the difference between forward and 
backward path would need to be 90 ms, since the time error will be the 
difference in delay divided by 2. The round trip time is instead the sum 
of the two delays.
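This is the standard NTP arithmetic: from the four timestamps, offset = ((t2-t1)+(t3-t4))/2 and delay = (t4-t1)-(t3-t2), so the worst-case asymmetry error is delay/2. Plugging in the 95 ms / 5 ms split discussed below (illustrative values, clocks assumed in perfect agreement):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP client calculation.
    t1: client transmit, t2: server receive, t3: server transmit,
    t4: client receive (all in seconds)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

t1 = 0.000
t2 = t1 + 0.095   # slow forward path: 95 ms
t3 = t2           # negligible server turnaround, for simplicity
t4 = t3 + 0.005   # fast return path: 5 ms
offset, delay = ntp_offset_delay(t1, t2, t3, t4)
# offset comes out +45 ms purely from asymmetry; delay is the honest 100 ms
```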


Now, as you observe this between two clocks with a time difference, the 
time difference adds onto the TE, but does not show up in the RTT.


So, a 90 ms difference would fit, but the delays would then have to be 
95 ms and 5 ms +/- 5 ms, since we can trust the RTT to be unbiased. Now 
we come to what is physically possible: 5 ms corresponds to about 1000 km 
of fiber delay. You can calculate the minimum distance, and thus minimum 
delay, from your location yourself. In practice, fiber is not pulled as 
straight as one would wish. I use at least the square root of 2 as a 
multiplier, but many would agree that this is still optimistic and it can 
be far worse.
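The sanity check is straightforward, assuming light in fiber travels at roughly 2/3 of c (about 200,000 km/s) and applying the sqrt(2) slack factor mentioned above:

```python
# Rough fiber-distance check for a given one-way delay. The propagation
# speed and the sqrt(2) "fiber is not pulled straight" factor are the
# assumptions described in the text.
import math

C_FIBER_KM_PER_S = 200_000.0  # approx. group velocity of light in fiber

def plausible_distance_km(one_way_delay_s, slack=math.sqrt(2)):
    """Point-to-point distance a one-way delay could plausibly cover."""
    return one_way_delay_s * C_FIBER_KM_PER_S / slack

# 5 ms of one-way delay corresponds to ~1000 km of fiber, or roughly
# 700 km point-to-point once the slack factor is applied.
print(5e-3 * C_FIBER_KM_PER_S)     # 1000.0 (km of fiber)
print(plausible_distance_km(5e-3)) # ~707 (km point-to-point)
```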


What can cause such delay in a network? In IP/MPLS, the routing 
typically does not care about the forward and backward directions being 
the same. Rather, paths are trimmed to shed load, i.e. Traffic 
Engineering. That means that for a pair of nodes in the network, traffic 
can be sent over a shorter path in one direction and a longer path in the 
other. In addition, buffer fill levels can be high on a path, meaning 
that under traffic load you end up at the back of a buffer at each router 
hop. Delay is a means of throttling TCP down in rate. Random Early 
Discard (RED) is meant to spread that evenly between streams so that 
throttling kicks in earlier than packets being dropped due to full 
buffers, but it still means dropping packets, and that affects UDP 
traffic too. MPLS-TE then tries to work on that at a secondary level.


With that, depending on your actual distance, which I do not know, it 
becomes fuzzy whether the network or the servers have the asymmetry. If 
you have enough distance, then some of the time error cannot be allocated 
to network asymmetry, as the shorter path delay has a physical floor. 
That remainder then needs to be allocated to clock errors.


All this is a result of having three unknowns and two measurements: you 
cannot fully resolve the equation system. It needs aiding. Having the 
right time on one end does not help if one attempts to know the time 
error of the other end.
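The underdetermination is easy to demonstrate: many combinations of forward delay, backward delay, and true clock offset produce exactly the same measured offset and RTT. The scenarios below are made-up values chosen to match the ~45 ms / ~100 ms case:

```python
# Three unknowns (d_forward, d_backward, true clock offset), two
# measurements (offset, RTT): the system cannot be resolved without aid.

def measured(d_f, d_b, true_offset):
    """What the NTP client sees for a given (assumed) scenario."""
    offset = true_offset + (d_f - d_b) / 2.0
    rtt = d_f + d_b
    return offset, rtt

# All of these look identical to the client: 45 ms offset, 100 ms RTT.
scenarios = [
    (95e-3,  5e-3,  0.0),    # pure path asymmetry, server clock perfect
    (50e-3, 50e-3, 45e-3),   # symmetric path, server clock 45 ms off
    (70e-3, 30e-3, 25e-3),   # a bit of both
]
for s in scenarios:
    print(measured(*s))
```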


It would help if you could add observations from other locations near 
Gaithersburg, network-wise.



Is anyone else experiencing the same thing?


Which makes this question very relevant. Measuring with less of the 
network's bias and noise may provide a clearer answer about the 
Gaithersburg servers.


Cheers,
Magnus
___
time-nuts mailing list -- time-nuts@lists.febo.com -- To unsubscribe send an 
email to time-nuts-le...@lists.febo.com
To unsubscribe, go to and follow the instructions there.


[time-nuts] Re: NIST NTP servers way off for anyone else?

2021-12-14 Thread Sanjeev Gupta
-- 
Sanjeev Gupta
+65 98551208 http://www.linkedin.com/in/ghane


On Tue, Dec 14, 2021 at 5:41 PM Adam Space  wrote:

> I'm not sure if anyone else uses the NIST's NTP servers, but I've noticed
> that the offsets I'm getting from Gaithersburg servers seem to be
> really far off, like 40-50 ms off. This is pretty odd since they usually
> have a 2 - 3 ms accuracy at worst.
>
> 

Is anyone else experiencing the same thing?
>

From Singapore:

root@netmon2:~# ntpq -pu
     remote           refid       st t when poll reach    delay   offset   jitter
=================================================================================
-robusta.dcs1.bi 210.23.25.77     2  8   11   64  377  253.57us 204.53us 27.325us
-time-a-g.nist.g .NIST.           1  u    -  128  377  230.06ms -2.252ms 129.46us
-time-e-wwv.nist .NIST.           1  u   12   64  377  198.49ms -2.693ms 92.647us
 3.sg.pool.ntp.o .POOL.          16  p    -  256    0       0ns      0ns    238ns
*210.23.25.77    .GPS.            1  u    8  128  377  1.2021ms 41.357us 412.55us
+122.11.221.24   210.23.25.77     2  u   15   64  377  1.9093ms -84.42us 175.80us
+178.128.28.21   87.10.119.42     3  u   30   64  377  1.1845ms -70.67us 62.820us
-time.cloudflare 10.23.11.254     3  u  100  128  377  38.403ms -2.658ms 89.425us
+t1.time.sg3.yah 106.10.133.18    2  u   25   64  377  1.3968ms -468.5us 247.94us

Note the time-e-wwv server is over IPv6