Re: [ntp:questions] First attempt GPSD/PPS -NTP time server

2008-01-27 Thread Hal Murray

 I am afraid I simply do not believe this. NMEA is lucky to get a ms not a
 usec. The offset on the NMEA should be a lot bigger than .001

The NMEA driver includes built-in PPS support.

-- 
These are my opinions, not necessarily my employer's.  I hate spam.

___
questions mailing list
questions@lists.ntp.org
https://lists.ntp.org/mailman/listinfo/questions


Re: [ntp:questions] comparing 2 NTP implementations

2008-01-27 Thread David Woolley
Folkert van Heusden wrote:

 I would like to compare 2 NTP implementations. What would be the best
 way?

The biggest problem is finding out the time on the machines without 
using NTP.  One approach is to use a simulator, but that assumes that 
the simulator correctly represents clock imperfections and changes in 
the environment.

It is also fairly easy to output a quite accurate indication of the time 
that the machine thinks it has, provided that you have a local (non-USB) 
parallel port.  However, the problem there is that the machines will not 
output at the same time, which means that you cannot use very simple 
hardware to measure the difference, but will need hardware that 
accurately logs both the actual time of the report and the time the 
reporter thought it had.  The clock for this probably doesn't have to be 
too accurate, provided that you monitor your source of true time 
frequently, but it does have to have good precision and predictable 
latency.

You cannot output at the same time because of indeterminate interrupt 
latency, and because modern systems interpolate between clock interrupts 
and correct the time by adjusting the interpolation, not by aligning the 
clock interrupts onto exact 100 ms boundaries.
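The measurement described above can be sketched in two pieces: the machine under test records what it thinks the time is while toggling a pin for the external logger, and the analysis pairs those self-reports with the logger's true timestamps. This is only an illustrative sketch; the pin-writing function is a stand-in for whatever parallel-port access your OS actually provides (e.g. /dev/port on Linux):

```python
import time

def pulse_and_stamp(write_pin):
    """Record what this machine thinks the time is, then pulse a
    parallel-port pin so an external logger can stamp the true time.
    write_pin is a placeholder for real port access."""
    stamp = time.time()
    write_pin(1)   # external hardware timestamps this rising edge
    write_pin(0)
    return stamp

def offsets(reported, true_times):
    """Per-pulse offset of the machine's clock against the logger's clock."""
    return [r - t for r, t in zip(reported, true_times)]
```

The logger's clock only needs precision and predictable latency, as noted: any constant bias cancels when comparing the two machines against the same logger.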

The other requirement, which has been noted in the recent chrony 
discussion, is that you must run the test with real workloads on the 
machines.

 I was thinking of configuring 7 upstream servers on these 2 physical
 servers and then on a third pc (which is also synced against these 7)
 check the difference?

You need to have physically distinct servers, so that clock variations 
are not correlated.  They ought to be in different environments. 
Alternatively, the server needs to directly read a high stability 
hardware clock and simulate perturbations.

 



Re: [ntp:questions] First attempt GPSD/PPS -NTP time server

2008-01-27 Thread David Woolley
Unruh wrote:
 
 He was referring solely to the NMEA signal, not the PPS. Some GPS receivers
 have no PPS.

In general those are not suited to accurate time transfer, and ones with 
PPS cost a lot less than the commodity car navigation devices, 
because they don't have loads of map data (the price difference is 
probably bigger in the US than in the UK, though).  The de facto standard 
for cheap GPS time sources is the Garmin GPS 18 LVC, which was about 
USD 60 or GBP 60 when I did a not very thorough search recently.

 
 I am afraid I simply do not believe this. NMEA is lucky to get a ms not a
 usec. The offset on the NMEA should be a lot bigger than .001

The clock driver is not reporting the time the NMEA sentence arrived, 
but rather the PPS time for that sentence.



Re: [ntp:questions] HBG down?

2008-01-27 Thread Folkert van Heusden
Hmmm, it seems the problem is somewhat different:
- pc 1 has fine reception
- pc 2 doesn't seem to receive even one single bit, with either the
  on-board serial port or the external one (a PCI board with serial ports)

I tested it by configuring a dcf-77 receiver in ntp on pc-1 (hbg uses the
dcf-77 protocol) and the same on pc-2, as well as with testdcf (from the
parseutil directory of the ntpd sources). pc-1 has a regular Gigabyte
965G-DS3 motherboard, pc-2 has a Via Epia-SP motherboard.

On Sun, Jan 27, 2008 at 02:46:40PM +0100, Folkert van Heusden wrote:
 Anyone out there with an HBG (swiss time signal) receiver? Are you also
 having very bad reception?
 
 
 Folkert van Heusden
 
 -- 
 Multitail is a tool for viewing log files and/or following the output
 of commands. Filtering, keyword colouring, merging, difference viewing
 (diff-view), etc.  http://www.vanheusden.com/multitail/
 --
 Phone: +31-6-41278122, PGP-key: 1F28D8AE, www.vanheusden.com


Folkert van Heusden

-- 
www.vanheusden.com/multitail - win a pie from multivlaai! Get
multitail included in Fedora Core, AIX, Solaris or HP/UX and win a
pie of your choice
--
Phone: +31-6-41278122, PGP-key: 1F28D8AE, www.vanheusden.com


Re: [ntp:questions] NTP vs chrony comparison (Was: oscillations in ntp clock synchronization)

2008-01-27 Thread Danny Mayer
David L. Mills wrote:
 It's easy to make your own Allan characteristic. Just let the computer 
 clock free-run for a couple of weeks and record the offset relative to a 
 known and stable standard, preferably at the smallest poll interval you 
 can. The PPS from a GPS receiver is an ideal source, but you have to 
 jerry-rig a means to capture each transition.
 
 Compute the RMS frequency differences, decimate and repeat. Don't take 
 the following seriously, I lifted it without considering context, but 
 that's the general idea. Be very careful about missing data, etc., as 
 that creates spectral lines that mess up the plot.
 
 p = w; r = diff(x); q = y; i = 1; d = 1;
 while (length(q) >= 10)
  u = diff(p) / d;
  x2(i) = sqrt(mean(u .* u) / 2);
  u = diff(r) / d;
  x1(i) = sqrt(mean(u .* u) / 2);
  u = diff(q);
  y1(i) = sqrt(mean(u .* u) / 2);
  p = p(1:2:length(p));
  r = r(1:2:length(r));
  q = q(1:2:length(q));
  m1(i) = d; i = i + 1; d = d * 2;
 end
 loglog(m1, x2 * 1e6, m1, x1 * 1e6, m1, y1 * 1e6, m1, (x1 + y1) * 1e6)
 axis([1 1e5 1e-4 100]);
 xlabel('Time Interval (s)');
 ylabel('Allan Deviation (PPM)');
 print -dtiff allan
 
 Dave

And for those of you who didn't recognize it, that's MatLab code.
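For anyone without MatLab, here is a loose Python transcription of that decimate-and-repeat loop for a frequency series. Like the snippet it mirrors, it is illustrative only, not a metrology-grade Allan deviation (a proper estimator would average adjacent samples rather than simply decimate):

```python
import math

def allan_dev(y, tau0=1.0):
    """Allan deviation of a frequency series y sampled at interval tau0,
    computed by successive decimation as in the MatLab sketch."""
    taus, adevs = [], []
    d = 1
    while len(y) >= 10:
        u = [b - a for a, b in zip(y, y[1:])]          # first differences
        adevs.append(math.sqrt(sum(v * v for v in u) / len(u) / 2))
        taus.append(d * tau0)
        y = y[::2]                                     # decimate by two
        d *= 2
    return taus, adevs
```

As Dave notes, gaps in the data will corrupt the result, so check for missing samples before feeding a record to this.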

Danny


Re: [ntp:questions] NTP vs chrony comparison (Was: oscillations in ntp clock synchronization)

2008-01-27 Thread Danny Mayer
David L. Mills wrote:
 Danny,
 
 Unless the computer clock intrinsic frequency error is huge, the only 
 time the 500-PPM kicks in is with a 100-ms step transient and poll 
 interval 16 s. The loop still works if it hits the stops; it just can't 
 drive the offset to zero.
 
 Dave

Yes, I found this out when my laptop stopped disciplining the clock and 
was complaining about the frequency limits, and I started digging into 
the code to figure out why.

Danny


Re: [ntp:questions] comparing 2 NTP implementations

2008-01-27 Thread brian . utterback
On Jan 26, 8:05 pm, [EMAIL PROTECTED] (Folkert van Heusden)
wrote:
 Hi,

 I would like to compare 2 NTP implementations. What would be the best
 way?
 I was thinking of configuring 7 upstream servers on these 2 physical
 servers and then on a third pc (which is also synced against these 7)
 check the difference?

There is no point even starting to do a comparison until you have defined
your objectives. Is the only thing you care about keeping the correct
time over a LAN using your own servers? Or do you want to discipline the
clock frequency and offset by using various servers over the Internet,
and provide time to other downstream clients over a WAN? Or something
in between?

If the former, then running ntpdate every minute might do the trick. If
the latter, you will be hard pressed to find anything better than the
NTP reference implementation from the University of Delaware. But unless
you know what you are trying to achieve, you will not know if you have
got it.

Brian Utterback



Re: [ntp:questions] comparing 2 NTP implementations

2008-01-27 Thread Ryan Malayter
On Jan 26, 7:05 pm, [EMAIL PROTECTED] (Folkert van Heusden)
wrote:
 Hi,

 I would like to compare 2 NTP implementations. What would be the best
 way?
 I was thinking of configuring 7 upstream servers on these 2 physical
 servers and then on a third pc (which is also synced against these 7)
 check the difference?


I assume you want to compare how the NTP implementations appear to
the rest of the local network. If so, the 3rd PC would really need a
GPS-disciplined clock to be considered a reliable witness. Even
then, the variations in network conditions the servers would
experience could be problematic, even if they are configured with the
same upstream reference servers.

And of course, you would want to flip-flop the two machines halfway
through the experiment to be sure the NTP implementation, and not the
differing hardware, is the source of any observed differences.

To really do it right would require a lot of laboratory equipment,
or so I am guessing.

Regards,
Ryan



Re: [ntp:questions] strange behaviour of ntp peerstats entries.

2008-01-27 Thread Unruh
[EMAIL PROTECTED] (Danny Mayer) writes:

Unruh wrote:
 [EMAIL PROTECTED] (Danny Mayer) writes:
 
 Unruh wrote:
 Brian Utterback [EMAIL PROTECTED] writes:

 Unruh wrote:
 David L. Mills [EMAIL PROTECTED] writes:
 You might not have noticed a couple of crucial issues in the clock 
 filter code.
 I did notice them all. Thus my caveat. However throwing away 80% of the
 precious data you have seems excessive.
 Note that the situation can arise that one can wait many more than 8
 samples for another one. Say sample i is a good one and remains the best
 for the next 7 tries. Sample i+7 is slightly worse than sample i and thus
 it is not picked as it comes in. But the next i samples are all worse than
 it. Thus it remains the filtered one, but is never used because it was not
 the best when it came in. This situation could keep going for a long time,
 meaning that ntp suddenly has no data to do anything with for many many
 poll intervals. Surely using sample i+7 is far better than not using any
 data for that length of time.
 
 On the contrary, it's better not to use the data at all if it's suspect. 
 ntpd is designed to continue to work well even in the event of losing 
 all access to external sources for extended periods.
 
 And this could happen again. Now, since the
 delays are presumably random variables, the chances of this happening are
 not great ( although under a condition of gradually worsening network the
 chances are not that small), but since one is running ntp for millions or
 billions of samples, the chances of this happening sometime becomes large. 

 
 There are quite a few ntpd servers which are isolated and once an hour 
 use ACTS to fetch good time samples. This is not rare at all.
 
 And then promptly throw them away because they do not satisfy the minimum
 condition? No, it is not best to throw away data no matter how suspect.
 Data is a precious commodity and should be thrown away only if you are damn
 sure it cannot help you. For example, let's say that the change in delay is
 .1 of the variance of the clock. The max extra noise that delay can cause
 is about .01, yet NTP will chuck it. Now if the delay is 100 times the
 variance, sure, chuck it. It probably cannot help you. The delay is a random
 process, non-gaussian admittedly, and its effect on the time is also a
 random process -- usually much closer to gaussian. And why was the figure of
 8 chosen (the best of the last 8 tries), why not 1? or 3? I suspect it
 came off the top of someone's head -- let's not throw away too much stuff,
 since it would make ntp unusable, but let's throw away some to feel
 virtuous. Sorry for being sarcastic, but I would really like to know what
 the justification was for throwing so much data away.

No, 8 was chosen after a lot of experimentation to ensure the best 
results over a wide range of configurations. Dave has adjusted these 
numbers over the years and he's the person to ask.


OK. The usual comment is that you throw away about 40% of the data using
the median filter (e.g. looking at the shm refclock program, where that
40% figure is attributed to him, and in ntp as well). But here one is
throwing away over 80% (i.e. keeping less than 1/6 of the data).
Running a very quick test on one system on my lan, I find that this changes
the variance of the offsets by about 10%. I.e., it makes only a marginal
difference to the variance. (And yes, there is a fair amount of
correlation between the offset fluctuation and the delay fluctuation,
correlation coefficient .5.) Actually the main thing this seems to do is
to make the variance in the delay times small, not the variance in the
offset.

I am also a little bit surprised that it is the delay that is used and not
the total round-trip time. As I seem to read it, the delay is (t4-t3+t2-t1),
i.e. it does not take into account the time spent within the far machine
(which the full round trip, t4-t1, would include), but only the propagation
delay. I would expect that the former might even be more important than the
latter, but that is a pure guess -- i.e. no measurements on even one system
to back it up.
Now it may be that on that rocky road to Manila the propagation delay is
by far the most important, but on a modern lan, especially with a low
propagation delay of hundreds of usec rather than 100s of msec, I wonder.
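For reference, a small sketch of the on-wire arithmetic under discussion, with the usual NTP timestamp naming (t1/t4 stamped by the client, t2/t3 by the server). The delay does indeed subtract the far machine's processing time (t3 - t2) from the full round trip (t4 - t1):

```python
def ntp_delay_offset(t1, t2, t3, t4):
    """t1: client send, t2: server receive, t3: server send, t4: client receive."""
    delay = (t4 - t1) - (t3 - t2)          # == t4 - t3 + t2 - t1
    offset = ((t2 - t1) + (t3 - t4)) / 2   # estimated client clock error
    return delay, offset
```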

I munged ntp's record_peer_stats to also print out p_off and p_del (i.e.
the immediate offset and delay of the current packet) and counted in the
output how often peer-off and p_off differ from each other, indicating a
thrown-away packet of data. I got 83% of the time.
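That 83% figure is roughly what a toy model predicts. The sketch below simulates a "lowest delay of the last 8 wins" rule with i.i.d. exponential delays -- it is not ntpd's actual filter code, which also ages samples and weights by dispersion -- and with eight independent samples the newest one is the minimum only about 1 time in 8, i.e. about 87% of samples are never used:

```python
import random
from collections import deque

def discard_rate(n=10000, depth=8, seed=1):
    """Fraction of polls whose newest sample is not the minimum-delay
    sample in the window, i.e. is effectively discarded."""
    random.seed(seed)
    window = deque(maxlen=depth)
    discarded = 0
    for _ in range(n):
        delay = random.expovariate(1.0)    # toy delay distribution
        window.append(delay)
        if min(window) != delay:
            discarded += 1
    return discarded / n
```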




Re: [ntp:questions] strange behaviour of ntp peerstats entries.

2008-01-27 Thread David L. Mills
Danny,

True; there is an old RFC or IEN that reports the results with varying 
numbers of clock filter stages, from which the number eight was the 
best. Keep in mind these experiments were long ago and with, as I 
remember, ARPAnet sources. The choice might be different today, but 
probably would not result in great improvement in the general cases. Note 
however that the popcorn spike suppressor is a very real Internet add-on.

The number of stages may have unforeseen consequences. The filter can 
(and often does) introduce additional delay in the feedback loop. The 
loop time constant takes this into account, so the impulse response is 
only marginally affected. So, the loop is really engineered for good 
response with one accepted sample in eight. Audio buffs will recognize 
that any additional samples only improve the response, since they amount 
to oversampling the signal. Audio buffs will also recognize the need for 
zeal in avoiding undersampling, which is why the poll-adjust algorithm 
is so squirrely.

Dave

Danny Mayer wrote:

 No, 8 was chosen after a lot of experimentation to ensure the best 
 results over a wide range of configurations. Dave has adjusted these 
 numbers over the years and he's the person to ask.

 Danny
