On Mon, Jun 07, 2010 at 11:59:38PM -0700, Ask Bjørn Hansen wrote:
> I am happy to provide a dump of the measurement data to anyone with
> skills in statistics who wants to propose a better scoring
> mechanism.
> 
> I don't think this particular server is a big problem.  I am more
> interested in minimizing having servers in the pool that are or soon
> will be unavailable (or serve really bad time).
> 
> A few hundred ms isn't a big deal.    NTP clients will compensate
> and SNTP clients should be happy enough. 

In my experience, the most problematic servers are those with an
unstable clock frequency, like this one:

http://www.pool.ntp.org/scores/89.187.142.55

It seems that the clock is stepped every time the offset reaches the
128 ms step threshold of ntpd, so the reported offset stays quite low
and the server may not be marked as a falseticker by the client's
source selection algorithm.

I'd propose including the standard deviation of the measured offsets
in the score calculation.

Maybe something like this:
- calculate the mean and standard deviation of the samples collected
  over the last 24 hours
- score = 20 - mean * mean * 40 - stddev * 1000

Some combinations that would reach score 10: 0.5 s mean with zero
stddev, zero mean with 10 ms stddev, or 0.1 s mean with 9.6 ms stddev.
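A minimal sketch of the proposed formula (the function name and the
sample data are hypothetical, just to illustrate the idea; they are
not part of the pool monitoring code):

```python
# Sketch of the proposed score: penalize both a large mean offset and
# a large spread (stddev) of the offsets measured over 24 hours.
from statistics import mean, pstdev

def score(offsets):
    """offsets: measured offsets in seconds from the last 24 hours."""
    m = mean(offsets)
    sd = pstdev(offsets)
    return 20 - m * m * 40 - sd * 1000

# A server holding a steady 50 ms offset vs. one that keeps stepping
# its clock near the 128 ms limit (low mean, high stddev):
steady = [0.05] * 10
stepping = [0.12, -0.11, 0.10, -0.12, 0.11,
            -0.10, 0.12, -0.11, 0.10, -0.12]
```

With this weighting the stepping server scores far below the steady
one even though its mean offset is near zero, which is the behavior
the proposal is after.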

What do you think?

-- 
Miroslav Lichvar
_______________________________________________
pool mailing list
[email protected]
http://lists.ntp.org/listinfo/pool