Jochen wrote such a nice synopsis that I can only add my vote for a
single write of the average drift over a long time period, i.e. something
like this:
a) Collect and average the values that would have been written every
hour, then write this out to the file system after 24 or 48 hours, i.e.
long enough to average out any day/night or AC changes.
b) Write a new value every week, but only if the average has changed so
much that the control loop could overshoot after a full restart.
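Very roughly, (a) combined with the threshold from (b) could look like
the sketch below. This is only an illustration, not the actual ntpd
code: drift_sample(), write_drift_file() and the file name are made-up
names, and a real implementation would hang off the existing loop
filter and shutdown paths.

/*
 * Illustrative sketch only, not ntpd code. drift_sample() is a
 * hypothetical hook called once per hour with the current frequency
 * estimate; write_drift_file() stands in for whatever routine
 * actually persists the drift value.
 */
#include <math.h>
#include <stdio.h>

#define AVG_HOURS    24      /* average over a full day/night cycle  */
#define WRITE_THRESH 10.0    /* ppm change that justifies a rewrite  */

static double sample_sum;    /* running sum of hourly drift samples  */
static int    sample_count;
static double last_written;  /* last value persisted to the file     */

static void write_drift_file(double ppm)
{
        FILE *fp = fopen("ntp.drift", "w");

        if (fp != NULL) {
                fprintf(fp, "%.3f\n", ppm);
                fclose(fp);
        }
}

/* Hourly hook: accumulate samples, and only write the 24 h average
 * when it has moved by at least WRITE_THRESH ppm since the last write. */
static void drift_sample(double ppm)
{
        double avg;

        sample_sum += ppm;
        if (++sample_count < AVG_HOURS)
                return;

        avg = sample_sum / sample_count;
        sample_sum = 0.0;
        sample_count = 0;

        if (fabs(avg - last_written) >= WRITE_THRESH) {
                write_drift_file(avg);
                last_written = avg;
        }
}

/* Tiny demo: two days of hourly samples around -12 ppm; only the
 * first day's average triggers a write, the second day is unchanged. */
int main(void)
{
        int h;

        for (h = 0; h < 48; h++)
                drift_sample(-12.0 + 6.0 * sin(2 * 3.14159265358979 * h / 24.0));
        return 0;
}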
The threshold in (b) can probably be calculated:
We know that a modern ntpd will handle a missing drift file much better
than a badly wrong one, so let's start with a drift file which is 100 ppm
in error:
This results in an offset of about 26 ms over 4 polling periods of 64
seconds each, so even with the default minpoll 6 value we will not go
outside the 128 ms step limit before active steering fixes the bad
drift value.
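For reference, the same arithmetic as a small self-contained check; the
"4 polling periods before steering takes over" window is the assumption
made above, not something measured:

/* Back-of-the-envelope check of the drift file numbers above. */
#include <stdio.h>

int main(void)
{
        double err_ppm   = 100.0;   /* assumed drift file error           */
        double poll_s    = 64.0;    /* minpoll 6 -> 64 s                  */
        int    npolls    = 4;       /* polls before steering takes effect */
        double offset_ms = err_ppm * 1e-6 * poll_s * npolls * 1000.0;
        double max_ppm   = 0.128 / (poll_s * npolls) * 1e6;

        printf("offset after %d polls: %.1f ms\n", npolls, offset_ms);
        printf("largest error still under 128 ms: %.0f ppm\n", max_ppm);
        return 0;
}

That prints 25.6 ms and 500 ppm, so under these assumptions even a drift
file that is several hundred ppm off would stay inside the step threshold.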
Terje
Jochen Bern wrote:
On 03/06/2015 10:35 AM, Harlan Stenn wrote:
A while ago we got a request from the embedded folks asking for a way
to suppress writing the drift file unless there was a "big enough"
change in the value to warrant it.
[...]
I'm wondering if we should just let folks specify a drift/wander
threshold: if the current value differs from the saved one by more than
that amount we write the file, and if it differs by less we don't
bother updating it. If folks are on a filesystem where the number of
writes doesn't matter, no value would be set (or we could use 0.0)
and it's not an issue.
Thoughts?
*Thoughts* I have, but no clear conclusion, I'm afraid ...
0. There's "limiting" the write ops, and then there's being all out to
avoid them. Saying that the value should *never* be written unless the
difference exceeds the threshold suggests the latter; is that actually
the request? From a sanity POV, *some* timeout (say, a month) and/or
writing triggered on orderly shutdowns sound like something we'd want
to do.
1. What about *appending* to the file (up to some length limit) instead
of overwriting the exact same bytes within it? Is that something that
flash RAM and its specialized fs'es can handle better?
2. What's actually the worst-case scenario here? Let's assume a unit
whose drift is correlated with the 24h temperature cycle, -6 ppm at
daybreak, +6 ppm in the early afternoon, and the limit is a delta of 10
ppm. Now, if the drift file gets initially written with an intermediate
value of abs(x)<4, it'll *never* get rewritten - but otherwise, there
will be two writes per day for all eternity, as the mechanism doesn't
allow the stored value to ever gravitate to the middle ground. Is that
something that should be taken care of?
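A quick numerical check of that scenario (the +/-6 ppm day/night cycle
and the 10 ppm threshold are the numbers from the example; everything
else here is made up):

/* Simulate a year of hourly drift samples against a plain
 * "rewrite only when |current - stored| > threshold" rule. */
#include <math.h>
#include <stdio.h>

int main(void)
{
        const double pi = 3.14159265358979;
        double thresh = 10.0;                  /* ppm */
        double initial[] = { 0.0, 3.0, 5.0 };  /* candidate first writes */
        int i, hour;

        for (i = 0; i < 3; i++) {
                double stored = initial[i];
                int writes = 0;

                for (hour = 0; hour < 24 * 365; hour++) {
                        /* +/-6 ppm temperature-driven daily cycle */
                        double drift = 6.0 * sin(2 * pi * hour / 24.0);

                        if (fabs(drift - stored) > thresh) {
                                stored = drift;
                                writes++;
                        }
                }
                printf("first write %+.1f ppm -> %d rewrites per year\n",
                       initial[i], writes);
        }
        return 0;
}

A start near the middle never gets rewritten, while a start near one of
the extremes keeps flip-flopping between the two extremes roughly twice
a day, which is exactly the asymmetry described above.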
3. What's the purpose of that stored value? IIUC ntpd only ever reads it
on startup, and the inherent assumption is that it is a fairly *RECENT*
drift value that ntpd can assume to be a proper approximation of the
*current* drift, and compensate for it. With the new mechanism, the
actual current drift is somewhere within +/-limit of the stored value.
Is that still useful, as in minimizing the offset that the starting ntpd
will pick up until it has obtained a drift estimate of its own? Or would
it be better to have it start with some sort of *average* value of the
drift, rather than a "current" value that actually isn't ... ?
Regards,
J. Bern
--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"