> This idea pushes extra complexity into every implementation of low
> level kernel-space software, firmware, and hardware. That's nice as a
> policy for full employment of programmers, but it's hard to justify by
> any other metric. Instead those low level places should be as simple
> as possible, and that means making the underlying precision time
> scale, and thus any broadcast distributions of a precision time scale,
> as simple as possible.
How does this make those things more complex, though? Those things are already required both to know about leap seconds and to adjust the time for them, in both directions (adding and removing, though adding is the only direction that has ever been *tested*). Increasing the frequency of the announcements doesn't really add complexity there, except where the code was already deficient (e.g. it doesn't currently handle removing a leap second, which is a bug).

If you wanted to do precise measurement of the time between two dates, you'd have to know the number of leap seconds in between, but a) that's easily done at a higher level than the kernel, and b) we already have to do that for the existing leap seconds; adding more data points to the math doesn't actually make it more complex (see the sketch below).

Am I missing something obvious where this would add complexity that isn't already there?

-j
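P.S. For concreteness, here's a minimal sketch of point (a): the kernel just keeps counting POSIX seconds, and a user-space routine applies a leap-second table after the fact. The table below is abridged to three real insertions, and elapsed_si_seconds() is a hypothetical helper, not any existing API; a real implementation would load the full, current list from something like tzdata's leap-seconds.list. Adding more announcements just means more rows in this table.

/* Sketch: true elapsed SI seconds between two UTC timestamps,
 * given a table of leap-second steps.  POSIX time_t ignores leap
 * seconds, so the naive difference must be corrected by the steps
 * that fall inside the interval. */
#include <stdio.h>
#include <time.h>

/* UTC instants (seconds since the epoch) at which a leap second
 * took effect; +1 = inserted, -1 = removed (never yet used).
 * Abridged for illustration. */
static const struct { time_t when; int step; } leap_table[] = {
    { 1341100800, +1 },   /* 2012-07-01 */
    { 1435708800, +1 },   /* 2015-07-01 */
    { 1483228800, +1 },   /* 2017-01-01 */
};

/* Naive difference plus one second per leap second inserted in the
 * interval (minus one per leap second removed). */
long long elapsed_si_seconds(time_t t0, time_t t1)
{
    long long diff = (long long)(t1 - t0);
    for (size_t i = 0; i < sizeof leap_table / sizeof leap_table[0]; i++) {
        if (leap_table[i].when > t0 && leap_table[i].when <= t1)
            diff += leap_table[i].step;
    }
    return diff;
}

int main(void)
{
    /* Interval spanning the first two table entries above. */
    time_t t0 = 1340000000, t1 = 1440000000;
    printf("naive: %lld s, with leap seconds: %lld s\n",
           (long long)(t1 - t0), elapsed_si_seconds(t0, t1));
    return 0;
}

More leap seconds, more frequently, only changes how many rows are in leap_table; the loop and the kernel underneath stay exactly as simple as they were.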
