On 2015-06-01 02:46 AM, Poul-Henning Kamp wrote:
--------
In message <[email protected]>, Brooks Harris writes:
Multiply this by 250 million [1] PCs still happily running XP
and you can better understand why Microsoft hasn't been that
interested in leap seconds, NTP, or participating in the hh:59:60
timestamp nightmare.
Yes, they've got a very large number of badly administered systems in
the field. In more tightly administered systems it can be done better.
But it's all "good enough" for current purposes.
That's not as obvious as you seem to think.
I meant for typical deployments, that is, most Windows machines are
sitting at home, on a desk at work, and so on. The timekeeping need only
be good enough to keep timestamps moving forward, and it works well
enough for Windows to have achieved a commanding market share in those
areas. Windows machines in data centers or in more tightly administered
environments may be keeping better time.
A lot of Windows machines are doing things where you would expect
people to care about leap-seconds: nuclear power plant control
systems, Air Traffic Control computers, surgery robots, patient
monitors, power grid disturbance detectors, etc. etc. etc.
In many of those uses the PC is not doing the mission-critical timing.
No event-driven multitasking OS can do precise timing - you need a
real-time OS or hardware/firmware for that. Windows on a PC can't keep
highly precise or accurate time by itself - it needs some sort of
hardware assist. And even then its timestamps are subject to the
behavior of the specific hardware and the OS's thread scheduler.
Fact is that most of the people involved in these systems have no idea
what a leap-second is, and more crucially: Once they learn that, they
have no idea what the system they designed will do when one happens.
It would make sense that, like Google and Amazon, their in-house
data centers would want to handle leap seconds more precisely and
deterministically. But note that all three companies have decided to
jump or smear time instead of implementing a true leap second.
As I understand it, it's not that they are interested in "precise" or
"accurate" time - they are interested in smoothing over the Leap Second
to avoid problems potentially caused by the Leap Second jump in the many
OSs running in their data centers.
They are very much interested in both *precise* and *accurate* time,
that's why they have to do something in the first place.

Sure, but I don't think that's true. The reason they "smear" is to hide
the Leap Second from parts of the system that might have a bug evoked by
the change. It's not done for accurate time - in fact it explicitly
compromises accurate time to protect the system from failures.
Time, technology and leaping seconds
http://googleblog.blogspot.com/2011/09/time-technology-and-leaping-seconds.html
If they were not interested in good timekeeping, they could just
let the computers free-run their clocks and pretend this is the 1980s.
And yes, the smoothing and ramping and steps are all attempts to
win predictability at the expense of accuracy, when faced with a
huge amount of software written by the kind of people mentioned above.
This is not something they are happy about doing, much less proud
of doing; weighing the risk of "heterogeneous" leap-second handling
against the risk of being up to half a second wrong about time for
most of a day, they picked the second risk.

Right, as above.
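The kind of smear being described can be sketched in a few lines. This is
only an illustrative model, not Google's or Amazon's actual implementation;
the 20-hour window and the linear ramp are assumptions made up for the
example.

```python
# Illustrative sketch of a linear "leap smear" for one inserted leap
# second. The window length (20 hours) and the linear ramp are assumed
# for the example - real deployments have used other shapes and spans.

def smeared_label(elapsed: float, window: float = 20 * 3600.0) -> float:
    """Clock label to report after `elapsed` true seconds since the
    smear window opened.

    During the window the clock runs at rate window / (window + 1), so
    after window + 1 true seconds it has emitted only `window` seconds
    of labels: the inserted second is absorbed and a 23:59:60 label
    never appears. The label is continuous and strictly monotonic, but
    up to one second behind true time inside the window.
    """
    if elapsed <= window + 1.0:
        return elapsed * window / (window + 1.0)
    return elapsed - 1.0  # after the window: exactly the leap second behind
```

Halfway through the window the served time is about half a second slow,
which is exactly the "up to half a second wrong about time" trade-off
described above: accuracy is deliberately sacrificed for predictability.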
The failures folks are frightened of are bugs evoked by the Leap Second.
At least some of these are just "stupid" bugs, like threading races when
writing the Leap Second event to the system log, not basic timekeeping
calculation errors. Even if all parts of the system implemented POSIX and
NTP correctly, the timekeeping would not reflect UTC correctly, because
neither POSIX nor NTP does that anyway, but the systems wouldn't hang or
crash. As it is, they have to "smear" to minimize the problems.
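The point about POSIX is easy to demonstrate: POSIX time is defined so
that every day is exactly 86400 seconds long, so an inserted leap second
such as 2015-06-30 23:59:60 simply has no timestamp of its own.

```python
import calendar

# POSIX time_t counts exactly 86400 seconds per calendar day, by
# definition, so the leap second 2015-06-30 23:59:60 is invisible:
# midnight 2015-07-01 lands exactly 86400 POSIX seconds after
# midnight 2015-06-30, even though that UTC day was 86401 seconds long.
start_of_june_30 = calendar.timegm((2015, 6, 30, 0, 0, 0))
start_of_july_1 = calendar.timegm((2015, 7, 1, 0, 0, 0))
print(start_of_july_1 - start_of_june_30)  # 86400, not 86401
```

So even a fully conformant POSIX system cannot represent the leap second;
the best it can do is step, stall, or smear across it.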
-Brooks
_______________________________________________
LEAPSECS mailing list
[email protected]
https://pairlist6.pair.net/mailman/listinfo/leapsecs