On Fri, 20 Mar 2009 15:06:35 +0100, SCHARITZER Gerald wrote:
>
>Server Time Protocol Implementation Guide (Draft)
>Chapter 2 Operations
>2.2 Operations in an STP-only CTN
>2.2.6 Time management
>Leap second considerations
>
>http://www.redbooks.ibm.com/redpieces/abstracts/sg247281.html
>
><snip>
>Operating system and subsystem components use the STCK time format
>because this is not subject to either Leap Second Offset or time-zone
>offset changes. Two successive invocations of the Assembler TIME macro
>in STCK format yield different results, and the second result is later
>than the first result.
> ...
>During the implementation of a positive leap second offset change, z/OS
>becomes non-dispatchable for the duration of the delta between the
>current leap second offset and the new leap second offset in order to
>insert the delta between STCK time and UTC time.
>
Good enough. But there might remain an infinitesimal timing hazard.
Suppose that at 23:59:59.999... some process (which might be code
invoked by the TIME macro) does:
STCK X
* Now the leap second occurs; the process is interrupted; z/OS
* becomes non-dispatchable for one second, during which one second
* is added to CVTLSO, then the interrupted process is redispatched:
LG R0,X
SG R0,CVTLSO
... and the time conversion proceeds. But the value of CVTLSO
is one second too large to correspond to the STCK value, and
the converted time is 23:59:58.999... and might be out of order
with time stamps obtained during the fraction of a second prior
to the STCK. Swapping the order of STCK and access to CVTLSO
merely moves the problem to the other edge of the leap second.
The only solutions I see are:
AGAIN LG R0,CVTLSO
STCK X
CG R0,CVTLSO
BNE AGAIN
LCGR R0,R0
AG R0,X
... or disable interrupts for the sequence. (Does the TIME
macro code execute disabled anyway?) Either is an enormous
overhead for a hazard that persists only for the duration of
one instruction scarcely once a year. OTOH, if the problem
ever occurs, it's difficult to diagnose and even harder to
reproduce. I wonder what the TIME macro does? I don't have
the source code.
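The read-verify-retry sequence above can be sketched in C. This is a toy
single-threaded simulation, not real z/OS code: the names tod_clock and
leap_offset stand in for STCK and CVTLSO, a second is an invented 1000
ticks, and the leap second is forced to "land" right after the clock is
sampled so the race is reproducible:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the TOD clock (STCK) and CVTLSO; the
   names and the 1000-ticks-per-second granularity are made up for
   this sketch. */
static int64_t tod_clock = 0;     /* simulated TOD clock, in ticks     */
static int64_t leap_offset = 0;   /* simulated CVTLSO, in ticks        */
static int leap_pending = 0;      /* set to make the next read race    */

/* Simulated STCK.  When a leap second is pending, it is inserted just
   AFTER the clock is sampled: the clock jumps ahead by one second
   (the non-dispatchable interval) and the offset grows by the same
   delta. */
static int64_t stck(void) {
    int64_t t = ++tod_clock;
    if (leap_pending) {
        tod_clock += 1000;        /* the one-second non-dispatchable gap */
        leap_offset += 1000;      /* CVTLSO bumped by the delta          */
        leap_pending = 0;
    }
    return t;
}

/* The hazard: STCK first, then CVTLSO.  If the leap lands between the
   two reads, the offset is one second too large for the STCK value
   and converted time appears to run backwards. */
static int64_t utc_naive(void) {
    int64_t t = stck();
    return t - leap_offset;
}

/* The retry loop from the post: sample CVTLSO, then STCK, then check
   that CVTLSO has not changed; if it has, start over. */
static int64_t utc_retry(void) {
    int64_t off, t;
    do {
        off = leap_offset;
        t = stck();
    } while (off != leap_offset);
    return t - off;
}
```

The retry loop is the same pattern as a seqlock reader: the offset is
cheap to read twice, so the common case costs one extra compare, and the
loop only repeats during the leap second itself.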
>Not that I like the thought of the entire OS freezing for one (or even
>more) seconds just because of a change in the offset between UTC and
I've heard of a MICR reader whose mechanical operation is very
time-critical, for example.
>TAI. So the root of all evil in this case is using some kind of
>time-of-day timestamps rather than raw clock values, which simply count
>the ticks since 0001-01-01 00:00:00 (plus providing a programmable
>field).
>
Alternatively, one could say that the root of all evil is the
implementation of the TIME macro, which is incomplete in that it fails
to return the correct values of 23:59:60.hh during positive leap seconds.
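I don't know what the TIME macro actually does internally, but as a
minimal illustration of what "returning 23:59:60" would mean, here is a
toy formatter. It assumes a day with a single positive leap second at
its end, so second 86400 since midnight formats as 23:59:60 instead of
rolling over; a real implementation would of course need a leap second
table rather than this hard-wired special case:

```c
#include <stdio.h>

/* Toy time-of-day formatter for a day ending in a positive leap
   second: raw seconds 0..86399 format normally, and raw second 86400
   is the inserted leap second, 23:59:60.  Purely illustrative. */
static void format_tod(long raw, char out[9]) {
    long h, m, s;
    if (raw == 86400) {               /* the inserted leap second */
        h = 23; m = 59; s = 60;
    } else {
        h = raw / 3600;
        m = (raw % 3600) / 60;
        s = raw % 60;
    }
    snprintf(out, 9, "%02ld:%02ld:%02ld", h, m, s);
}
```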
>At least z/OS can handle simply running the hardware clock in sync with
>TAI and let the software do all the UTC, DST, leap year, leap second and
>local time interpretation, which is more than what is provided by
>Unix/POSIX time.
>
True. Actually, in some sense the hardware clock runs at TAI - 10 seconds.
To wit, IERS says TAI - UTC is currently 34 seconds; PoOp says 24; the
difference is 10. I still suspect that a smoothed UT1 might be a more
practical time convention for computers than UTC.
Thanks,
gil
----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: GET IBM-MAIN INFO
Search the archives at http://bama.ua.edu/archives/ibm-main.html