On Thu, 4 May 2017 07:53:24 -0500, Dana Mitchell wrote:

>This has been discussed before and is explained very well in this IBM techdoc:
>
>https://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102081
>
>Depending on whether you need to be exactly on time after the leap second 
>occurs, or can tolerate taking a while to 'smear' the time to the new value, 
>STP gives you the choice of spinning while waiting for an extra leap second 
>to be inserted (Category 1) or slowly steering the time (Category 2):
>
>STP will begin to slowly steer the mainframe time to the new value. It takes 
>approximately 7 hours for STP to steer out a one second delta.
>
That depends on *not* running the (E)TOD clock at the TAI rate and with
the 10-second offset that is otherwise IBM's recommendation.
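For a sense of scale, the quoted "approximately 7 hours for a one second delta" implies a frequency offset on the order of 40 parts per million. The 7-hour figure is from the techdoc; the rest of this sketch is just illustrative arithmetic:

```python
# Rough arithmetic behind the STP figure quoted above: steering out a
# 1-second delta in ~7 hours implies a frequency offset of ~40 ppm.
delta_seconds = 1.0
steer_interval = 7 * 3600           # ~7 hours, expressed in seconds
rate_ppm = delta_seconds / steer_interval * 1e6
print(round(rate_ppm, 1))           # roughly 39.7 ppm
```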

Amazon and Google have the pragmatic approach of a smear centered on the
leap second, making the maximum deviation from UTC a half second rather
than a full second.  I suppose this could be achieved with the HMC/STP by
using Google or Amazon as a reference, if they come to agree on the interval.
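A centered smear of that kind can be sketched in a few lines. The function name and the 24-hour window below are illustrative assumptions, not any vendor's actual API or parameters:

```python
# Minimal sketch of a linear smear centered on the leap second, in the
# spirit of the centered-smear approach described above. Hypothetical
# names; the 24-hour window is an assumption for illustration.
def smear_offset(t, leap_epoch, window=86400.0):
    """Seconds added to the smeared clock at time t (seconds).

    The smear runs from leap_epoch - window/2 to leap_epoch + window/2,
    so the smeared clock is never more than half a second from UTC.
    Before the window the offset is 0; by the end the full inserted
    second (-1 s) has been absorbed.
    """
    start = leap_epoch - window / 2
    if t <= start:
        return 0.0
    if t >= start + window:
        return -1.0
    return -(t - start) / window

# At the leap second itself the deviation peaks at half a second:
print(smear_offset(0.0, leap_epoch=0.0))   # -0.5
```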

Steering the TOD clock would break applications (are there any?) that depend
on microsecond accuracy of STCK.

Making multiple microscopic adjustments to CVTLSO throughout a smearing
interval has other sorts of complexity.

Why does a 24-hour adjustment for a leap year cause less disruption than
a single leap second?

-- gil

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO IBM-MAIN
