On Wed 2003/01/29 15:53:08 CDT, William Thompson wrote
in a message to: [EMAIL PROTECTED]

>Any application which seeks to calculate the difference in time between two
>events recorded in UTC time needs to know if there are any leap seconds between
>the start and stop time.  For example, suppose you were studying solar flares,
>and analyzing some data taken in 1998, and you saw a burst of hard X-rays at
>23:59:53 UT on Dec 31, followed by a rise in EUV emission at 00:00:10 UT the
>next day.  You'd think that the delay time between the two would be 17 seconds,
>but it's really 18 seconds because of the leap second introduced that day.
>That's a vital difference for the scientific analysis of the data.
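
For concreteness, a minimal sketch in Python of what such a calculation
involves (not anyone's actual pipeline; it assumes a hand-maintained
excerpt of the leap-second table, where a real analysis would take the
table from an authoritative source such as the IERS bulletins):

    # Leap-second-aware difference of two UTC timestamps.  Python's
    # datetime is leap-blind, so the inserted seconds are added back in.
    from datetime import date, datetime, timezone

    # Excerpt only: UTC days whose final minute contained a 61st second.
    LEAP_SECOND_DAYS = {date(1997, 6, 30), date(1998, 12, 31)}

    def leap_seconds_between(t0, t1):
        """Count leap seconds inserted between UTC instants t0 and t1."""
        count = 0
        for d in LEAP_SECOND_DAYS:
            leap = datetime(d.year, d.month, d.day, 23, 59, 59,
                            tzinfo=timezone.utc)
            if t0 <= leap < t1:
                count += 1
        return count

    def elapsed_si_seconds(t0, t1):
        """Elapsed SI seconds between two UTC instants."""
        return (t1 - t0).total_seconds() + leap_seconds_between(t0, t1)

    hard_xray = datetime(1998, 12, 31, 23, 59, 53, tzinfo=timezone.utc)
    euv_rise  = datetime(1999, 1, 1, 0, 0, 10, tzinfo=timezone.utc)
    print(elapsed_si_seconds(hard_xray, euv_rise))  # 18.0, not the naive 17.0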

It is instructive to look at this from the 86400+epsilon point of view.

In that scenario there would be no leap seconds, but a proper calculation
of the time difference would always require epsilon to be considered.
Thus the time span would be 17+epsilon seconds.  However, the error
introduced by ignoring epsilon (currently about 2 ms) would be roughly
1 part in 10,000, rather than the 1 part in 18 incurred by ignoring the
leap second.
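
A back-of-the-envelope check of those two figures, taking epsilon as
exactly 2 ms purely for illustration:

    epsilon = 0.002          # assumed excess length of day, ~2 ms at present
    span = 17.0 + epsilon    # true interval across the day boundary
    print(epsilon / span)    # ~1.2e-4, about 1 part in 10,000
    print(1.0 / 18.0)        # ~5.6e-2, i.e. 1 part in 18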

Over longer timespans the fractional error would decrease progressively
and flatten off after about 18 months to roughly 1 part in 10^8.  Only
very high precision measurements would care about such an error.
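
Again as an order-of-magnitude check only, assuming roughly 2 ms of
epsilon accrues over each 86400-second day:

    epsilon_per_day = 0.002             # assumed ~2 ms of excess per day
    print(epsilon_per_day / 86400.0)    # ~2.3e-8, roughly 1 part in 10^8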

The only way to produce a large fractional error would be to difference
two times a few milliseconds on either side of the change of day, an
improbable occurrence.

Mark Calabretta
ATNF
