Re: Longer leap second notice, was: Where the responsibility lies

2006-01-03 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Ed Davies writes:
Poul-Henning Kamp wrote:
 If we can increase the tolerance to 10sec, IERS can give us the
 leapseconds with 20 years notice and only the minority of computers
 that survive longer than that would need to update the factory
 installed table of leapseconds.

PHK can reply for himself here but, for the record, I think RS's
reading of what he said is different from mine.  My assumption is
that PHK is discussing the idea that leaps should be scheduled many
years in advance.  They should continue to be single second leaps -
just many more would be in the schedule pipeline at any given
point.

Obviously, the leap seconds would be scheduled on the best available
estimates but as we don't know the future rotation of the Earth this
would necessarily increase the tolerance.  In theory DUT1 would be
unbounded (as it sort of is already) but PHK is assuming that there'd
be some practical likely upper bound such as 10 seconds.

Am I right in this reading?

yes.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Longer leap second notice, was: Where the responsibility lies

2006-01-03 Thread Rob Seaman

On Jan 3, 2006, at 4:22 PM, Poul-Henning Kamp wrote:


In message [EMAIL PROTECTED], Ed Davies writes:

Poul-Henning Kamp wrote:

If we can increase the tolerance to 10sec, IERS can give us the
leapseconds with 20 years notice and only the minority of computers
that survive longer than that would need to update the factory
installed table of leapseconds.


PHK can reply for himself here but, for the record, I think RS's
reading of what he said is different from mine.  My assumption is
that PHK is discussing the idea that leaps should be scheduled many
years in advance.  They should continue to be single second leaps -
just many more would be in the schedule pipeline at any given
point.

Obviously, the leap seconds would be scheduled on the best available
estimates but as we don't know the future rotation of the Earth this
would necessarily increase the tolerance.  In theory DUT1 would be
unbounded (as it sort of is already) but PHK is assuming that there'd
be some practical likely upper bound such as 10 seconds.

Am I right in this reading?


yes.


I'm willing to entertain any suggestion that preserves mean solar
time as the basis of civil time.  One could view this notion as a
specific scheduling algorithm for leap seconds.  My own ancient
proposal (http://iraf.noao.edu/~seaman/leap) was for a tweak to the
current algorithm that would minimize the excursions between UTC and
UT1.  This suggestion is more than a tweak, of course, since it would
require increasing the 0.9s limit.  One could imagine variations,
however, with sliding predictive windows to balance the maximum
excursion against the look ahead time.  One is skeptical of any
advantage to be realized over the current simple leap second policy.
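The sliding-window idea above can be illustrated with a toy scheduler. This is a sketch, not anyone's actual proposal: it assumes a constant predicted slowing of the Earth (`daily_drift_ms`, UT1 losing that many milliseconds per day against atomic time) and inserts single positive leap seconds whenever the predicted |UT1 - UTC| would exceed a chosen tolerance. Integer milliseconds are used to keep the arithmetic exact.

```python
def schedule_leaps(daily_drift_ms, horizon_days, threshold_ms=500):
    """Toy leap scheduler: walk day by day through a prediction horizon,
    inserting a 1 s positive leap whenever predicted |UT1 - UTC| would
    exceed threshold_ms.  Returns the day indices of scheduled leaps."""
    leaps = []
    dut1_ms = 0  # predicted UT1 - UTC, integer milliseconds
    for day in range(horizon_days):
        dut1_ms -= daily_drift_ms  # Earth running slow: UT1 falls behind
        if abs(dut1_ms) > threshold_ms:
            leaps.append(day)
            dut1_ms += 1000  # a positive leap pulls UTC back toward UT1
    return leaps

# With a (hypothetical) steady drift of 2 ms/day over a 20-year horizon,
# single-second leaps land at regular intervals, all known in advance.
twenty_years = schedule_leaps(2, 7305)
```

Raising `threshold_ms` toward 10 000 while lengthening the horizon trades a larger DUT1 excursion for more advance notice, which is exactly the balance being discussed; real Earth-rotation prediction is of course far less regular than a constant drift.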

I continue to find the focus on general purpose computing
infrastructure to be unpersuasive.  If we can convince hardware and
software vendors to pay enough attention to timing requirements to
implement such a strategy, we can convince them to implement a more
complete time handling infrastructure.  This seems like the real goal
- one worthy of a concerted effort.  Instead of trying to escape from
the entanglements of this particular system requirement, why don't we
focus on satisfying it in a forthright fashion?

There is also the - slight - issue that we aren't only worried about
computers.  There is a heck of a lot of interesting infrastructure
that should be included in the decision making envelope.

In general, the strategy you describe could also be addressed as an
elaboration on the waveform we are attempting to model with our
clocks.  Not a constant cadence like tick-tick-tick-tick, that is,
but tick-tick-tock-tick.  I do think there might be some interesting
hay to be made by generalizing our definition of a clock to include
quasi-periodic phenomena more complicated than a once-per-second
delta function.  Would give us some reason to explore the Fourier
domain if nothing else.

Rob Seaman
National Optical Astronomy Observatory


Re: Longer leap second notice, was: Where the responsibility lies

2006-01-03 Thread Warner Losh
 I continue to find the focus on general purpose computing
 infrastructure to be unpersuasive.  If we can convince hardware and
 software vendors to pay enough attention to timing requirements to
 implement such a strategy, we can convince them to implement a more
 complete time handling infrastructure.  This seems like the real goal
 - one worthy of a concerted effort.  Instead of trying to escape from
 the entanglements of this particular system requirement, why don't we
 focus on satisfying it in a forthright fashion?

As someone who has fought the battles, I can tell you that a simple
table is 10x or 100x easier to implement than dealing with parsing the
data from N streams.  Sure, it limits the lifetime of the device, but
a 20 year limit is very reasonable.

I had one system that worked over the leap second correctly, even
though the code to parse the data from this specific brand of GPS
receiver hadn't been written yet.  It worked because it knew about the
leap second in a table that we'd included on our flash as a fallback
when we didn't know anything else.  If we could have a table for the
next 20 years, there'd be no need to even write the code to get it from
the GPS stream :-).
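A minimal sketch of the fallback-table idea: a static, factory-installed table mapping effective dates to the TAI - UTC offset, consulted when no live source (GPS stream, NTP, etc.) has supplied anything newer. The two entries shown are a real excerpt (the offset was 32 s from 1999-01-01 and became 33 s with the leap second at the end of 2005); a shipped device would carry the full table plus any leaps announced for its expected lifetime.

```python
import bisect
from datetime import datetime, timezone

# Factory-installed fallback table: (effective UTC instant, TAI - UTC in
# seconds).  Excerpt only; entries sorted by date.
LEAP_TABLE = [
    (datetime(1999, 1, 1, tzinfo=timezone.utc), 32),
    (datetime(2006, 1, 1, tzinfo=timezone.utc), 33),
]

def tai_minus_utc(when):
    """Look up TAI - UTC at a given UTC instant from the static table.
    For instants before the first entry, fall back to the earliest
    known offset as a best guess."""
    dates = [d for d, _ in LEAP_TABLE]
    i = bisect.bisect_right(dates, when)
    if i == 0:
        return LEAP_TABLE[0][1]
    return LEAP_TABLE[i - 1][1]
```

The lookup is a handful of lines against a sorted list, which is the point being made: compared with parsing leap announcements out of N different receiver protocols, a table with enough entries to outlive the device is vastly simpler to implement and to test.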

I know you aren't persuaded by such arguments.  I find your
dismissive attitude towards software professionals who have
implemented a complete leap second handling infrastructure, with
pluggable sources for leap seconds, rather annoying :-(

Warner