# Re: Introduction of long term scheduling

Rob Seaman wrote:

> ...  Obviously it would take at least N years to introduce a new
> reporting requirement of N years in advance (well, N years minus six
> months).
Sorry, maybe I'm being thick, but why?  Surely the IERS could announce
all the leap seconds in 2007 through 2016 inclusive this week, then
those for 2017 just before the end of this year, and so on.  We'd have
immediate ten-year scheduling.
> I suspect it would be exceptionally interesting to
> everyone, no matter what their opinion on our tediously familiar
> issues, to know how well these next seven or so leap seconds could be
> so predicted, scheduled and reported.
Absolutely, it would be very interesting to know.  I suspect, though,
that we (the human race) don't have enough data to put a solid upper
bound on the possible error, and that any probability distribution
would be little more than an educated guess.

Maybe a few decades of detailed study have not been enough to see the
wilder swings - to eliminate the unknown unknowns, if you like.
> If the 0.9s limit were to be
> relaxed - how much must that be in practice?  Are we arguing over a
> few tenths of a second coarsening of the current standard?  That's a
> heck of a lot different than 36,000 tenths.
Maybe we can turn this question round.  Suppose the decision were made
simply to schedule a positive leap second every 18 months for the next
decade: what would be the effect of the likely worst-case error?
First, how large could the worst-case error be?  Here's my guess.  If
it turned out that no leap seconds were required, we'd be about 6
seconds out.  If we actually needed one every nine months, we'd be out
by about 6 seconds the other way.  So the turned-around question would
be: assuming we are going to relax the 0.9-second limit, how much of
an additional problem would it be if it were increased by a factor of
10 or so, in the most likely worst case?
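The worst-case arithmetic above can be sketched numerically (a
back-of-the-envelope check, not part of the original discussion; the
18-month schedule and the nine-month worst case are the figures quoted
above, and the function name is my own):

```python
# Sketch: accumulated error if leap seconds are scheduled blindly every
# 18 months for a decade, compared with what Earth rotation actually
# required.  Positive means too many leap seconds were inserted.

MONTHS_IN_DECADE = 120

def accumulated_error(scheduled_every, needed_every, months=MONTHS_IN_DECADE):
    """Seconds of divergence between the fixed schedule and reality.

    Intervals are in months; needed_every=None means no leap seconds
    were actually required over the period.
    """
    inserted = months // scheduled_every
    needed = 0 if needed_every is None else months // needed_every
    return inserted - needed

# None actually needed: the schedule overshoots by ~6 seconds.
print(accumulated_error(18, None))  # 6
# One needed every nine months: the schedule undershoots by ~7 seconds.
print(accumulated_error(18, 9))     # -7
```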

As Rob has pointed out recently on the list, 1 second in time equates
to 15 seconds of arc in right ascension at the celestial equator for
telescope pointing.  Nine seconds of time is therefore 2.25 arc
minutes.  For almost all amateur astronomers this error would be
insignificant, as it's smaller than the field of view of a normal
eyepiece; more importantly, though, the telescope is usually aligned
by pointing at known stars rather than by setting the clock at all
accurately.  For the professionals I'm not so sure but, for context,
Hubble's coarse pointing system aims the telescope to an accuracy of
about 1 arc minute before handing off control to the fine guidance
sensors.
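The time-to-angle conversion is simple enough to write down (a sketch
using the 15 arcseconds-per-second figure from the text; the function
name is my own):

```python
# Sketch: clock error -> right-ascension pointing error at the
# celestial equator.  The sky turns 360 degrees (1,296,000 arcseconds)
# in one day of 86,400 seconds, i.e. 15 arcseconds of arc per second
# of time, as quoted above.

ARCSEC_PER_TIME_SECOND = 360 * 3600 / 86400  # = 15.0

def ra_error_arcmin(time_error_s):
    """Pointing error in arcminutes at the celestial equator."""
    return time_error_s * ARCSEC_PER_TIME_SECOND / 60

print(ra_error_arcmin(9))  # 2.25 arcmin, cf. Hubble's ~1 arcmin coarse pointing
```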

For celestial navigation on the Earth, a nine-second error in time
would equate to a 4.1 km error along the equator.  Worth considering.
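The navigation figure follows the same pattern (a sketch; the
40,075 km equatorial circumference is the standard value, not a number
from the post):

```python
# Sketch: clock error -> east-west ground-position error at the
# equator, as seen by a celestial navigator.  The equator (about
# 40,075 km) sweeps past the stars once per day of 86,400 seconds.

EQUATOR_CIRCUMFERENCE_KM = 40075.0
SECONDS_PER_DAY = 86400.0

def equatorial_position_error_km(time_error_s):
    """Position error in km at the equator for a given clock error."""
    return EQUATOR_CIRCUMFERENCE_KM / SECONDS_PER_DAY * time_error_s

# About 4.17 km for nine seconds -- i.e. roughly the 4.1 km above.
print(round(equatorial_position_error_km(9), 2))
```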

My guess is that there would be applications which would need to take
account of the difference but which currently don't.  Is it really
likely to be a problem, though?

Remember that this is not a secular error: by the end of, say, 2009
we'd be beginning to get an idea of how things were going, and would
be able to start feeding corrections into the following decade.

So, while it would be nice to know a likely upper bound on the
possible errors, is a back-of-an-envelope guess good enough?

Happy perihelion,

Ed.