Re: A lurker surfaces

2007-01-01 Thread Ashley Yakeley

On Dec 30, 2006, at 17:41, Jim Palfreyman wrote:


The earlier concept of rubber seconds gives me the creeps and I'm
glad I wasn't old enough to know about it then!


I rather like the idea, though perhaps not quite the same kind of
rubber as was used.

I'd like to see an elastic civil second to which SI nanoseconds are
added or removed. Perhaps this could be done annually: at the
beginning of 2008, the length of the civil second for the year 2009
would be set, with the goal of approaching DUT1 = 0 at the end of 2009.
This would mean no nasty discontinuities, and would match the common
intuition that a second is a fixed fraction of a day. If NTP were to
serve up this sort of time, I think one's computer timekeeping would
be quite stable. And of course this would work forever, long after
everyone else is fretting over how to insert a leap hour every other
week, or whatever. Software should serve human needs, not the other
way around. Anyone needing fixed seconds should use TAI.
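
As a rough illustration of the arithmetic behind such an elastic second
(my own sketch, not anything specified beyond the paragraph above; the
function name and the choice of a single fixed rate for the whole year
are assumptions):

    # A minimal sketch, in Python, of setting a fixed civil-second length
    # for a target year from a prediction of DUT1 = UT1 - civil time at
    # that year's end if civil seconds were left at exactly 1 SI s.

    CIVIL_SECONDS_IN_YEAR = 365 * 86400   # 2009 is not a leap year

    def civil_second_length(predicted_dut1_at_year_end):
        """SI length of one civil second for the target year.

        A negative prediction (Earth running slow, civil time getting
        ahead of UT1) stretches the civil second; a positive one shrinks
        it.  The whole correction is spread evenly over the year.
        """
        return 1.0 - predicted_dut1_at_year_end / CIVIL_SECONDS_IN_YEAR

    # Example: a predicted shortfall of 0.6 s works out to roughly 19 SI
    # nanoseconds added to every civil second of the year.
    length = civil_second_length(-0.6)
    print("%+.1f ns per civil second" % ((length - 1.0) * 1e9))   # +19.0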

Actually I was going to suggest that everyone observe local apparent
time, and include location instead of time-zone, but I think that
would make communication annoying.

--
Ashley Yakeley


Happy New Year!

2007-01-01 Thread Rob Seaman

Rather than reply in detail to the points raised in the latest
messages - believe me, you've heard before what I was going to say
again - I'd simply like to wish everybody a happy new year.  I am
grateful to everybody who has ever contributed to this list and
consider it a mark of the importance of civil timekeeping that the
conversation continues.

Since there are new voices on the list, I might simply direct
interested readers to my own thoughts, unchanged at their core in
more than five years:

   http://iraf.noao.edu/~seaman/leap

In short, the current standard has a lot of life left in it.

That said, I have no problem whatsoever with schemes that lengthen
the six-month reporting requirement to several years.  Steve's
five-year plan, recently quoted again, or the decadal scheduling that
has become something of a standard talking point on the list, are
each already entirely legal under the standard if the 0.9 s limit on
DUT1 could be guaranteed that far in advance.

Perhaps someone on the inside could comment on the current state of
the art of multiyear predictions?  The most notable feature of leap
second scheduling has been the seven year gap from New Year's Eve
1998 to New Year's Eve 2005.  Otherwise figure 1 from my link above
(and attached below) shows a phenomenological slope close to 7 leap
seconds per decade.  The question is not whether significant
excursions from this general trend are seen - the question is how
well they can be predicted.  Looking at my figure 2 (you'll have to
click through for this one), one will see that a vast improvement in
the state of the art of making short-term predictions has occurred
since Spiro Agnew had his residence at the USNO.

Nobody should be surprised to learn that I will continue resolutely
to oppose the embarrassing and absurd notion of embargoing every 3600
leap seconds into a so-called leap hour.  Why 3600?  Since this
represents an intercalary period - one that appears only to promptly
disappear from the clocks of the world - why not any other random
number of seconds?  How about 1000?  Or 1066, to commemorate the
theft of UTC from its Greenwich origins just as the Normans stole
England from the Saxons who stole it from the Celts?  Some may join
me in thinking 666 might be the appropriate embargo.

Bottom line - nothing about the current standard forbids scheduling
(and reporting, of course) multiple leap seconds several years in
advance.  Obviously it would take at least N years to introduce a new
reporting requirement of N years in advance (well, N years minus six
months).  I suspect it would be exceptionally interesting to
everyone, no matter what their opinion on our tediously familiar
issues, to know how well these next seven or so leap seconds could be
so predicted, scheduled and reported.  If the 0.9 s limit were to be
relaxed, how much would it have to be relaxed in practice?  Are we
arguing over a few tenths of a second of coarsening of the current
standard?  That's a heck of a lot different than 36,000 tenths.

Rob Seaman
NOAO
--


Re: A lurker surfaces

2007-01-01 Thread M. Warner Losh
In message: [EMAIL PROTECTED]
Ashley Yakeley [EMAIL PROTECTED] writes:
: Software should serve human needs, not the other
: way around. Anyone needing fixed seconds should use TAI.

I think this idea would be harder to implement than the current
leap seconds.

There are many systems that need to display UTC externally but need
to operate on a TAI-like timescale internally.  Having a sliding
delta between them would be a nightmare.

Warner


Re: Introduction of long term scheduling

2007-01-01 Thread Ed Davies

Rob Seaman wrote:

...  Obviously it would take at least N years to introduce a new
reporting requirement of N years in advance (well, N years minus six
months).


Sorry, maybe I'm being thick, but why?  Surely the IERS could announce
all the leap seconds for 2007 through 2016 inclusive this week, then
those for 2017 just before the end of this year, and so on.  We'd have
immediate 10-year scheduling.


I suspect it would be exceptionally interesting to
everyone, no matter what their opinion on our tediously familiar
issues, to know how well these next seven or so leap seconds could be
so predicted, scheduled and reported.


Absolutely, it would be very interesting to know.  I suspect, though,
that we (the human race) don't actually have enough data to establish
a solid upper bound on the possible error, and that any probability
distribution would be little more than an educated guess.

Maybe a few decades of detailed study have not been enough to see the
wilder swings - to eliminate the unknown unknowns, if you like.


If the 0.9s limit were to be
relaxed - how much must that be in practice?  Are we arguing over a
few tenths of a second coarsening of the current standard?  That's a
heck of a lot different than 36,000 tenths.


Maybe we can turn this question round.  Suppose the decision were made
to simplistically schedule a positive leap second every 18 months for
the next decade; what would be the effect of the likely worst-case
error?  First, what could the worst-case error be?  Here's my guess.
If it turned out that no leap seconds were required, then we'd be 6
seconds out.  If we actually needed one every nine months, we'd be out
by about 6 seconds the other way.  So the turned-around question would
be: assuming we are going to relax the 0.9 second limit, how much of
an additional problem would it be if it were increased by a factor of
10 or so, in the most likely worst case?
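
A trivial sketch of that back-of-the-envelope arithmetic (the function
and the two extreme scenarios simply restate the guess above):

    # Scheduled minus actually-needed leap seconds after ten years, for a
    # fixed "one positive leap second every 18 months" schedule.

    def accumulated_error(years, scheduled_every_months, needed_every_months):
        """needed_every_months of None means the Earth needed none at all."""
        months = years * 12
        scheduled = months // scheduled_every_months
        needed = 0 if needed_every_months is None else months // needed_every_months
        return scheduled - needed

    print(accumulated_error(10, 18, None))   # +6: Earth needed none
    print(accumulated_error(10, 18, 9))      # -7: Earth needed one every 9 months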

As Rob has pointed out recently on the list, 1 second in time equates
to 15 seconds of arc in right ascension at the celestial equator for
telescope pointing.  Nine seconds in time is therefore 2.25 arc
minutes.  For almost all amateur astronomers this error would be
insignificant as it's smaller than their field of view with a normal
eyepiece but, more importantly, the telescope is usually aligned by
pointing at stars anyway rather than by setting the clock at all
accurately.  For the professionals I'm not so sure but, for context,
Hubble's coarse pointing system aims the telescope to an accuracy of
about 1 arc minute before handing off control to the fine guidance
sensors.

For celestial navigation on the Earth, a nine second error in time
would equate to a 4.1 km error along the equator.  Worth considering.
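
For anyone who wants to check those figures, the arithmetic in a few
lines (the 86400 s day and the 40,075 km equatorial circumference are
my round numbers, so the last digit wobbles):

    TIME_ERROR_S = 9.0

    # Telescope pointing: 360 degrees of hour angle per 24 h means
    # 15 arcsec of right ascension per second of time at the equator.
    arcsec_per_second_of_time = 360.0 * 3600.0 / 86400.0                      # 15.0
    print("%.2f arcmin" % (TIME_ERROR_S * arcsec_per_second_of_time / 60.0))  # 2.25

    # Celestial navigation: the equator moves about 0.46 km per second
    # of time.
    km_per_second_of_time = 40075.0 / 86400.0
    print("%.1f km" % (TIME_ERROR_S * km_per_second_of_time))                 # ~4.2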

My guess would be that there would be applications which would need
to take account of the difference but which currently don't.  Is it
really likely to be a problem, though?

Remember that this is not a secular error; by the end of, say, 2009
we'd be beginning to get an idea of how things are going and would be
able to start feeding corrections into the following decade.

So, while it would be nice to know a likely upper bound on the
possible errors, is a back of an envelope guess good enough?

Happy perihelion,

Ed.


Re: Introduction of long term scheduling

2007-01-01 Thread Steve Allen
On Mon 2007-01-01T17:42:11 +0000, Ed Davies hath writ:
 Sorry, maybe I'm being thick, but why?  Surely the IERS could announce
 all the leap seconds for 2007 through 2016 inclusive this week, then
 those for 2017 just before the end of this year, and so on.  We'd have
 immediate 10-year scheduling.

For reasons never explained publicly this notion was shot down very
early in the process of the WP7A SRG.  It would almost certainly
exceed the current 0.9 s limit, and in so doing it would violate the
letter of ITU-R TF.460.

The IERS may not be a single entity so much as a confederation of
organizations competing for scientific glory and using the umbrella to
facilitate funding from each of their national governments.  Even if
the IERS were monolithic they would have to obtain approval for such a
change from the ITU-R, IAU, IUGG, and FAGS.  Given the tri/quadrennial
meeting schedules it seems unlikely that the IERS could obtain
approval much before year 2010.

 Maybe we can turn this question round.  Suppose the decision was made
 to simplistically schedule a positive leap second every 18 months for
 the next decade, what would be the effect of the likely worst case
 error?  First, what could the worst case error be?

McCarthy pretty much answered this question in 2001 as I reiterate here
http://www.ucolick.org/~sla/leapsecs/McCarthy.html

 As Rob has pointed out recently on the list, 1 second in time equates
 to 15 seconds of arc in right ascension at the celestial equator for
 telescope pointing.
...
 For the professionals I'm not so sure but

Give us a few years of warning and I think we can cope.  No telescope
I know of uses ICRS; we're all still using FK5 and/or FK4.  That means
we astronomers already know (or at least ought to know *) that we all
have to do a software update.

 For celestial navigation on the Earth, a nine second error in time
 would equate to a 4.1 km error along the equator.  Worth considering.

The format of the almanacs would be changed along with the change
in UTC such that by including one more addition there would be
no worse error than now.  This would be a change much smaller in
magnitude than what the Admiralty did in 1833.

 Is it really likely to be a problem, though?

I think not.  It's hard to prove that it won't be.
None of the agencies involved has the funding to mount a survey
which would motivate all affected parties to investigate.

(*) While standing near the UTC poster at ADASS I was accosted by a
software engineer whose PI had instructed that all observation times
be reduced to heliocentric UTC.  Upon discussion it became clear
that the PI had not clearly distinguished between heliocentric and
barycentric.  Furthermore, there was no concept that UTC is only
defined at the surface of the earth and that the only suitable time
scales are TCB and TDB.  (TDB would be the natural result because it
ticks along with UTC and because that's what the JPL ephemerides use.)
The need for pedagogy never ends.

--
Steve Allen                 [EMAIL PROTECTED]              WGS-84 (GPS)
UCO/Lick Observatory        Natural Sciences II, Room 165  Lat  +36.99858
University of California    Voice: +1 831 459 3046         Lng -122.06014
Santa Cruz, CA 95064        http://www.ucolick.org/~sla/   Hgt +250 m


Re: Introduction of long term scheduling

2007-01-01 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Steve Allen writes:

McCarthy pretty much answered this question in 2001 as I reiterate here
http://www.ucolick.org/~sla/leapsecs/McCarthy.html

What exactly is the Y axis on this graph?

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-01 Thread Steve Allen
On Mon 2007-01-01T19:29:19 +0000, Poul-Henning Kamp hath writ:
 McCarthy pretty much answered this question in 2001 as I reiterate here
 http://www.ucolick.org/~sla/leapsecs/McCarthy.html

 What exactly is the Y axis on this graph ?

Only McCarthy can say for sure.
Maybe someone else who was at the GSIC meeting could give a better idea.

My impression is that McCarthy generated a pseudorandom sequence of
LOD values based on the known power spectrum of the LOD fluctuations
and then applied the current UT1 prediction filters to that to see
how wrong UT1-UTC was likely to get.  I suspect it was a rather
back-of-the-envelope kind of calculation that was not repeated
because the notions of scheduling that it posited were shot down.
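
To make the shape of such a calculation concrete, here is a toy version
in Python - emphatically not McCarthy's method, and every parameter
below is invented for illustration.  Excess length of day random-walks
around a trend, the leap seconds for each block of years are fixed at
the start of the block from a constant-LOD prediction, and we watch how
far UT1-UTC strays:

    import random

    def simulate(years=10, block_days=730, mean_excess_lod_ms=1.5,
                 walk_step_ms=0.05, seed=1):
        """Worst |UT1 - UTC| when the leap seconds for each block of days
        are fixed in advance using only the excess LOD known at block start."""
        random.seed(seed)
        dut1 = 0.0                        # UT1 - UTC, seconds
        excess_lod = mean_excess_lod_ms   # ms by which each day exceeds 86400 SI s
        schedule = set()
        worst = 0.0
        total_days = years * 365
        for day in range(total_days):
            if day % block_days == 0:
                # naive prediction: assume today's excess LOD holds for the
                # whole block; plan a step whenever about 1 s will have accrued
                accrued = -dut1
                for d in range(day, min(day + block_days, total_days)):
                    accrued += excess_lod / 1000.0
                    if accrued >= 0.5:
                        schedule.add(d)
                        accrued -= 1.0
            excess_lod += random.gauss(0.0, walk_step_ms)
            dut1 -= excess_lod / 1000.0
            if day in schedule:
                dut1 += 1.0               # a pre-announced positive leap second
            worst = max(worst, abs(dut1))
        return worst

    for block in (2 * 365, 10 * 365):
        print("%4d-day blocks: worst |UT1-UTC| = %.2f s"
              % (block, simulate(block_days=block)))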

As a routine matter of operation the IERS would undoubtedly want
to put some effort into verifying that new software for making such
predictions was well reviewed and tested.

Oh, and the lawyer in me just asserted a loophole in my previous post.

One could say that it was never possible for the BIH/IERS to guarantee
that its leap second scheduling could meet the 0.7 s and then later
0.9 s specification because they could not be held responsible for
things that the earth might do.  As such the IERS could conceivably
start unilaterally issuing full decade scheduling of leap seconds and
claim that it *was* acting in strict conformance with ITU-R TF.460.

In civil matters this is the sort of action which would later be
tested in court if it were found to have adverse effects.  In the
matter of earth rotation it seems unlikely that there could be any
penalties, and if there were a general consensus that this be the
right thing to do then the IERS could probably act with impunity in
advance of official approval from all agencies.

--
Steve Allen                 [EMAIL PROTECTED]              WGS-84 (GPS)
UCO/Lick Observatory        Natural Sciences II, Room 165  Lat  +36.99858
University of California    Voice: +1 831 459 3046         Lng -122.06014
Santa Cruz, CA 95064        http://www.ucolick.org/~sla/   Hgt +250 m


Re: Introduction of long term scheduling

2007-01-01 Thread Magnus Danielson
From: Poul-Henning Kamp [EMAIL PROTECTED]
Subject: Re: [LEAPSECS] Introduction of long term scheduling
Date: Mon, 1 Jan 2007 19:29:19 +0000
Message-ID: [EMAIL PROTECTED]

Poul-Henning,

 In message [EMAIL PROTECTED], Steve Allen writes:

 McCarthy pretty much answered this question in 2001 as I reiterate here
 http://www.ucolick.org/~sla/leapsecs/McCarthy.html

 What exactly is the Y axis on this graph ?

Unless you have a subtle point, I interpret it to be in seconds, even if
they are incorrectly indicated ("s" or "seconds" instead of "sec" would
have been correct).

If you have a subtle point, I'd love to hear it.

Cheers,
Magnus


Re: Introduction of long term scheduling

2007-01-01 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Steve Allen writes:

One could say that it was never possible for the BIH/IERS to guarantee
that its leap second scheduling could meet the 0.7 s and then later
0.9 s specification because they could not be held responsible for
things that the earth might do.  As such the IERS could conceivably
start unilaterally issuing full decade scheduling of leap seconds and
claim that it *was* acting in strict conformance with ITU-R TF.460.

Considering that the ITU has no power over the IERS, the IERS is only
bound by the letter of TF.460 as far as they have voluntarily promised
to be; consequently, they could just send a letter to the ITU and say
"we'll do it this way from MMDD; if you disagree, then figure
something else out."

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-01 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Magnus Danielson writes:
From: Poul-Henning Kamp [EMAIL PROTECTED]
Subject: Re: [LEAPSECS] Introduction of long term scheduling
Date: Mon, 1 Jan 2007 19:29:19 +0000
Message-ID: [EMAIL PROTECTED]

Poul-Henning,

 In message [EMAIL PROTECTED], Steve Allen writes:

 McCarthy pretty much answered this question in 2001 as I reiterate here
 http://www.ucolick.org/~sla/leapsecs/McCarthy.html

 What exactly is the Y axis on this graph ?

Unless you have a subtle point, I interpret it to be in seconds, even if
they are incorrectly indicated ("s" or "seconds" instead of "sec" would
have been correct).

If you have a subtle point, I'd love to hear it.

Not even close to a subtle point; I simply cannot figure out what the
graph shows...

The sawtooth corresponding to the prediction interval raises a big red
flag for me as to the graph's applicability to reality.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-01 Thread Ed Davies

Poul-Henning Kamp wrote:

If you have subtle point, I'd love to hear it.


Not even close to a subtle point, I simply cannot figure out what the
graph shows...


Me too.  Is this an analysis or a simulation?  What are the
assumptions?  What predicted intervals does he mean?

The bullet points above are very confusing as well.

What does "large discontinuities possible" mean?  Ignoring
any quibble about the use of the word "discontinuities",
does he mean more than one leap second at a particular event?
Why would anybody want to do that - at least before we get
to daily leap seconds, which is well off to the right
of his graph (50 000 years or so, I think)?

Why does the "One sec at predicted intervals" line suddenly
diverge in the early 2500s when the other lines seem to just
be expanding in a sensible way?

Ed.


Re: Introduction of long term scheduling

2007-01-01 Thread Ed Davies

Steve Allen wrote:

On Mon 2007-01-01T17:42:11 +0000, Ed Davies hath writ:

Sorry, maybe I'm being thick, but why?  Surely the IERS could announce
all the leap seconds for 2007 through 2016 inclusive this week, then
those for 2017 just before the end of this year, and so on.  We'd have
immediate 10-year scheduling.


For reasons never explained publicly this notion was shot down very
early in the process of the WP7A SRG.  It would almost certainly
exceed the current 0.9 s limit, and in so doing it would violate the
letter of ITU-R TF.460.


Yes, I was assuming exceeding the 0.9 s limit, as I'm sure the rest
of my message made clear.  We are discussing this as an alternative
to, for all intents and purposes, scrapping leaps altogether and
blowing the limit for all time, so I don't see this as a problem.

Ed.


Re: Introduction of long term scheduling

2007-01-01 Thread Steve Allen
On Mon 2007-01-01T21:19:04 +0000, Ed Davies hath writ:
 Why does the One sec at predicted intervals line suddenly
 diverge in the early 2500's when the other lines seem to just
 be expanding in a sensible way?

Upon looking closer I see a 200-year periodicity in the plot.
I begin to suspect that rather than running a pseudorandom sequence of
LOD based on the power spectrum, he instead took the past 2 centuries
of LOD variation around the linear trend and just kept repeating those
variations added to an ongoing linear trend.

I suspect that the divergence of that one line indicates that the LOD
has become long enough that 1 s can no longer keep up with the
divergence using whatever predicted interval he chose.  I suspect that
the chosen interval was every three months, for it is in about the
year 2500 that the LOD will require 4 leap seconds per year.
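
That figure is easy to sanity-check against the usual textbook numbers -
roughly +1.7 ms of day length per century, referenced to about 1820 when
the mean solar day was close to 86400 SI seconds (these are my round
values, not anything read off McCarthy's plot):

    LOD_TREND_MS_PER_CENTURY = 1.7
    REFERENCE_YEAR = 1820

    def leap_seconds_per_year(year):
        excess_ms_per_day = LOD_TREND_MS_PER_CENTURY * (year - REFERENCE_YEAR) / 100.0
        return excess_ms_per_day / 1000.0 * 365.25

    print("%.1f" % leap_seconds_per_year(2500))   # ~4.2 per year
    print("%.1f" % leap_seconds_per_year(2007))   # ~1.2; decadal fluctuations
                                                  # make the actual current rate lower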

As for the other questions, McCarthy had been producing versions of this
plot since around 1999, but the published record of them is largely
in PowerPoint.  Dr. Tufte has provided postmortems of both  Challenger
and Columbia as testaments to how little that medium conveys.

--
Steve Allen                 [EMAIL PROTECTED]              WGS-84 (GPS)
UCO/Lick Observatory        Natural Sciences II, Room 165  Lat  +36.99858
University of California    Voice: +1 831 459 3046         Lng -122.06014
Santa Cruz, CA 95064        http://www.ucolick.org/~sla/   Hgt +250 m


Re: A lurker surfaces

2007-01-01 Thread Michael Sokolov
Ashley Yakeley [EMAIL PROTECTED] wrote:

 I'd like to see an elastic civil second to which SI nanoseconds are
 added or removed.

Ditto!  I have always been in favor of rubber seconds, and specifically
a rubber civil second.  I believe that the *CIVIL* second should have its
own definition completely and totally independent of the SI second.

Civil time independent of physical time would solve all problems.  The
scale of civil time should be defined as a continuous real-number scale
of *angle*, not physical time.  It would solve the problem of needing to
measure time intervals while at the same time synchronising with the
civil calendar.  A civil time interval would be defined as the clock on
the Kremlin tower turning by a given angle.  Define one second of civil
time as the hour hand turning by 30 seconds of arc.
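
(For what it's worth, the 30-arcsecond figure does check out for a
12-hour hour hand:)

    # 360 degrees per 12 hours, converted to arcseconds per second.
    print(360.0 * 3600.0 / (12 * 3600.0))   # 30.0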

The people who complain about leap seconds screwing up their interval
time computations are usually told to use TAI.  They retort that they
need interval time *between civil timestamps*.  To me that seems like
what they are really measuring as interval time is not physical
interval time, but how much time has elapsed *in civil society*.  Hence
my idea of civil interval time that's completely decoupled from physical
time and is instead defined as the turning angle of the clock on the
Kremlin tower.

Flame deflector up

MS


Re: A lurker surfaces

2007-01-01 Thread John Cowan
Michael Sokolov scripsit:

 The people who complain about leap seconds screwing up their interval
 time computations are usually told to use TAI.  They retort that they
 need interval time *between civil timestamps*.  To me that seems like
 what they are really measuring as interval time is not physical
 interval time, but how much time has elapsed *in civil society*.

I think this point is quite sound, but I don't quite see what
its implications are (or why it makes rubber seconds better than
other kinds of adjustments).

--
John Cowan   http://ccil.org/~cowan   [EMAIL PROTECTED]
We want more school houses and less jails; more books and less arsenals;
more learning and less vice; more constant work and less crime; more
leisure and less greed; more justice and less revenge; in fact, more of
the opportunities to cultivate our better natures.  --Samuel Gompers


Re: A lurker surfaces

2007-01-01 Thread Ashley Yakeley

On Jan 1, 2007, at 17:03, John Cowan wrote:


Michael Sokolov scripsit:


The people who complain about leap seconds screwing up their interval
time computations are usually told to use TAI.  They retort that they
need interval time *between civil timestamps*.  To me that seems like
what they are really measuring as interval time is not physical
interval time, but how much time has elapsed *in civil society*.


I think this point is quite sound, but I don't quite see what
its implications are (or why it makes rubber seconds better than
other kinds of adjustments).


One implication is that a leap second insertion is a second of real
time, but zero seconds of intuitive civil time.

Rubber seconds are appropriate because we have rubber days. People
who need absolute time have their own timescale based on some
absolute unit (the SI second), but to everyone else, the second is
a fraction of the day.

--
Ashley Yakeley


Re: A lurker surfaces

2007-01-01 Thread M. Warner Losh
In message: [EMAIL PROTECTED]
[EMAIL PROTECTED] (Michael Sokolov) writes:
: The people who complain about leap seconds screwing up their interval
: time computations are usually told to use TAI.  They retort that they
: need interval time *between civil timestamps*.

Actually, interval deltas are needed, but they are in TAI time.  There
needs to be, in the systems I've done, a timescale without leap seconds
to keep all the measurements sane.  In addition, many of the systems
I've done also have other functions, some of which need to present UTC
to the user.  As much of a PITA as leap seconds are today, having the
conversion between TAI and UTC (or, put another way, between GPS and
UTC or between LORAN and UTC) become a sliding one would make these
systems significantly more difficult to get correct.
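
For readers who haven't built such a system, the conversion being
described is essentially table-driven; a minimal sketch (table truncated
to the two most recent entries, names mine):

    # (UTC date from which the offset applies, TAI - UTC in seconds)
    LEAP_TABLE = [
        ("1999-01-01", 32),
        ("2006-01-01", 33),
    ]

    def tai_minus_utc(utc_date):
        """TAI - UTC, in seconds, on a given UTC calendar date (YYYY-MM-DD)."""
        offset = 31                  # value in force just before the first entry
        for since, value in LEAP_TABLE:
            if utc_date >= since:    # ISO dates compare correctly as strings
                offset = value
        return offset

    print(tai_minus_utc("2005-12-31"))   # 32
    print(tai_minus_utc("2006-01-01"))   # 33

The pain comes from keeping that table current on every deployed box; a
rubber-second scheme would replace the integer offsets with a
continuously sliding one.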

There are also times when our systems take in events that are
timestamped using UTC, so we need to back-correlate them to TAI
or whatever internal timescale we're using.  Some of that is because
UTC is the standard for exchanging time; part of that is because the
events in question are measured in whatever timescale is present on an
NTP server.

: To me that seems like
: what they are really measuring as interval time is not physical
: interval time, but how much time has elapsed *in civil society*.  Hence
: my idea of civil interval time that's completely decoupled from physical
: time and is instead defined as the turning angle of the clock on the
: Kremlin tower.

Actually, you are wrong.  The intervals are in the number of PPS pulses
that have happened, or fractions thereof.  Civil time does intrude
because that's what people use right now to know the time of day.  In
the systems I've done, you need to know both.

The requirements for my needs, as you may have noticed, aren't that
the intervals be done in TAI (we use a variant of LORAN time due to
historical accident, but with a 1970 epoch), but rather that UTC and
these time scales be convertible between one another.

Like I've said a number of times, saying 'just use TAI' isn't viable
because of the conversion issue.  Using TAI (or something with the
leap-second-free regularity that TAI gives) is necessary for the
algorithms to work.  However, external pressures also require that some
things be done in UTC, and some of the external sources of data use UTC
timestamps that need to be back-correlated to the internal timescale
that we're using.

The algorithms, btw, basically integrate the frequencies of different
clocks over time to predict phase difference.  In this case, you
definitely want to use an interval, and not whatever weirdnesses civil
time happened to do in that interval.  Using a civil time interval
would introduce errors into the algorithms.  These algorithms are used
to estimate how well the clocks are doing, but other parts of the
system need to play out UTC for things like NTPD and IRIG.
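
A sketch of the kind of integration being described, done on a leap-free
internal timescale (sample values and names are illustrative only, not
code from any real system):

    def predicted_phase_offset(freq_offsets, interval_s):
        """Integrate fractional frequency offset samples, each valid for
        interval_s seconds of the leap-free internal timescale, giving the
        predicted accumulated phase offset in seconds."""
        return sum(f * interval_s for f in freq_offsets)

    # e.g. hourly estimates of a clock running about 1e-11 fast:
    samples = [1.0e-11, 1.1e-11, 0.9e-11, 1.0e-11]
    print("%.2e s" % predicted_phase_offset(samples, 3600.0))   # 1.44e-07 s

    # If the interval were taken from civil timestamps spanning a leap
    # second, one "hour" would silently be 3601 s long and the integration
    # would pick up a small but systematic error - hence the leap-free scale.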

Warner


Re: A lurker surfaces

2007-01-01 Thread Magnus Danielson
From: Michael Sokolov [EMAIL PROTECTED]
Subject: Re: [LEAPSECS] A lurker surfaces
Date: Mon, 1 Jan 2007 22:22:23 GMT
Message-ID: [EMAIL PROTECTED]

 Ashley Yakeley [EMAIL PROTECTED] wrote:

  I'd like to see an elastic civil second to which SI nanoseconds are
  added or removed.

 Ditto!  I have always been in favor of rubber seconds, and specifically
 civil second.  I believe that the *CIVIL* second should have its own
 definition completely and totally independent of the SI second.

 Civil time independent of physical time would solve all problems.  The
 scale of civil time should be defined as a continuous real number scale
 of *angle*, not physical time.  It would solve the problem of needing to
 measure time intervals while at the same time synchronising with the
 civil calendar.  Civil time interval is defined as the clock on the
 Kremlin tower turning by a given angle.  Define one second of civil time
 as the hour hand turning by 30 seconds of arc.

No, it does not solve all problems.  It introduces a whole bunch of new
problems for which I don't see any chance of swift implementation in many
systems, and in fact there would be quite large investments involved which I
don't see with the other proposals.

Why?

Today we have a joint infrastructure for time distribution, and in effect much
of today's *civil* time hangs off GPS and, to some degree, transmissions such
as MSF, DCF77 etc.  The latter sources can be adjusted to rubber seconds, but
doing this to the GNSS world, such as GPS, would be quite problematic.  The
NTP side of the world depends on various sources of UTC such as GPS, MSF,
DCF77 etc.  For that not to break we would need a real-time or near-real-time
solution to compensate for it (using information coming from that source, for
security reasons).

The longwave radio sources can easily be adjusted at the source, but some of
their uses still lie in frequency comparison, so due care would be needed for
those still using those sources for that purpose.

For GPS it would most probably require additional messages for a predicted
rubberization of the transmitted GPS time.  This would also require updating
all GPS receivers used for legal timing so that they are able to include the
correction.

Another way to go about it would be to adjust all sources (including GPS) to
actually transmit rubber seconds.  This would mean deeper modifications of the
GNSS satellites.  I don't see that happening.

The last time we ran rubber seconds we could easily equip the time
transmitters with the necessary hardware and run the adjustments.  The problem
with rubber seconds back then was that the detailed operations deviated from
site to site, so two transmitters would not necessarily agree.  These days we
have a much more rigid system.  We cannot choose arbitrarily.

UTC is the civil time and in some countries the legal time.  We need it to
tick in some reasonable relation to things like the rising, noon and setting
of the sun.  Rubber days by means of leap seconds or leap minutes would work.
There are issues with these, mostly relating to the scheduling time,
obviously.

Not to say that there aren't problems with leap seconds.  However, for the
civil-use aspect, the DUT1 < 0.9 s requirement is not necessary.  There is,
however, a requirement to keep DUT1 below some limit, so just giving up on
leap seconds would be unacceptable in the long run.  To avoid such things we
have already corrected the calendar through the Julian and Gregorian
corrections.

We do have TAI for time intervals etc.  Many systems use TAI (or some offset
of TAI) and then apply leap second corrections for a UTC reference.  The leap
seconds can cause problems when jumping between TAI and UTC, and which
particular problem one gets depends on what the goal is (i.e. whether a
particular UTC time or a particular time difference is needed).  A scheduled
leap second may arrive in the decision process later than the initial decision
was taken in a system.  The problem can be solved, and while it complicates
matters, things do not necessarily become impossible.

There are problems relating to leap seconds.  Interestingly enough, the
previous scheduling rules were limited by mail transfer to all affected
parties.  With the new digital communication allowing us to transmit messages
to any part of the globe within a few seconds, we seem to have a need for a
longer scheduling time.  But sure, a longer scheduling time could work.

I think there is no real solution outside of the leap second (or possibly
leap minute) world.  Rubber seconds have too many issues to be usable.
Abandoning leap second issuing doesn't fulfil the civil usage in the long
term.  We can work with the ways we predict and schedule leap seconds.  The
noise process, however, prohibits us from setting up strict rules such as the
Gregorian date rules for leap days - and those will actually have to be
corrected too, eventually.  So, for civil usage, I have yet to hear a better
proposal than leap seconds.  There is room for 

Re: A lurker surfaces

2007-01-01 Thread John Cowan
Ashley Yakeley scripsit:

 Rubber seconds are appropriate because we have rubber days. People
 who need absolute time have their own timescale based on some
 absolute unit (the SI second), but to everyone else, the second is
 a fraction of the day.

Well, okay.  Does the rubberiness go down all the way?  Is a civil
nanosecond one-billionth of a civil second, then?  If so, how do we
build clocks that measure these intervals?


--
One art / There is  John Cowan [EMAIL PROTECTED]
No less / No more   http://www.ccil.org/~cowan
All things / To do
With sparks / Galore -- Douglas Hofstadter


Re: A lurker surfaces

2007-01-01 Thread Steve Allen
On Tue 2007-01-02T01:48:26 -0500, John Cowan hath writ:
 Well, okay.  Does the rubberiness go down all the way?  Is a civil
 nanosecond one-billionth of a civil second, then?  If so, how do we
 build clocks that measure these intervals?

Let's not.

Let's continue the valid and agreeable notion of transmitting seconds
and frequencies based on a coordinate time scale tied to the ITRS at a
specified depth in the gravitational+rotational+tidal potential.  The
best practical implementation of such is undeniably the estimation
given by TAI.

Then let's improve the infrastructure for communicating the best
estimation of earth orientation parameters.  Then in a world of
ubiquitous computing anyone who wants to estimate the current
rubber-second-time is free to evaluate the splines or polynomials
(or whatever is used) and come up with output devices to display that.
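
The client side of that could be very small.  A sketch with invented
coefficients (a real service would fix the epoch, units and validity
interval; nothing here is from an actual broadcast format):

    def ut1_minus_tai(t_days_since_epoch, coeffs):
        """Evaluate broadcast polynomial coefficients (seconds, constant
        term first) at t days past the reference epoch, via Horner's rule."""
        result = 0.0
        for c in reversed(coeffs):
            result = result * t_days_since_epoch + c
        return result

    # hypothetical broadcast: constant, linear and quadratic terms
    coeffs = (-33.4, -0.0006, -1.0e-8)   # s, s/day, s/day^2

    def rubber_display_time(tai_seconds_since_epoch):
        """Earth-rotation ("rubber") time for an output device, in seconds."""
        t_days = tai_seconds_since_epoch / 86400.0
        return tai_seconds_since_epoch + ut1_minus_tai(t_days, coeffs)

    print("%.6f" % ut1_minus_tai(10.0, coeffs))   # UT1 - TAI ten days in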

And let's create an interface better than POSIX time_t which allows
those applications which need precise time to do a good job at it.

--
Steve Allen                 [EMAIL PROTECTED]              WGS-84 (GPS)
UCO/Lick Observatory        Natural Sciences II, Room 165  Lat  +36.99858
University of California    Voice: +1 831 459 3046         Lng -122.06014
Santa Cruz, CA 95064        http://www.ucolick.org/~sla/   Hgt +250 m