Re: Introduction of long term scheduling

2007-01-15 Thread Tony Finch
On Mon, 15 Jan 2007, Peter Bunclark wrote:

  http://www.eecis.udel.edu/~mills/ipin.html

 That page does not seem to mention UTC...

Look at the slides.

Tony.
--
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
BISCAY FITZROY: VARIABLE 4, BECOMING SOUTHWESTERLY 5 TO 7 IN NORTHWEST
FITZROY. MODERATE OR ROUGH, OCCASIONALLY VERY ROUGH. SHOWERS. GOOD.


Re: Introduction of long term scheduling

2007-01-12 Thread Tony Finch
On Mon, 8 Jan 2007, Steve Allen wrote:

 Don't forget that UTC and TAI are coordinate times which are difficult
 to define off the surface of the earth.  For chronometers outside of
 geostationary orbit the nonlinear deviations between the rate of a local
 oscillator and an earthbound clock climb into the realm of
 perceptibility. There seems little point in claiming to use a uniform
 time scale for a reference frame whose rate of proper time is notably
 variable from your own.

According to the slides linked from Dave Mills's Timekeeping in the
Interplanetary Internet page, they are planning to sync Mars time to UTC.
http://www.eecis.udel.edu/~mills/ipin.html

Tony.
--
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
LUNDY FASTNET IRISH SEA: SOUTHWEST 6 TO GALE 8. ROUGH OR VERY ROUGH. RAIN OR
DRIZZLE. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-12 Thread Steve Allen
On Fri 2007-01-12T18:35:55 +, Tony Finch hath writ:
 According to the slides linked from Dave Mills's Timekeeping in the
 Interplanetary Internet page, they are planning to sync Mars time to UTC.
 http://www.eecis.udel.edu/~mills/ipin.html

Never minding the variations on Mars, with its rather more eccentric
orbit, the deviations from uniformity of the rate of time on earth
alone create an annual variation of almost 2 ms between TT and TDB.
This also ignores variations in time signal propagation through the
solar wind when Mars is near superior conjunction.
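The dominant term of that annual TT/TDB variation can be sketched
numerically; the coefficients below are the standard first-order
approximation (amplitude about 1.657 ms), not figures from this thread:

```python
import math

def tdb_minus_tt(jd_tt):
    """Leading term of TDB - TT in seconds: roughly 1.657 ms times
    the sine of the Earth's mean anomaly g (first-order formula)."""
    d = jd_tt - 2451545.0                      # days since J2000.0
    g = math.radians(357.53 + 0.9856003 * d)   # Earth's mean anomaly
    return 0.001657 * math.sin(g)
```

Sampling one year of this gives a peak excursion just under 2 ms, in
line with the figure quoted above.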

To some applications 2 ms in a year is nothing.  From an engineering
standpoint a variation of 2 ms in a year on Mars is certainly better
than any time scale that could be established there in lieu of landing
a cesium chronometer.  To other applications 2 ms in a year may be
intolerably large.

So the question remains: At what level do distributed systems need
access to a time scale which is uniform in their reference frame?
And my question: Can something as naive as POSIX time_t really serve
all such applications, even the ones on earth, for the next 600 years?

--
Steve Allen [EMAIL PROTECTED]WGS-84 (GPS)
UCO/Lick ObservatoryNatural Sciences II, Room 165Lat  +36.99858
University of CaliforniaVoice: +1 831 459 3046   Lng -122.06014
Santa Cruz, CA 95064http://www.ucolick.org/~sla/ Hgt +250 m


Re: Introduction of long term scheduling

2007-01-11 Thread Clive D.W. Feather
Rob Seaman said:
 Feather's encoding is a type of compression.  GZIP won't buy you
 anything extra.

Actually, it might with longer tables. For example, LZW (as used by Unix
compress) can be further compressed using a Huffman-based compressor.

 I'll join the rising chorus that thinks it need
 not appear in every packet.

Phew.

 I'd also modify Feather encoding to delta backwards from the
 expiration time stamp.

Interesting idea.

--
Clive D.W. Feather  | Work:  [EMAIL PROTECTED]   | Tel:+44 20 8495 6138
Internet Expert | Home:  [EMAIL PROTECTED]  | Fax:+44 870 051 9937
Demon Internet  | WWW: http://www.davros.org | Mobile: +44 7973 377646
THUS plc||


Re: Introduction of long term scheduling

2007-01-09 Thread Zefram
Steve Allen wrote:
But it is probably safer to come up
with a name for the timescale my system clock keeps that I wish were
TAI but I know it really is not.

True.  I can record timestamps in TAI(bowl.fysh.org), and by logging
all its NTP activity I could retrospectively do a more precise
TAI(bowl.fysh.org)-TAI conversion than was possible in real time.
To be rigorous we need to reify an awful lot more timescales than we
do currently.

Another aspect of rigour that I'd like to see is uncertainty bounds
on timestamps.  With NTP, as things stand now, the system clock does
carry an error bound, which can be extracted using ntp_adjtime().
(Btw, another nastiness of the ntp_*() interface is that ntp_adjtime()
doesn't return the current clock reading on all systems.  On affected
OSes it is impossible to atomically acquire a clock reading together
with error bounds.)  If I want a one-off TAI reading in real time, I can
take the TAI(bowl.fysh.org) reading along with the error bound, and then
instead of claiming an exact TAI instant I merely claim that the true
TAI time is within the identified range.  In that sense it *is* possible
to get true TAI in real time, just not with the highest precision.
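That interval view of a one-off reading can be sketched like this;
`BoundedReading` and its field names are illustrative, loosely modelled
on the `maxerror` member of the `struct timex` filled in by
ntp_adjtime():

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BoundedReading:
    seconds: float    # reading of, e.g., TAI(bowl.fysh.org)
    maxerror: float   # error bound on that reading, in seconds

    def bounds(self):
        # The honest claim: true TAI lies somewhere in this interval.
        return (self.seconds - self.maxerror, self.seconds + self.maxerror)

# A reading claimed to be within 5 ms of true TAI:
r = BoundedReading(seconds=1168300800.0, maxerror=0.005)
lo, hi = r.bounds()
```

The point is that the pair (reading, bound) is the real deliverable,
not the bare instant.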

If I have a series of timestamps from the same machine then for comparing
them I don't want individual error bounds on them.  The ranges would
overlap and I'd be unable to sort them properly.  This is another reason
to reify TAI(bowl.fysh.org): the errors in the TAI readings are highly
correlated, and to know that I can sort the timestamps naively I need
to know that correlation, namely that they came from the same clock.
Even in retrospect, when I can do more precise conversions to true TAI, I
need to maintain the correlation, because the intervals between timestamps
may still be smaller than the uncertainty with which I convert to TAI.

(or at least it is if you are one of Tom Van Baak's kids.  See
http://www.leapsecond.com/great2005/ )

Cool.  I'd have loved such toys when I was that age.  My equivalent was
that I got to experiment with a HeNe laser, as my father is a physicist.
Now I carry a diode laser in my pocket.  When TVB's children grow up,
they'll probably carry atomic watches.

There seems little point in claiming to use a uniform time scale for a
reference frame whose rate of proper time is notably variable from
your own.

Hmm.  Seems to me there's use in it if you do a lot of work relating to
that reference frame or if you exchange timestamps with other parties
who use that reference frame.  Just need to keep it in its conceptual
place: don't assume that it's a suitable timescale for measuring local
interval time.  Another reason to reify a local timescale.

   what happens when the operations of distributed systems demand
an even tighter level of sync than NTP can provide?

Putting on my futurist hat, I predict the migration of time
synchronisation into the network hardware.  Routers at each end of a
fibre-optic cable could do pretty damn tight synchronisation at the
data-link layer, aided by the strong knowledge that the link is the
same length in both directions.  Do this hop by hop to achieve networked
Einstein synchronisation.  (And here come another few thousand timescales
for us to process.)

What if general purpose systems do not have a means of acknowledging
and dealing with the fact that their system chronometer has deviated
from the agreeable external time,

This has long been the case.  Pre-NTP Unix APIs have no way to admit
that the clock reading is bogus, and systems like Windows still have no
concept of clock accuracy.  What happens is that we get duff timestamps,
and some applications go wrong.  The number of visible faults that result
from this is surprisingly small, so far.

-zefram


Re: Introduction of long term scheduling

2007-01-09 Thread matsakis . demetrios
As many have pointed out on this forum, these various timescales do have
very specific meanings which often fade at levels coarser than a few
nanoseconds (modulo 1 second), and which at times are misapplied at the
1-second and higher level.

GPS Time is technically an implicit ensemble mean.  You can say it exists
inside the Kalman Filter at the GPS Master Control Station as a linear
combination of corrected clock states.  But there is no need for the control
computer to actually compute it as a specific number, and that's why it is
implicit.  Every GPS clock is a realization of GPS Time once the receiver
applies the broadcast corrections.   GPS Time is steered to UTC(USNO), and
generally stays within a few nanoseconds of it, modulo 1 second.  UTC(USNO)
approximates UTC, and so it goes.

The most beautiful reference to GPS Time is "The Theory of the GPS
Composite Clock" by Brown, in the Proceedings of the Institute of
Navigation's 1991 ION-GPS meeting.  But others, including me, routinely
publish plots of it.

--Original Message-
From: Leap Seconds Issues [mailto:[EMAIL PROTECTED] On Behalf Of
Ashley Yakeley
Sent: Tuesday, January 09, 2007 2:22 AM
To: LEAPSECS@ROM.USNO.NAVY.MIL
Subject: Re: [LEAPSECS] Introduction of long term scheduling

On Jan 8, 2007, at 22:57, Steve Allen wrote:

 GPS is not (TAI - 19)

What is GPS time, anyway? I had assumed someone had simply defined GPS to be
TAI - 19, and made the goal of the satellites to approximate GPS time, i.e.
that GPS and TAI are the same (up to isomorphism in some category of
measurements). But apparently not?
Are the satellite clocks allowed to drift, or do they get corrected?

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-08 Thread Clive D.W. Feather
Rob Seaman said:
 Which raises the question of how concisely one can express a leap
 second table.

Firstly, I agree with Steve when he asks "why bother?".  You're solving
the wrong problem.

However, having said that:

 So, let's see - assume:
1) all 20th century leap seconds can be statically linked
2) start counting months at 2000-01-31
 We're seeing about 7 leapseconds per decade on average, round up to
 10 to allow for a few decades worth of quadratic acceleration (less
 important for the next couple of centuries than geophysical noise).
 So 100 short integers should suffice for the next century and a
 kilobyte likely for the next 500 years.  Add one short for the
 expiration date, and a zero short word for an end of record stopper
 and distribute it as a variable length record - quite terse for the
 next few decades.  The current table would be six bytes (suggest
 network byte order):

0042 003C 

That's far too verbose a format.

Firstly, once you've seen the value 003C, you know all subsequent values
will be greater. So why not delta encode them (i.e. each entry is the
number of months since the previous leap second)? If you assume that leap
seconds will be no more than 255 months apart, then you only need one byte
per leap second. But you don't even need that assumption: a value of 255
can mean 255 months without a leap second (I'm assuming we're reserving 0
for end-of-list).

But we can do better. At present leap seconds come at 6 month boundaries.
So let's encode using 4 bit codons:

* Start with the unit size being 6 months.
* A codon of 1 to 15 means the next leap second is N units after the
  previous one.
* A codon of 0 is followed by a second codon:
  - 1, 3, 6, or 12 sets the unit size;
  - 0 means the next item is the expiry date, after which the list ends
  (this assumes the expiry is after the last leap second; I wasn't
  clear if you expect that always to be the case);
  - 15 means 15 units without a leap second;
  - other values are reserved for future expansion.

So the present table is A001. Two bytes instead of six.

If we used 1980 as the base instead of 2000, the table would be:

3224 5423 2233 3E00 1x

where the last byte can have any value for the last 4 bits.

I'm sure that some real thought could compress the data even more; based
on leap second history, 3-bit codons would probably be better than 4.
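The codon rules above map directly onto a small encoder; this is a
sketch of my scheme (the function names are mine), and it stops at the
0,0 marker since the encoding of the expiry field itself is left open:

```python
def encode_codons(month_gaps, unit=6):
    """Encode the gaps between successive leap seconds (in months,
    as multiples of `unit`) into the 4-bit codons described above."""
    codons = []
    for gap in month_gaps:
        units = gap // unit          # assumes gap is a multiple of unit
        while units > 15:
            codons += [0, 15]        # 0,15 = 15 units without a leap second
            units -= 15
        codons.append(units)         # 1..15 = leap N units after previous
    codons += [0, 0]                 # 0,0 = expiry date follows, list ends
    return codons

def pack(codons):
    """Pack codons two to a byte, zero-padding the final nibble."""
    if len(codons) % 2:
        codons = codons + [0]
    return bytes(16 * hi + lo for hi, lo in zip(codons[::2], codons[1::2]))

# One leap second 60 months (10 six-month units) after the base date:
table = pack(encode_codons([60]))
```

Packing that single 60-month gap reproduces the leading A0 of the
two-byte table above; the remaining nibbles would carry the expiry
field.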

--
Clive D.W. Feather  | Work:  [EMAIL PROTECTED]   | Tel:+44 20 8495 6138
Internet Expert | Home:  [EMAIL PROTECTED]  | Fax:+44 870 051 9937
Demon Internet  | WWW: http://www.davros.org | Mobile: +44 7973 377646
THUS plc||


Re: Introduction of long term scheduling

2007-01-08 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Zefram writes:
Clive D.W. Feather wrote:
Firstly, I agree with Steve when he asks "why bother?".  You're solving the
wrong problem.

Conciseness is useful for network protocols.

On the other hand, one should not forget that the OSI protocols were
killed by conciseness to the point of obscurity.

And the next thing you know, somebody is going to argue for GZIP
encoding of the list, and then all programs will need to drag in libz
to uncompress their leap-second table.

A major part of the Internet's success was that you could telnet to
practically all servers (FTP, SMTP, NNTP, etc.) and see what went on,
without a protocol analyzer with a price-tag of $CALL.

the limiting factor: CPU speed and bulk storage sizes have been
increasing faster.  An NTPv3 packet is only 48 octets of UDP payload;
if a leap second table is to be disseminated in the same packets then
we really do want to think about the format nybble by nybble.

No we don't.

We certainly don't want to transmit the leap-second table with every
single NTP packet, because, as a result, we would need to examine
it every time to see if something changed.

Furthermore, you will not get around needing a strong signature on the
leap-second table, because if anyone can inject a leap-second table
on the internet, there is no end to how much fun they could have.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-08 Thread Zefram
Poul-Henning Kamp wrote:
We certainly don't want to transmit the leap-second table with every
single NTP packet, because, as a result, we would need to examine
it every time to see if something changed.

Once we've got an up-to-date table, barring faults, we only need to check
to see whether the table has been extended further into the future.
If we put the expiry date first in the packet then that'll usually be
just a couple of machine instructions to know that there's no new data.

If an erroneous table is distributed, we want to pick up corrections
eventually, but we don't have to check every packet for that.  Not that
it would be awfully expensive to do so, anyway.
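That fast path can be sketched as follows; the packet layout (expiry
date as the first field, here a 4-byte big-endian value) is
hypothetical, chosen just to show how cheap the usual-case check is:

```python
import struct

def table_needs_parsing(packet, cached_expiry):
    """Cheap check: parse the rest of the packet only if the table it
    carries expires later than the copy we already hold.

    Hypothetical layout: first 4 bytes = expiry, big-endian."""
    (expiry,) = struct.unpack_from(">I", packet, 0)
    return expiry > cached_expiry
```

In the common case this is a single unpack and compare, which is the
"couple of machine instructions" mentioned above.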

Furthermore, you will not get around needing a strong signature on the
leap-second table, because if anyone can inject a leap-second table
on the internet, there is no end to how much fun they could have.

This issue applies generally with time synchronisation, does it not?
NTP has authentication mechanisms.

-zefram


Re: Introduction of long term scheduling

2007-01-08 Thread Tony Finch
On Mon, 8 Jan 2007, Zefram wrote:

 Possibly TT could also be used in some form, for interval calculations
 in the pre-caesium age.

In that case you'd need a model (probably involving rubber seconds) of the
TT-UT translation. It doesn't seem worth doing to me because of the
small number of applications that care about that level of precision that
far in the past.

The main requirement for a proleptic timescale is that it is useful for
most practical purposes. Therefore it should not be excessively
complicated, such as requiring a substantially different implementation of
time in the past to time in the present. What we actually did in the past
was make a smooth(ish) transition from universal time to atomic time, so
it would seem reasonable to implement (a simplified version of) that in
our systems. In practice this means saying that we couldn't tell the
difference between universal time and uniform time before a certain date,
which we model as a leap second offset of zero.

Tony.
--
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
BAILEY: SOUTHWEST 5 TO 7 BECOMING VARIABLE 4. ROUGH OR VERY ROUGH. SHOWERS,
RAIN LATER. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-08 Thread M. Warner Losh
In message: [EMAIL PROTECTED]
Tony Finch [EMAIL PROTECTED] writes:
: On Sat, 6 Jan 2007, M. Warner Losh wrote:
: 
:  Unfortunately, the kernel has to have a notion of time stepping around
:  a leap-second if it implements ntp.
:
: Surely ntpd could be altered to isolate the kernel from ntp's broken
: timescale (assuming the kernel has an atomic seconds count timescale)

ntpd is the one that mandates it.

One could use an atomic scale in the kernel, but nobody that I'm aware
of does.

Warner


Re: Introduction of long term scheduling

2007-01-07 Thread Rob Seaman

Warner Losh wrote:


Actually, not every IP packet has a 1's complement checksum.  Sure,
there is a trivial one that covers the 20 bytes of header, but that's
it.  Most systems these days offload checksumming to the hardware
anyway to increase the throughput.  Maybe you are thinking of TCP or
UDP :-).  Often, the packets are copied and therefore in the cache, so
the addition operations are very cheap.


Ok.  I simplified.  There are several layers of checksums.  I
designed an ASCII-encoded checksum for the astronomical FITS format
and should not have been so sloppy.  "They do it in hardware" could
be taken as an argument for how time should be handled, as well.


Adding or subtracting two of them is relatively easy.


Duly stipulated, your honor.


Converting to a broken down format or doing math
with the complicated forms is much more code intensive.


And should the kernel be expected to handle complicated forms of
any data structure?


Dealing with broken down forms, and all the special cases, usually
involves multiplication and division, which tend to be more
computationally expensive than the checksum.


Indeed.  May well be.  I would suggest that the natural scope of this
discussion is the intrinsic requirements placed on the kernel, just
as it should be the intrinsic requirements of the properly traceable
distribution and appropriate usage of time-of-day and interval
times.  Current kernels (and other compute layers, services and
facilities) don't appear to implement a coherent model of
timekeeping.  Deprecating leap seconds is not a strategy for making
the model more coherent; rather, it is just the timekeeping
equivalent of Lysenkoism.


Having actually participated in the benchmarks that showed the effects
of inefficient timekeeping, I can say that they have a measurable
effect.  I'll try to find references that the benchmarks generated.


With zero irony intended, that would be thoroughly refreshing.


If by "some limp attempt" you mean "returns the correct time" then you
are correct.


It's not the correct time under the current standard if the
timekeeping model doesn't implement leap seconds correctly.  I don't
think this is an impossible expectation; see
http://www.eecis.udel.edu/~mills/exec.html, starting with the Levine
and Mills PTTI paper.


You'd think that, but you have to test to see if something was
pending.  And the code actually does that.


Does such testing involve the complex arithmetic you describe above?
(Not a rhetorical question.)  The kernel does a heck of a lot of
conditional comparisons every second.


Did I say anything about eviscerating mean solar time?


Well, these side discussions get a little messy.  The leap second
assassins haven't made any particular fuss about kernel computing
issues, either, just previous and next generation global positioning
and certain spread spectrum applications and the inchoate fear of
airplanes falling from the sky.

The probability of the latter occurring seems likely to increase a
few years after leap seconds are finally eradicated - after all,
airplanes follow great circles and might actually care to know the
orientation of the planet.  Hopefully, should such a change occur
courtesy of WP7A, all pilots, all airlines and all air traffic
control centers will get the memo and not make any sign errors in
implementing contingent patches.  It's the height of hubris to simply
assume all the problems vanish with those dastardly leap seconds.  (I
don't suppose the kernel currently has to perform spherical trig?)

Note that the noisy astronomer types on this list are all also
software types; we won't reject computing issues out of hand.


I'm just suggesting that some of the suggested ideas have real
performance issues that mean they wouldn't even be considered as
viable options.


Real performance issues will be compelling evidence to all parties.
Real performance issues can be described with real data.


True, but timekeeping is one of those areas of the kernel where the
extra overhead is incurred so many times that making it more complex
hurts a lot more than you'd naively think.


Either the overhead in question is intrinsic to the reality of
timekeeping, or it is not.  In the latter case, one might expect
that we could all agree that the kernel(s) in question are at fault,
or that POSIX is at fault.  I have little sympathy for the suggestion
that, having established that POSIX or vendors are at fault, we
let them get away with it anyway.  Rather, work around any limitations
in the mean time and redesign properly for the future.

If, however, the overhead is simply the cost of doing timekeeping
right, then I submit that it is better to do timekeeping right than
to do it wrong.  Doing it right certainly may involve appropriate
approximations.  Destroying mean solar time based civil time-of-day
is not appropriate.

Of course, we have yet to establish the extent of any problem with
such overhead.  It sounds like you have expertise in this area.
Assemble your 

Re: Introduction of long term scheduling

2007-01-07 Thread David Malone
 So you think it is appropriate to demand that every computer with a
 clock should suffer biannual software upgrades if it is not connected
 to a network where it can get NTP or similar service?

 I know people who will disagree with you:

 Air traffic control
 Train control
 Hospitals

 and the list goes on.

FWIW, I believe most hospitals are more than capable of looking
after equipment with complex maintenance schedules.  They have
endoscopes, blood gas analysers, gamma cameras, MRI machines,
dialysis machines and a rake of other stuff with schedules
requiring attention more regularly than once every 6 months.

I am not sure how much un-networked equipment that requires UTC to
1 second and doesn't already have a suitable maintenance schedule
exists in hospitals.

David.


Re: Introduction of long term scheduling

2007-01-07 Thread Tony Finch
On Sat, 6 Jan 2007, M. Warner Losh wrote:

 Most filesystems store time as UTC anyway...

POSIX time is not UTC :-)

Tony.
--
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
SOUTHEAST ICELAND: CYCLONIC 6 TO GALE 8, DECREASING 5 OR 6 LATER. ROUGH OR
VERY ROUGH. OCCASIONAL RAIN OR WINTRY SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-07 Thread Tony Finch
On Sun, 7 Jan 2007, Rob Seaman wrote:

 It's not the correct time under the current standard if the
 timekeeping model doesn't implement leap seconds correctly.  I don't
 think this is an impossible expectation; see
 http://www.eecis.udel.edu/~mills/exec.html, starting with the Levine
 and Mills PTTI paper.

As http://www.eecis.udel.edu/~mills/leap.html shows, NTP (with kernel
support) is designed to stop the clock over the leap second, which I
don't call correct. Without kernel support it behaves like a pinball
machine (according to Mills).

Tony.
--
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
SOUTHEAST ICELAND: CYCLONIC 6 TO GALE 8, BECOMING VARIABLE 4 FOR A TIME. ROUGH
OR VERY ROUGH. OCCASIONAL RAIN OR WINTRY SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-07 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], David Malone writes:

FWIW, I believe most hospitals are more than capable of looking
after equipment with complex maintenance schedules.

It is not just a question of ability; to a very high degree,
cost is much more important.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-07 Thread M. Warner Losh
In message: [EMAIL PROTECTED]
Tony Finch [EMAIL PROTECTED] writes:
: On Sat, 6 Jan 2007, M. Warner Losh wrote:
: 
:  Most filesystems store time as UTC anyway...
:
: POSIX time is not UTC :-)

True.  It is designed to be UTC, but fails to properly implement UTC's
leap seconds and the intervals around leap seconds.

Warner


Re: Introduction of long term scheduling

2007-01-07 Thread M. Warner Losh
In message: [EMAIL PROTECTED]
Rob Seaman [EMAIL PROTECTED] writes:
:  If by "some limp attempt" you mean "returns the correct time" then you
:  are correct.
:
: It's not the correct time under the current standard if the
: timekeeping model doesn't implement leap seconds correctly.  I don't
: think this is an impossible expectation; see
: http://www.eecis.udel.edu/~mills/exec.html, starting with the Levine
: and Mills PTTI paper.

It implements exactly what ntpd wants.  I asked Judah Levine when
determining what was pedantically correct during the leap second.  I
also consulted the many different resources available to determine
what the right thing is.  Of course, there are different explanations
of what the leap second should look like depending on whether you
listen to Dr. Levine or Dr. Mills.  Dr. Mills' web site says 'redo
the first second of the next day' while Dr. Levine's leapsecond.dat
file says 'repeat the last second of the day.'  Actually, both of
them hedge and say 'most systems implement...' or some variation on
that theme.

It is possible to determine when you are in a leap second using ntp
extensions with their model.  Just not with POSIX interfaces.  The
POSIX interfaces are kludged, while the ntpd ones give you enough info
to know to print :59 or :60; POSIX time_t is insufficiently
expressive, by itself, to know.  But ntp_gettime returns a timespec
for the time, as well as a time_state for the current time status,
which includes TIME_INS and TIME_DEL as positive and negative leap
second 'warnings' for the end of the day (so you know there will be a
leap today), and TIME_WAIT for the actual positive leap second itself
(there's nothing for a negative leap second, obviously).
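That extra expressiveness can be sketched as follows, with Python
stand-ins for the C time_state constants from sys/timex.h (numeric
values in the conventional order; per the description above, TIME_WAIT
marks the inserted second itself):

```python
# Stand-ins for the <sys/timex.h> time_state constants.
TIME_OK, TIME_INS, TIME_DEL, TIME_OOP, TIME_WAIT = range(5)

def display_second(tm_sec, time_state):
    """What to print for the seconds field: POSIX time_t repeats
    23:59:59 during a positive leap second, but the NTP state lets
    us show :60 instead."""
    if time_state == TIME_WAIT and tm_sec == 59:
        return 60
    return tm_sec
```

With time_t alone, both seconds of the pair look identical; the
time_state is what disambiguates them.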

So I stand by my "returns the correct time" statement.

Warner


Re: Introduction of long term scheduling

2007-01-07 Thread Tony Finch
On Sun, 7 Jan 2007, M. Warner Losh wrote:

 [POSIX time] is designed to be UTC, but fails to properly implement
 UTC's leap seconds and intervals around leapseconds.

From the historical point of view I'd say that UNIX time was originally
designed to be some vague form of UT, and the POSIX committee retro-fitted
a weak form of UTC synchronization.

Tony.
--
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
DOGGER FISHER GERMAN BIGHT HUMBER: SOUTHWEST, VEERING NORTHWEST FOR A TIME, 6
TO GALE 8, OCCASIONALLY SEVERE GALE 9 IN DOGGER. ROUGH OR VERY ROUGH. RAIN OR
SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-07 Thread Daniel R. Tobias
On 8 Jan 2007 at 0:15, Tony Finch wrote:

 How did you extend the UTC translation back past 1972 if the underlying
 clock followed TAI? I assume that beyond some point in the past you say
 that the clock times are a representation of UT. However TAI matched UT in
 1958, and between then and 1972 you somehow have to deal with a 10 s offset.

Formulas for UTC, as actually defined at the time, go back to 1961
here:

ftp://maia.usno.navy.mil/ser7/tai-utc.dat

It appears the offset was 1.4228180 seconds at the start of this.
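As an illustration of what those pre-1972 entries look like, the 1961
line of tai-utc.dat defines TAI-UTC as a linear ("rubber second")
expression; a sketch with the constants from that file:

```python
def tai_minus_utc_1961(mjd):
    """TAI-UTC in seconds per the 1961 entry of tai-utc.dat:
    1.4228180 s + (MJD - 37300) * 0.001296 s, valid from
    1961-01-01 (MJD 37300) until the next entry supersedes it."""
    return 1.4228180 + (mjd - 37300.0) * 0.001296

offset = tai_minus_utc_1961(37300)   # offset at the start of 1961
```

The offset grows steadily with MJD, which is exactly the rate
adjustment that rubber seconds implemented.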

--
== Dan ==
Dan's Mail Format Site: http://mailformat.dan.info/
Dan's Web Tips: http://webtips.dan.info/
Dan's Domain Site: http://domains.dan.info/


Re: Introduction of long term scheduling

2007-01-06 Thread Tony Finch
On Sat, 6 Jan 2007, M. Warner Losh wrote:

 OSes usually deal with timestamps all the time for various things, from
 finding out how much CPU to bill a process to more mundane things.
 Having to do all these gymnastics is going to hurt performance.

That's why leap second handling should be done in userland as part of the
conversion from clock (scalar) time to civil (broken-down) time.

Tony.
--
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
SOUTHEAST ICELAND: SOUTHWEST BECOMING CYCLONIC 5 TO 7, PERHAPS GALE 8 LATER.
ROUGH TO HIGH. SQUALLY SHOWERS. MAINLY GOOD.


Re: Introduction of long term scheduling

2007-01-06 Thread M. Warner Losh
In message: [EMAIL PROTECTED]
Tony Finch [EMAIL PROTECTED] writes:
: On Sat, 6 Jan 2007, M. Warner Losh wrote:
: 
:  OSes usually deal with timestamps all the time for various things, from
:  finding out how much CPU to bill a process to more mundane things.
:  Having to do all these gymnastics is going to hurt performance.
:
: That's why leap second handling should be done in userland as part of the
: conversion from clock (scalar) time to civil (broken-down) time.

Right.  And that's what makes things hard: the kernel time clock
needs to be monotonic, and leap seconds break that rule if one does
things in UTC such that the naive math just works (aka POSIX
time_t).  Some systems punt on keeping POSIX time internally, but
have complications in getting leap seconds right for the times they
return to userland.

Warner


Re: Introduction of long term scheduling

2007-01-06 Thread Rob Seaman

Warner Losh wrote:


leap seconds break that rule if one does things in UTC such that
the naive math just works


All civil timekeeping, and most precision timekeeping, requires only
pretty naive math.  Whatever the problem is - or is not - with leap
seconds, it isn't the arithmetic involved.  Take a look at [EMAIL PROTECTED]
and other BOINC projects.  Modern computers have firepower to burn in
fluff like live 3-D screensavers.  POSIX time handling just sucks for
no good reason.  Other system interfaces successfully implement
significantly more stringent facilities.

Expecting to be able to naively subtract timestamps to compute an
accurate interval reminds me of expecting to be able to naively stuff
pointers into integer datatypes and have nothing ever go wrong.  A
strongly typed language might even overload the subtraction of
UTC-typed variables with the correct time-of-day-to-interval
calculation.  But then, what should one expect the subtraction of
Earth orientation values to return but some sort of angle, not an
interval?
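A sketch of that overloading, with a hypothetical one-entry leap table
(the entry shown corresponds to the leap second at the end of
2005-12-31; a real table would carry them all):

```python
from dataclasses import dataclass

# POSIX timestamps at which a positive leap second was inserted
# (one illustrative entry only).
LEAPS = [1136073600]   # end of 2005-12-31 UTC

@dataclass(frozen=True)
class UTC:
    posix: int   # POSIX seconds, which silently skip leap seconds

    def __sub__(self, other):
        """True elapsed SI seconds between two UTC stamps."""
        lo, hi = sorted((self.posix, other.posix))
        inserted = sum(1 for t in LEAPS if lo < t <= hi)
        naive = self.posix - other.posix
        return naive + inserted if naive >= 0 else naive - inserted

# An interval spanning the leap second is one second longer than the
# naive POSIX subtraction suggests:
delta = UTC(1136073700) - UTC(1136073500)
```

Here the type system, not the caller, carries the leap-second
knowledge; plain integer subtraction would silently lose the extra
second.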

Rob


Re: Introduction of long term scheduling

2007-01-06 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Tony Finch writes:
On Sat, 6 Jan 2007, M. Warner Losh wrote:

 OSes usually deal with timestamps all the time for various things, from
 finding out how much CPU to bill a process to more mundane things.
 Having to do all these gymnastics is going to hurt performance.

That's why leap second handling should be done in userland as part of the
conversion from clock (scalar) time to civil (broken-down) time.

I would agree with you in theory, but badly designed filesystems
like FAT store timestamps in encoded YMDHMS format, so the kernel
needs to know the trick as well.  (There are other examples, but not
as well known.)

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-06 Thread M. Warner Losh
In message: [EMAIL PROTECTED]
Rob Seaman [EMAIL PROTECTED] writes:
: Warner Losh wrote:
:
:  leap seconds break that rule if one does things in UTC such that
:  the naive math just works
:
: All civil timekeeping, and most precision timekeeping, requires only
: pretty naive math.  Whatever the problem is - or is not - with leap
: seconds, it isn't the arithmetic involved.  Take a look at [EMAIL PROTECTED]
: and other BOINC projects.  Modern computers have firepower to burn in
: fluff like live 3-D screensavers.  POSIX time handling just sucks for
: no good reason.  Other system interfaces successfully implement
: significantly more stringent facilities.

But modern servers and routers don't.  Anything that makes the math
harder (more computationally expensive) can have huge effects on
performance in these areas.  That's because the math is done so often
that any little change causes big headaches.

: Expecting to be able to naively subtract timestamps to compute an
: accurate interval reminds me of expecting to be able to naively stuff
: pointers into integer datatypes and have nothing ever go wrong.

Well, the kernel doesn't expect to be able to do that.  Internally,
all the FreeBSD kernel keeps is time based on a monotonically
increasing second count since boot.  When time is returned, it is
adjusted to the right wall time.  The kernel only worries about leap
seconds when time is incremented, since the ntpd portion in the kernel
needs to return special things during the leap second.  If there were
no leapseconds, then even that computation could be eliminated.  One
might think that one could 'defer' this work to gettimeofday and
friends, but that turns out to not be possible (or at least it is much
more inefficient to do it there).

Since the interface to the kernel is time_t, there's really no chance
for the kernel to do anything smarter with leapseconds.  gettimeofday,
time and clock_gettime all return a time_t in different flavors.

In short, you are taking things out of context and drawing the wrong
conclusion about what is done.  It is these complications, which I've
had to deal with over the past 7 years, that have led me to the
understanding of the complications.  Especially the 'non-uniform radix
crap' that's in UTC.  It really does complicate things in a number of
places that you wouldn't expect.  To dismissively suggest it is only a
problem when subtracting two numbers to get an interval time is to
completely misunderstand the complications that leapseconds introduce
into systems and the unexpected places where they pop up.  Really, it
is a lot more complicated than just the 'simple' case you've latched
onto.
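
The 'non-uniform radix' point can be made concrete with a toy sketch
(not FreeBSD's actual code; the helper and its day_has_leap flag are
invented for illustration): a UTC day is not always 86400 s, so
fixed-radix HH:MM:SS arithmetic silently breaks on leap days.

```c
/* Length in SI seconds of the UTC minute beginning at (hh,mm), on a
 * day that may end with a positive leap second.  A real system would
 * get day_has_leap from the IERS table or an ntpd-style agent. */
int utc_minute_length(int hh, int mm, int day_has_leap)
{
    if (day_has_leap && hh == 23 && mm == 59)
        return 61;      /* 23:59:60 exists on this day */
    return 60;
}

/* Summing the minutes shows why "seconds = hh*3600 + mm*60 + ss"
 * is wrong on such a day: the day is one second longer. */
long day_length(int day_has_leap)
{
    long s = 0;
    for (int hh = 0; hh < 24; hh++)
        for (int mm = 0; mm < 60; mm++)
            s += utc_minute_length(hh, mm, day_has_leap);
    return s;           /* 86400 normally, 86401 with a leap second */
}
```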

: A
: strongly typed language might even overload the subtraction of UTC
: typed variables with the correct time-of-day to interval
: calculations.

Kernels aren't written in these languages.  To base one's arguments
about the right type for time on these languages is a non-starter.

: But then, what should one expect the subtraction of
: Earth orientation values to return but some sort of angle, not an
: interval?

These are a specialized thing that kernels don't care about.

Warner


Re: Introduction of long term scheduling

2007-01-06 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Rob Seaman writes:
Warner Losh wrote:

 leap seconds break that rule if one does things in UTC such that
 the naive math just works

POSIX time handling just sucks for no good reason.

I've said it before, and I'll say it again:

There are two problems:

1. We get too short notice about leap-seconds.

2. POSIX and other standards cannot invent their UTC timescales.

These two problems can be solved according to two plans:

A. Abolish leap seconds.

B. i) Issue leapseconds with at least twenty times longer notice.
   ii) Amend POSIX and/or ISO-C
   iii) Amend NTP
   iv) Convince all operating systems to adopt the new API
   v) Fix all the bugs in their implementations
   vi) Fix up all the relevant application code
   vii) Fix all the tacit assumptions about time_t.

I will fully agree that, while taking the much easier approach of
plan A will vindicate the potheads who wrote the time_t definition,
and thus deprive us of the very satisfying intellectual reward of
striking their handiwork from the standards, it would cost only a
fraction of plan B.


Poul-Henning

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-06 Thread Steve Allen
On Sat 2007-01-06T19:36:19 +, Poul-Henning Kamp hath writ:
 There are two problems:

 1. We get too short notice about leap-seconds.

 2. POSIX and other standards cannot invent their UTC timescales.

This is not fair, for there is a more fundamental problem:

No two clocks can ever stay in agreement.

And the question that POSIX time_t does not answer is:

What do you want to do about that?

In some applications, especially the one for which it was designed,
there is nothing wrong with POSIX time_t.  POSIX is just fine to
describe a clock which is manually reset as necessary to stay within
tolerance.

There are now other applications.
For some of those POSIX cannot do the job -- with or without leap seconds.

Yes, there is a cost of doing time right, and leap seconds are not to
blame for that cost.  They are a wake up call from the state of denial.

--
Steve Allen [EMAIL PROTECTED]WGS-84 (GPS)
UCO/Lick ObservatoryNatural Sciences II, Room 165Lat  +36.99858
University of CaliforniaVoice: +1 831 459 3046   Lng -122.06014
Santa Cruz, CA 95064http://www.ucolick.org/~sla/ Hgt +250 m


Re: Introduction of long term scheduling

2007-01-06 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Steve Allen writes:
On Sat 2007-01-06T19:36:19 +, Poul-Henning Kamp hath writ:
 There are two problems:

 1. We get too short notice about leap-seconds.

 2. POSIX and other standards cannot invent their UTC timescales.

This is not fair, for there is a more fundamental problem:

Yes, this is perfectly fair; these are all the problems there are.

And furthermore, the two plans I outlined represent the only
two kinds of plans there are for solving this.

They can be varied for various sundry and unsundry purposes, such
as the leap-hour fig-leaf and similar, but there are only
two classes of solutions.

No two clocks can ever stay in agreement.

This is not relevant.  It's not a matter of clock precision or
clock stability.  It's only a matter of how they count.

Yes, there is a cost of doing time right, and leap seconds are not to
blame for that cost.  They are a wake up call from the state of denial.

Now, it can be equally argued that leap seconds implement a state
of denial with respect to a particular lump of rock's ability as a
timekeeper, so I suggest we keep that part of the discussion closed
for now.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-06 Thread Ashley Yakeley

On Jan 6, 2007, at 11:36, Poul-Henning Kamp wrote:


B. i) Issue leapseconds with at least twenty times longer
notice.


This plan might not be so good from a software engineering point of
view. Inevitably software authors would hard-code the known table,
and then the software would fail ten years later with the first
unexpected leap second.

At least with the present system, programmers are (more) forced to
face the reality of the unpredictability of the time-scale.

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-06 Thread Tony Finch
On Sat, 6 Jan 2007, Steve Allen wrote:

 No two clocks can ever stay in agreement.

I don't think that statement is useful. Most people have a concept of
accuracy within certain tolerances, dependent on the quality of the clock
and its discipline mechanisms. For most purposes a computer's clock can be
kept correct with more than enough accuracy, and certainly enough accuracy
that leap seconds are noticeable.

Tony.
--
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
HEBRIDES BAILEY FAIR ISLE FAEROES: SOUTHWEST 6 TO GALE 8. VERY ROUGH OR HIGH.
RAIN OR SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-06 Thread Ashley Yakeley

On Jan 6, 2007, at 13:47, Poul-Henning Kamp wrote:


In message [EMAIL PROTECTED], Ashley Yakeley writes:

On Jan 6, 2007, at 11:36, Poul-Henning Kamp wrote:


B. i) Issue leapseconds with at least twenty times longer
notice.


This plan might not be so good from a software engineering point of
view. Inevitably software authors would hard-code the known table,
and then the software would fail ten years later with the first
unexpected leap second.


Ten years later is a heck of a lot more acceptable than 7 months
later.


Not necessarily. After seven months, or even after two years, there's
a better chance that the product is still in active maintenance.
Better to find that particular bug early, if someone's been so
foolish as to hard-code a leap-second table. The bug here, by the
way, is not that one particular leap second table is wrong. It's the
assumption that any fixed table can ever be correct.

If you were to make that assumption in your code, then your product
would be defective if it's ever used ten years from now (under your
plan B). Programs in general tend to be used for a while. Is any of
your software from 1996 or before still in use? I should hope so.

Under the present system, however, it's a lot more obvious that a
hard-coded leap second table is a bad idea.
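
Ashley's failure mode can be sketched in C: a compiled-in table is
only trustworthy until its expiration date, and a robust consumer
must fail loudly past that point rather than return stale answers.
The MJD values and expiry below are illustrative (53735 and 54831 are
intended as 2005-12-31 and 2008-12-31, from memory).

```c
struct leap_table {
    long expires_mjd;        /* table is valid through this MJD */
    const long *leap_mjds;   /* MJDs whose final minute had 61 s */
    int n;
};

static const long known_leaps[] = { 53735, 54831 };
static const struct leap_table table = { 55000, known_leaps, 2 };

/* Count of leap seconds that occurred before mjd, or -1 if the table
 * has expired and the answer can no longer be trusted. */
int leaps_before(const struct leap_table *t, long mjd)
{
    if (mjd > t->expires_mjd)
        return -1;           /* force callers to face staleness */
    int n = 0;
    for (int i = 0; i < t->n; i++)
        if (t->leap_mjds[i] < mjd)
            n++;
    return n;
}
```

The -1 return is the whole point: software that hard-codes the table
without an expiry check is the bug Ashley describes.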

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-06 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Ashley Yakeley writes:

Not necessarily. After seven months, or even after two years, there's
a better chance that the product is still in active maintenance.
Better to find that particular bug early, if someone's been so
foolish as to hard-code a leap-second table. The bug here, by the
way, is not that one particular leap second table is wrong. It's the
assumption that any fixed table can ever be correct.

So you think it is appropriate to demand that every computer with a
clock should suffer biannual software upgrades if it is not connected
to a network where it can get NTP or similar service?

I know people who will disagree with you:

Air traffic control
Train control
Hospitals

and the list goes on.

6 months is simply not an acceptable warning to get, end of story.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-06 Thread Ashley Yakeley

On Jan 6, 2007, at 14:43, Poul-Henning Kamp wrote:


So you think it is appropriate to demand that every computer with a
clock should suffer biannual software upgrades if it is not connected
to a network where it can get NTP or similar service?


Since that's the consequence of hard-coding a leap-second table,
that's exactly what I'm not proposing. Instead, they should suffer
biannual updates to their leap-second table. Doing this is an
engineering problem, but a known one.

Under your plan B, however, we'd have plenty of software that just
wouldn't get upgraded at all, but would simply fail after ten years.
That strikes me as worse.


I know people who will disagree with you:


I don't think you're serious.


Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe


Don't forget  | one second off since 2018. :-)

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-06 Thread M. Warner Losh
In message: [EMAIL PROTECTED]
Ashley Yakeley [EMAIL PROTECTED] writes:
: On Jan 6, 2007, at 08:35, M. Warner Losh wrote:
:
:  So for the foreseeable future,
:  timestamps in OSes will be a count of seconds and a fractional second
:  part.  That's not going to change anytime soon even with faster
:  machines, more memory, etc.  Too many transaction processing
:  applications demand maximum speed.
:
: That's sensible for a simple timestamp, but trying to squeeze in a
: leap-second table probably isn't such a good idea.

Unfortunately, the kernel has to have a notion of time stepping around
a leap-second if it implements ntp.  There's no way around that that
isn't horribly expensive or difficult to code.  The reasons for the
kernel's need to know have been enumerated elsewhere...

Warner


Re: Introduction of long term scheduling

2007-01-06 Thread Ashley Yakeley

On Jan 6, 2007, at 16:18, M. Warner Losh wrote:


Unfortunately, the kernel has to have a notion of time stepping around
a leap-second if it implements ntp.  There's no way around that that
isn't horribly expensive or difficult to code.  The reasons for the
kernel's need to know have been enumerated elsewhere...


Presumably it only needs to know the next leap-second to do this, not
the whole known table?

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-06 Thread M. Warner Losh
In message: [EMAIL PROTECTED]
Ashley Yakeley [EMAIL PROTECTED] writes:
: On Jan 6, 2007, at 16:18, M. Warner Losh wrote:
:
:  Unfortunately, the kernel has to have a notion of time stepping around
:  a leap-second if it implements ntp.  There's no way around that that
:  isn't horribly expensive or difficult to code.  The reasons for the
:  kernel's need to know have been enumerated elsewhere...
:
: Presumably it only needs to know the next leap-second to do this, not
: the whole known table?

Yes.  ntpd or another agent tells it when leap seconds are coming.  It
doesn't need a table.  Then again, none of the broadcast time services
provide a table...
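
The "kernel only needs the next leap second" scheme can be sketched
as a one-shot flag: an ntpd-like agent arms it, and the per-second
tick code consumes it at the day boundary.  Names and structure are
invented for illustration; this is not FreeBSD's actual timecounter
code, which is considerably more involved.

```c
int  leap_pending;   /* armed by the ntpd-like agent */
long utc_seconds;    /* naive UTC second counter */

void arm_leap(void) { leap_pending = 1; }

/* Called once per second.  On a positive leap second at the day
 * boundary, the counter repeats one value, mimicking 23:59:60. */
void tick_second(int at_day_boundary)
{
    if (at_day_boundary && leap_pending) {
        leap_pending = 0;    /* insert the extra second: don't advance */
        return;
    }
    utc_seconds++;
}
```

Note there is no table anywhere in sight, matching Warner's point
that the broadcast services don't provide one either.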

Warner


Re: Introduction of long term scheduling

2007-01-06 Thread Rob Seaman

Warner Losh wrote:


Anything that makes the math
harder (more computationally expensive) can have huge effects on
performance in these areas.  That's because the math is done so often
that any little change causes big headaches.


Every IP packet has a 1's complement checksum.  (That not all
switches handle these properly is a different issue.)  Calculating a
checksum is about as expensive as (or more so than) subtracting
timestamps the right way.  I have a hard time believing that epoch-
interval conversions have to be performed more often than IP
packets are assembled.  One imagines (would love to be pointed to
actual literature regarding such issues) that most computer time
handling devolves to requirements for relative intervals and epochs,
not to stepping outside to any external clock at all.  Certainly the
hardware clocking of signals is an issue entirely separate from what
we've been discussing as timekeeping and traceability.  (And note
that astronomers face much more rigorous requirements in a number of
ways when clocking out their CCDs.)


Well, the kernel doesn't expect to be able to do that.  Internally,
all the FreeBSD kernel does is time based on a monotonically
increasing second count since boot.  When time is returned, it is
adjusted to the right wall time.


Well, no - the point is that only some limp attempt is made to adjust
to the right time.


The kernel only worries about leap
seconds when time is incremented, since the ntpd portion in the kernel
needs to return special things during the leap second.  If there were
no leapseconds, then even that computation could be eliminated.  One
might think that one could 'defer' this work to gettimeofday and
friends, but that turns out to not be possible (or at least it is much
more inefficient to do it there).


One might imagine that an interface could be devised that would only
carry the burden for a leap second when a leap second is actually
pending.  Then it could be handled like any other rare phenomenon
that has to be dealt with correctly - like context switching or
swapping.


Really, it is a lot more complicated than just the 'simple' case
you've latched onto.


Ditto for Earth orientation and its relation to civil timekeeping.
I'm happy to admit that getting it right at the CPU level is
complex.  Shouldn't we be focusing on that, rather than on
eviscerating mean solar time?  In general, either side here would
have a better chance of convincing the other if actual proposals,
planning, research, requirements, and so forth, were discussed.  The
only proposal on the table - and the only one I spend every single
message trying to shoot down - is the absolutely ridiculous leap hour
proposal.  We're not defending leap seconds per se - we're defending
mean solar time.

A proposal to actually address the intrinsic complications of
timekeeping is more likely to be received warmly than is a kludge or
partial workaround.  I suspect it would be a lot more fun, too.


Kernels aren't written in these languages.  To base one's arguments
about the right type for time on these languages is a non-starter.


No, but the kernels can implement support for these types and the
applications can code to them in whatever language.  Again - there is
a hell of a lot more complicated stuff going on under the hood than
what would be required to implement a proper model of timekeeping.

Rob


Re: Introduction of long term scheduling

2007-01-06 Thread Tony Finch
On Sat, 6 Jan 2007, Ashley Yakeley wrote:

 Presumably it only needs to know the next leap-second to do this, not
 the whole known table?

Kernels sometimes need to deal with historical timestamps (principally
from the filesystem) so it'll need a full table to be able to convert
between POSIX time and atomic time for compatibility purposes.

Tony.
--
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
SHANNON ROCKALL MALIN: MAINLY WEST OR SOUTHWEST 6 TO GALE 8, OCCASIONALLY
SEVERE GALE 9. VERY ROUGH OR HIGH. RAIN OR SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-06 Thread M. Warner Losh
In message: [EMAIL PROTECTED]
Rob Seaman [EMAIL PROTECTED] writes:
: Warner Losh wrote:
:  Anything that makes the math
:  harder (more computationally expensive) can have huge effects on
:  performance in these areas.  That's because the math is done so often
:  that any little change causes big headaches.
:
: Every IP packet has a 1's complement checksum.  (That not all
: switches handle these properly is a different issue.)

Actually, not every IP packet has a 1's complement checksum.  Sure,
there is a trivial one that covers the 20 bytes of header, but that's
it.  Most hardware these days offloads checksumming anyway to
increase the throughput.  Maybe you are thinking of TCP or UDP :-).
Often, the packets are copied and therefore in the cache, so the
addition operations are very cheap.

: Calculating a
: checksum is about as expensive (or more so) than subtracting
: timestamps the right way.  I have a hard time believing that epoch-
:  interval conversions have to be performed more often than IP
: packets are assembled.

Benchmarks do not lie.  Also, you are misunderstanding the purpose of
timestamps in the kernel.  Adding or subtracting two of them is
relatively easy.  Converting to a broken-down format or doing math
with the complicated forms is much more code intensive.  Dealing with
broken-down forms and all the special cases usually involves
multiplication and division, which tend to be more computationally
expensive than the checksum.

: One imagines (would love to be pointed to
: actual literature regarding such issues) that most computer time
: handling devolves to requirements for relative intervals and epochs,
: not to stepping outside to any external clock at all.  Certainly the
: hardware clocking of signals is an issue entirely separate from what
: we've been discussing as timekeeping and traceability.  (And note
: that astronomers face much more rigorous requirements in a number of
: ways when clocking out their CCDs.)

Having actually participated in the benchmarks that showed the effects
of inefficient timekeeping, I can say that they have a measurable
effect.  I'll try to find references that the benchmarks generated.

:  Well, the kernel doesn't expect to be able to do that.  Internally,
:  all the FreeBSD kernel does is time based on a monotonically
:  increasing second count since boot.  When time is returned, it is
:  adjusted to the right wall time.
:
: Well, no - the point is that only some limp attempt is made to adjust
: to the right time.

If by "some limp attempt" you mean "returns the correct time", then
you are correct.

:  The kernel only worries about leap
:  seconds when time is incremented, since the ntpd portion in the kernel
:  needs to return special things during the leap second.  If there were
:  no leapseconds, then even that computation could be eliminated.  One
:  might think that one could 'defer' this work to gettimeofday and
:  friends, but that turns out to not be possible (or at least it is much
:  more inefficient to do it there).
:
: One might imagine that an interface could be devised that would only
: carry the burden for a leap second when a leap second is actually
: pending.  Then it could be handled like any other rare phenomenon
: that has to be dealt with correctly - like context switching or
: swapping.

You'd think that, but you have to test to see if something was
pending.  And the code actually does that.

:  Really, it is a lot more complicated than just the 'simple' case
:  you've latched onto.
:
: Ditto for Earth orientation and its relation to civil timekeeping.
: I'm happy to admit that getting it right at the CPU level is
: complex.  Shouldn't we be focusing on that, rather than on
: eviscerating mean solar time?

Did I say anything about eviscerating mean solar time?

: A proposal to actually address the intrinsic complications of
: timekeeping is more likely to be received warmly than is a kludge or
: partial workaround.  I suspect it would be a lot more fun, too.

I'm just suggesting that some of the suggested ideas have real
performance issues that mean they wouldn't even be considered
viable options.

:  Kernels aren't written in these languages.  To base one's arguments
:  about the right type for time on these languages is a non-starter.
:
: No, but the kernels can implement support for these types and the
: applications can code to them in whatever language.  Again - there is
: a hell of a lot more complicated stuff going on under the hood than
: what would be required to implement a proper model of timekeeping.

True, but timekeeping is one of those areas of the kernel where the
code runs so often that extra overhead hurts a lot more than you'd
naively think.

Warner


Re: Introduction of long term scheduling

2007-01-05 Thread Tony Finch
On Thu, 4 Jan 2007, Michael Deckers wrote:

This leads me to my question: would it be helpful for POSIX implementors
if each and every UTC timestamp came with the corresponding value of DTAI
attached (instead of DUT1)? Would this even obviate the need for a leap
seconds table?

No, because you need to be able to manipulate representations of times
other than the present, so you need a full leap second table. You might as
well distribute it with the time zone database because both are used by
the same component of the system and the leap second table changes more
slowly than the time zone database.

You don't need to transmit TAI-UTC with every timestamp: for example, NTP
and GPS transmit UTC offset tables and updates comparatively infrequently.

Tony.
--
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
WIGHT PORTLAND PLYMOUTH: WEST 4 OR 5, BECOMING CYCLONIC 5 TO 7 FOR A TIME,
THEN NORTHWEST 5 OR 6 LATER. MODERATE OCCASIONALLY ROUGH IN PORTLAND AND
PLYMOUTH. OCCASIONAL RAIN OR DRIZZLE. GOOD OCCASIONALLY MODERATE OR POOR.


Re: Introduction of long term scheduling

2007-01-05 Thread Rob Seaman

Tony Finch wrote:


you need to be able to manipulate representations of times other
than the present, so you need a full leap second table.


Which raises the question of how concisely one can express a leap
second table.  Leap second tables are simply a list of dates - in ISO
8601 or MJD formats, for example.  Additionally you need an
expiration date.  An ISO string is really overkill, MJD can fit into
an unsigned short for the next few decades - but this is really more
than you need for the current standard since not all MJDs are
permitted, only once per month.  Also, we don't need to express leap
seconds that are already known (or never existed), so there is a
useless bias of ~54000 days.  If we start counting months now, a
short integer will suffice to encode each leap second for the next
5000+ years - certainly past the point when monthly scheduling will
no longer suffice.

So, let's see - assume:

   1) all 20th century leap seconds can be statically linked
   2) start counting months at 2000-01-31

We're seeing about 7 leapseconds per decade on average, round up to
10 to allow for a few decades worth of quadratic acceleration (less
important for the next couple of centuries than geophysical noise).
So 100 short integers should suffice for the next century and a
kilobyte likely for the next 500 years.  Add one short for the
expiration date, and a zero short word for an end of record stopper
and distribute it as a variable length record - quite terse for the
next few decades.  The current table would be six bytes (suggest
network byte order):

   0042 003C 0000

A particular application only needs to read the first few entries it
doesn't already have cached - scan backwards through the list just
until you pass the previous expiration date.  Could elaborate with a
checksum, certificate based signature or other provenance - but these
apply whatever the representation.
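
A decoder for the proposed record is nearly trivial, which is much of
its appeal.  This sketch assumes host byte order for brevity (a real
consumer would apply ntohs to each short first), and the function name
is invented; the test record mirrors the six-byte example above.

```c
/* Decode a leap record: rec[0] is the expiration month (months since
 * 2000-01), followed by one short per scheduled leap second, then a
 * zero stopper word.  Returns the number of leap entries found and
 * stores the expiration month via *expiry. */
int decode_leap_record(const unsigned short *rec, int *expiry,
                       int *leaps, int max)
{
    *expiry = rec[0];
    int n = 0;
    for (int i = 1; rec[i] != 0 && n < max; i++)
        leaps[n++] = rec[i];
    return n;
}
```

The variable-length, zero-terminated layout means a cached reader can
stop scanning as soon as it passes its previous expiration date.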

To emphasize a recent point:  DUT1 is currently negligible for many
applications.  Which is the same thing as saying that the simple
table of quantized leap seconds is quite sufficient for civil
purposes.  The effect of the ALHP is to inflate the importance of
DUT1 - not just for professional purposes, but for some list of
civil purposes that have yet to be inventoried, e.g., tide tables,
weather forecasts, pointing satellite dishes, aligning sundials (see
article in the Jan 2007 Smithsonian), navigation, aviation, amateur
astronomy, whatever.  I'm not arguing here that these are
intrinsically sufficient to justify retaining leap seconds (although
I believe this to be the case).  Rather, I'm arguing that even under
a "caves of steel" scenario of Homo sapiens inter-breeding with
Condylura cristata, there will be applications that require an
explicit DUT1 correction - applications that currently can ignore
this step since UTC is guaranteed to remain within 0.9s of GMT.

So the current requirement is merely to convey a few extra bytes of
state with a six month update cadence.  This suffices to tie civil
epochs (and a useful approximation of Earth orientation) to civil
intervals.

The requirement in the post-leap-second Mad Max future, however,
would be to convey some similar data structure representing a table
of DUT1 tie points accurate to some level of precision with some as-
yet-unspecified cadencing requirement.  The most natural way to
express this might be the nearest round month to when each integral
step in DUT1 occurs, but it should be clear that the requirement for
maintaining and conveying a table of leap seconds is not eliminated,
but rather transmogrified into a similar requirement to maintain and
convey a table of DUT1 values.

Rob Seaman
NOAO


Re: Introduction of long term scheduling

2007-01-05 Thread Steve Allen
On Fri 2007-01-05T21:14:19 -0700, Rob Seaman hath writ:
 Which raises the question of how concisely one can express a leap
 second table.

Gosh, Rob, I remember toggling in the boot program and starting
up the paper tape reader or the 12-inch floppy disc drive, but now
I'm not really sure I understand the need for compactness except in
formats which are specific to devices with very limited capacity.
I routinely carry around 21 GB of rewriteable storage.  It's
hard to imagine that the current generation of GPS receivers
has less than 100 MB and I expect that by the time Galileo is
flying it will be routine for handheld devices to have GB.

I would much prefer to see the IERS produce a rather verbose,
self-describing (to a machine), and extensible set of data products.
Devices which prefer a more compact version are free to compile the
full form into something suitable and specific to their limited needs.
Most devices will be satisfied with only the leap second table.

A leap second table in a working format is just one form of the
navigator's log containing information for the conversion of the
ship's chronometer to and from other, more universal time scales.
Leap seconds are step functions, but in general the chronometer
offsets are likely to be splines of higher order.
That's something which might benefit from having a well-defined
API and a number of examples of code which uses the information
to varying degrees of accuracy.

Some devices will never have clocks guaranteed to be set to within a
second of real time, and for that purpose the POSIX time_t API is
just dandy.  Other applications with access to other time sources
will want to use algorithms of more sophistication according to
their individual needs.

--
Steve Allen [EMAIL PROTECTED]WGS-84 (GPS)
UCO/Lick ObservatoryNatural Sciences II, Room 165Lat  +36.99858
University of CaliforniaVoice: +1 831 459 3046   Lng -122.06014
Santa Cruz, CA 95064http://www.ucolick.org/~sla/ Hgt +250 m


Re: Introduction of long term scheduling

2007-01-05 Thread Ashley Yakeley

On Jan 5, 2007, at 20:14, Rob Seaman wrote:


An ISO string is really overkill, MJD can fit into
an unsigned short for the next few decades


This isn't really a good idea. Most data formats have been moving
away from the compact towards more verbose, from binary to text to
XML. There are good reliability and extensibility reasons for this,
such as avoiding bit-significance order issues and the ability to
sanity-check it just by looking at it textually.

As the author of a library that consumes leap-second tables, my ideal
format would look something like this: a text file with first line
for MJD of expiration date, and each subsequent line with the MJD of
the start of the offset period, a tab, and then the UTC-TAI seconds
difference. That said, my notion of UTC is restricted to the
stepwise bit after 1972, and others might want more information.
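
A reader for that text format fits in a dozen lines of C.  This is a
sketch, not the actual library; it parses from an in-memory buffer
(a real consumer would read a file), and the struct and function
names are invented.  The test data uses the well-known 1972-01-01
(MJD 41317, TAI-UTC = 10) and 2006-01-01 (MJD 53736, TAI-UTC = 33)
entries.

```c
#include <stdio.h>

struct leap_entry { long mjd; int tai_utc; };

/* Parse "expiry-MJD\n" followed by "MJD<tab>TAI-UTC\n" lines.
 * Returns the number of entries parsed, or -1 on a malformed
 * first line. */
int parse_leap_text(const char *text, long *expiry,
                    struct leap_entry *out, int max)
{
    int consumed = 0, n = 0;
    if (sscanf(text, "%ld%n", expiry, &consumed) != 1)
        return -1;
    text += consumed;
    while (n < max &&
           sscanf(text, "%ld\t%d%n", &out[n].mjd,
                  &out[n].tai_utc, &consumed) == 2) {
        text += consumed;
        n++;
    }
    return n;
}
```

Putting the expiration date on the first line makes the staleness
check the very first thing any consumer performs.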

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-05 Thread Rob Seaman

Ashley Yakeley wrote:


As the author of a library that consumes leap-second tables, my ideal
format would look something like this: a text file with first line
for MJD of expiration date, and each subsequent line with the MJD of
the start of the offset period, a tab, and then the UTC-TAI seconds
difference.


As an author (and good gawd, an editor) of an XML standard and schema
to convey transient astronomical event alerts - including potentially
leap seconds - I'd have to presume that XML would do the trick.

The thread was a discussion of appending enough context to an
individual timestamp to avoid the need for providing historical leap
seconds table updates at all.  Someone else pointed out that this
didn't preserve the historical record.  I wanted to additionally
point out that the cost of appending the entire leap second table to
every timestamp would itself remain quite minimal for many years, and
further, that even getting rid of leap seconds doesn't remove the
requirement for conveying information equivalent to this table (on
some cadence to some precision).

The complications are inherent in the distinction between time-of-day
(Earth orientation) and interval time.  The intrinsic cost of
properly supporting both types of time is quite minimal.

Rob


Re: Introduction of long term scheduling

2007-01-04 Thread Michael Deckers
   On 2007-01-03, Poul-Henning Kamp commented on Bulletin D 94:

  That's an interesting piece of data in our endless discussions about
  how important DUT1 really is...

   So it appears that DUT1, an approximation of UT1 - UTC, is not of much use,
   even though it is disseminated with many time signals. On the other hand,
   POSIX implementors need the values of DTAI = TAI - UTC, the count of leap
   seconds, at least for those UTC timestamps in the future as may occur
   during the operation of the system.

   This leads me to my question: would it be helpful for POSIX implementors
   if each and every UTC timestamp came with the corresponding value of DTAI
   attached (instead of DUT1)? Would this even obviate the need for a leap
   seconds table?

   I realise that this would require changes or extensions to the time
   interfaces of POSIX (eg, a time_t value alone could no longer encode a
   complete timestamp). My question is just whether such timestamps,
   indicating both UTC as time-of-day and TAI as interval time, could
   be a viable alternative to the frequent updates of leap second tables.
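A sketch of what such a timestamp might look like, assuming a simple (hypothetical) pairing of a POSIX-style seconds count with the DTAI value in effect at that instant:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaggedTimestamp:
    """Hypothetical extended timestamp: UTC time-of-day as a
    POSIX-style seconds count, plus DTAI = TAI - UTC at that instant,
    so interval time can be recovered without a leap-second table."""
    utc_seconds: int  # POSIX time_t-style count (UTC time-of-day)
    dtai: int         # TAI - UTC, in integral seconds

    def tai_seconds(self) -> int:
        # Interval (TAI) time follows from applying the carried offset.
        return self.utc_seconds + self.dtai

# Two stamps one time_t second apart, straddling the leap second at
# the end of 2005 (DTAI stepped from 32 to 33): the elapsed interval
# time between them is 2 SI seconds, not 1.
before = TaggedTimestamp(utc_seconds=1136073599, dtai=32)
after = TaggedTimestamp(utc_seconds=1136073600, dtai=33)
elapsed_tai = after.tai_seconds() - before.tai_seconds()
```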

   Michael Deckers


Re: Introduction of long term scheduling

2007-01-03 Thread Peter Bunclark
On Tue, 2 Jan 2007, Rob Seaman wrote:

 Daniel R. Tobias replies to Poul-Henning Kamp:

  Has anybody calculated how much energy is required to change
  the Earth's rotation fast enough to make this rule relevant?
 
  Superman could do it.  Or perhaps he could nudge the Earth's rotation
  just enough to make the length of a mean solar day exactly equal
  86,400 SI seconds.

 Only briefly.  Consider the LOD plots from http://www.ucolick.org/
 ~sla/leapsecs/dutc.html.  The Earth wobbles like a top, varying its
 speed even if tidal slowing is ignored.

 Actually, rather than being merely a troublemaker, the Moon serves to
 stabilize the Earth's orientation.  The Rare Earth Hypothesis makes
 a strong case that a large Moon and other unlikely processes such as
 continental drift are required for multicellular life to evolve, in
 addition to the more familiar issues of a high system metal content
 and a stable planetary orbit at a distance permitting liquid water.
 Without the Moon, the Earth could nod through large angles, lying on
 its side or perhaps even rotating retrograde every few million
 years.  Try making sense of timekeeping under such circumstances.

 Rob Seaman
 NOAO

Hang on a minute, statistically planets in the Solar System do not have a
large moon and yet are upright; for example Mars comes very close to the
conditions required to generate a leapseconds email exploder.

Pete.


Re: Introduction of long term scheduling

2007-01-03 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Peter Bunclark writes:

 Without the Moon, the Earth could nod through large angles, lying on
 its side or perhaps even rotating retrograde every few million
 years.  Try making sense of timekeeping under such circumstances.

You mean like taking a sequence of atomic seconds, counting them
in a predictable way, and being happy that timekeeping has nothing
to do with geophysics?

Yeah, I could live with that.

Hang on a minute, statistically planets in the Solar System do not have a
large moon and yet are upright; for example Mars comes very close to the
conditions required to generate a leapseconds email exploder.

As far as I know the atmosphere is far too cold for that :-)

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-03 Thread Peter Bunclark
On Wed, 3 Jan 2007, Poul-Henning Kamp wrote:

 Hang on a minute, statistically planets in the Solar System do not have a
 large moon and yet are upright; for example Mars comes very close to the
 conditions required to generate a leapseconds email exploder.

 As far as I know the atmosphere is far too cold for that :-)

Similar to our polar regions where whales scoff krill all summer long!

A bit more mass - bit more atmospheric pressure, and ok maybe a bit
closer to the Sun...

Of course, life may have flourished on Mars 3 billion years ago and then
the Martians introduced the leap hour and the rest is pre-history...

Pete.


Re: Introduction of long term scheduling

2007-01-03 Thread Magnus Danielson
From: Poul-Henning Kamp [EMAIL PROTECTED]
Subject: Re: [LEAPSECS] Introduction of long term scheduling
Date: Wed, 3 Jan 2007 11:45:52 +
Message-ID: [EMAIL PROTECTED]

 In message [EMAIL PROTECTED], Peter Bunclark writes:

  Without the Moon, the Earth could nod through large angles, lying on
  its side or perhaps even rotating retrograde every few million
  years.  Try making sense of timekeeping under such circumstances.

 You mean like taking a sequence of atomic seconds, counting them
 in a predictable way, and being happy that timekeeping has nothing
 to do with geophysics?

 Yeah, I could live with that.

Assuming you have corrected for another gravitational field, yes. The current
SI second indirectly assumes a certain gravitational force; we are assumed to
be at sea level, whatever level that is. Oh, should we move our cesiums up and
down with the tides which the Moon arranges for us? Mother Nature provides so
many nice modulators. :o)

We still depend on geophysics to some degree.

Now, if we could find the mass center of the universe, propel away a really
good atomic clock constellation and use that for our time reference, we should
be off to a good start. No?

 Hang on a minute, statistically planets in the Solar System do not have a
 large moon and yet are upright; for example Mars comes very close to the
 conditions required to generate a leapseconds email exploder.

 As far as I know the atmosphere is far to cold for that :-)

No problem. With the heated discussions going on here it would be no problem
keeping the temperature up. :o)

Cheers,
Magnus


Re: Introduction of long term scheduling

2007-01-03 Thread Tony Finch
On Wed, 3 Jan 2007, Magnus Danielson wrote:

 Assuming you have corrected for another gravitational field, yes. The
 current SI second indirectly assumes a certain gravitational force; we
 are assumed to be at sea level, whatever level that is.

Wrong. The SI second is independent of your reference frame, and is
defined according to Einstein's principle of equivalence. What *does*
depend on the gravitational potential at the geoid is TAI (and TT), since
a timescale (unlike a fundamental unit) is relative to a reference frame.

 We still depend on geophysics to some degree.

Note that the standard relativistic transformations between TT, TCG, and
TCB are (since 2000) independent of the geoid. So although the realization
of these timescales is dependent on geophysics (because the atomic clocks
they are ultimately based on are sited on the planet) the mathematical
models try to avoid it.

Tony.
--
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
SOLE LUNDY FASTNET IRISH SEA: SOUTHWEST VEERING WEST OR NORTHWEST 7 TO SEVERE
GALE 9, LATER DECREASING 4 OR 5. ROUGH OR VERY ROUGH, OCCASIONALLY HIGH IN
WEST SOLE. RAIN THEN SCATTERED SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-03 Thread Magnus Danielson
From: Tony Finch [EMAIL PROTECTED]
Subject: Re: [LEAPSECS] Introduction of long term scheduling
Date: Wed, 3 Jan 2007 17:38:35 +
Message-ID: [EMAIL PROTECTED]

 On Wed, 3 Jan 2007, Magnus Danielson wrote:
 
  Assuming you have corrected for another gravitational field, yes. The
  current SI second indirectly assumes a certain gravitational force; we
  are assumed to be at sea level, whatever level that is.

 Wrong. The SI second is independent of your reference frame, and is
 defined according to Einstein's principle of equivalence.

Good point. Thanks for reminding me.

 What *does* depend on the gravitational potential at the geoid is TAI
 (and TT), since a timescale (unlike a fundamental unit) is relative to a
 reference frame.

When comparing two realizations of the SI second, the difference in reference
frames needs to be compensated for. To build up TAI, differences in
gravitational potential do need to be compensated out.

  We still depend on geophysics to some degree.

 Note that the standard relativistic transformations between TT, TCG, and
 TCB are (since 2000) independent of the geoid. So although the realization
 of these timescales is dependent on geophysics (because the atomic clocks
 they are ultimately based on are sited on the planet) the mathematical
 models try to avoid it.

Naturally.

Cheers,
Magnus


Re: Introduction of long term scheduling

2007-01-02 Thread Ed Davies

Steve Allen wrote:

On Mon 2007-01-01T21:19:04 +, Ed Davies hath writ:

Why does the One sec at predicted intervals line suddenly
diverge in the early 2500's when the other lines seem to just
be expanding in a sensible way?

...
I suspect that the divergence of the one line indicates that the LOD
has become long enough that 1 s can no longer keep up with the
divergence using whatever predicted interval he chose.  I suspect that
the chosen interval was every three months, for it is in about the
year 2500 that the LOD will require 4 leap seconds per year.


Yes, that makes sense.  I worked out what LOD increases he'd have
to be assuming for one- or six-monthly leaps and neither seemed right.
I should have realised that it was in between.

Still, it's a strange assumption, given that TF.460 allows, I
understand, leaps at the end of any month.  Unofficially, the
wording seems to be:


A positive or negative leap-second should be the last second
of a UTC month, but first preference should be given to the end
of December and June, and second preference to the end of March
and September.


Anybody got access to a proper copy and can say whether that's
right or not?  If it is right then the Wikipedia article on leap
seconds needs fixing.


As for the other questions, McCarthy had been producing versions of this
plot since around 1999, but the published record of them is largely
in PowerPoint.  Dr. Tufte has provided postmortems of both  Challenger
and Columbia as testaments to how little that medium conveys.


Indeed, this slide hasn't got us much closer to understanding the
original problem, namely: what is the maximum error likely to be over
a decade?

Ed.


Re: Introduction of long term scheduling

2007-01-02 Thread John Cowan
Warner Losh scripsit:

 There's an exception for IERS to
 step in two weeks in advance if the earth's rotation rate hiccups.

So if I understand this correctly, there could be as many as 14
consecutive days during which |DUT1|  0.9s before the emergency leap
second can be implemented; consequently, the current guarantee is only
statistical, not absolute.

--
John Cowan  http://www.ccil.org/~cowan  [EMAIL PROTECTED]
After all, would you consider a man without honor wealthy, even if his
Dinar laid end to end would reach from here to the Temple of Toplat?
No, I wouldn't, the beggar replied.  Why is that? the Master asked.
A Dinar doesn't go very far these days, Master.--Kehlog Albran
Besides, the Temple of Toplat is across the street.  The Profit


Re: Introduction of long term scheduling

2007-01-02 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], John Cowan writes:
Warner Losh scripsit:

 There's an exception for IERS to
 step in two weeks in advance if the earth's rotation rate hiccups.

So if I understand this correctly, there could be as many as 14
consecutive days during which |DUT1| > 0.9s before the emergency leap
second can be implemented; consequently, the current guarantee is only
statistical, not absolute.

But is it physically relevant ?

Has anybody calculated how much energy is required to change
the Earth's rotation fast enough to make this rule relevant?

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-02 Thread Warner Losh
 Warner Losh scripsit:

  There's an exception for IERS to
  step in two weeks in advance if the earth's rotation rate hiccups.

 So if I understand this correctly, there could be as many as 14
 consecutive days during which |DUT1| > 0.9s before the emergency leap
 second can be implemented; consequently, the current guarantee is only
 statistical, not absolute.

I think I understand differently.  BIH says on Jan 1 that the
February value of DUT1 is 0.2ms.  If the earth hiccups, IERS can step
in by Jan 15th and say, no, the real correct value is 0.3ms.

There's no provision for emergency leap seconds.  They just have to be
at the end of the month, and announced 8 weeks in advance.  IERS has
actually exceeded this mandate by announcing them ~24 weeks in advance
in recent history.

The IERS bulletin C is a little different than the ITU TF.460:

Leap seconds can be introduced in UTC at the end of the months of  December
or June,  depending on the evolution of UT1-TAI. Bulletin C is mailed every
six months, either to announce a time step in UTC, or to confirm that there
will be no time step at the next possible date.

IERS is issuing Bulletin D as needed.  The latest one can be found at
ftp://hpiers.obspm.fr/iers/bul/buld/bulletind.dat .  Right now DUT1 is
+0.0s until further notice.  From the last few B's, it looks like this
is decreasing at about 300ms per year.  This suggests that the next
leap second will be end of 2008.
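The arithmetic behind that guess can be made explicit. The drift rate is the one quoted above; the action threshold is an assumption (IERS tends to schedule a leap well inside the 0.9 s limit):

```python
# Back-of-the-envelope extrapolation of the next leap second.
drift_per_year = -0.3   # observed DUT1 trend, seconds/year (quoted above)
dut1_now = 0.0          # DUT1 at the start of 2007
threshold = -0.6        # assumed DUT1 value at which a leap is scheduled

years_until_leap = (threshold - dut1_now) / drift_per_year
# -0.6 / -0.3 = 2.0 years from the start of 2007, i.e. a positive
# leap second around the end of 2008, matching the guess above.
```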

Warner


Re: Introduction of long term scheduling

2007-01-02 Thread John Cowan
Warner Losh scripsit:

 There's no provision for emergency leap seconds.  They just have to be
 at the end of the month, and announced 8 weeks in advance.  IERS has
 actually exceeded this mandate by announcing them ~24 weeks in advance
 in recent history.

So much the worse.  That means that if the Earth hiccups on March 7, the
value of |DUT1| will not return to normal until May 31.

--
John Cowan[EMAIL PROTECTED]http://ccil.org/~cowan
The whole of Gaul is quartered into three halves.
-- Julius Caesar


Re: Introduction of long term scheduling

2007-01-02 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Tony Finch writes:
On Tue, 2 Jan 2007, Warner Losh wrote:

 Curiously, BIH is currently, at least in the document I have, expected
 to predict what the value of DUT1 is to .1 second at least a month in
 advance so that frequency standard broadcasts can prepare for changes
 of this value a month in advance.  There's an exception for IERS to
 step in two weeks in advance if the earth's rotation rate hiccups.

I was amused by the dates in
http://hpiers.obspm.fr/eoppc/bul/buld/bulletind.94

That's an interesting piece of data in our endless discussions about
how important DUT1 really is...

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-02 Thread M. Warner Losh
In message: [EMAIL PROTECTED]
John Cowan [EMAIL PROTECTED] writes:
: Warner Losh scripsit:
:
:  There's no provision for emergency leap seconds.  They just have to be
:  at the end of the month, and announced 8 weeks in advance.  IERS has
:  actually exceeded this mandate by announcing them ~24 weeks in advance
:  in recent history.
:
: So much the worse.  That means that if the Earth hiccups on March 7, the
: value of |DUT1| will not return to normal until May 31.

Yes.  But a change in angular momentum that large would likely
mean that |DUT1| being a little too large would be the least of our
worries.

The earthquake that hit Indonesia last year changed the time of day by
microseconds.  What would cause a sudden jump of hundreds of
milliseconds hurts my brain to contemplate.

Warner


Re: Introduction of long term scheduling

2007-01-02 Thread Ed Davies

Warner Losh wrote:

The IERS bulletin C is a little different than the ITU TF.460:


Leap seconds can be introduced in UTC at the end of the months of  December
or June,  depending on the evolution of UT1-TAI. Bulletin C is mailed every
six months, either to announce a time step in UTC, or to confirm that there
will be no time step at the next possible date.


Unfortunately, these IERS bulletins are dreadfully badly worded and
seem to assume current practice rather than fully defining what they
mean.  E.g., Bulletin C 32, dated 19 July 2006

  http://hpiers.obspm.fr/iers/bul/bulc/bulletinc.dat

says:


NO positive leap second will be introduced at the end of December 2006.


So we still don't know officially if there was a negative leap second
then and we still don't officially know if there will be a leap second
at the end of this month.

  http://hpiers.obspm.fr/iers/bul/bulc/BULLETINC.GUIDE

says:


UTC is defined by the CCIR Recommendation 460-4 (1986). It differs
from TAI by an integral number of seconds, in such a way that UT1-UTC stays
smaller than 0.9s in absolute value. The decision to introduce a leap second
in UTC to meet this condition is the responsability of the IERS. According to
the CCIR Recommendation, first preference is given to the opportunities at the
end of December and June,and second preference to those at the end of March
and September. Since the system was introduced in 1972 only dates in June and
December have been used.


Again, this is the truth but not the whole truth as it doesn't mention
the third preference opportunities at the ends of other months - but
it'll be a while until those are needed.

(Also, they can't spell responsibility :-)

Ed.


Re: Introduction of long term scheduling

2007-01-02 Thread Zefram
Warner Losh wrote:
 Right now DUT1 is
+0.0s until further notice.  From the last few B's, it looks like this
is decreasing at about 300ms per year.  This suggests that the next
leap second will be end of 2008.

The way DUT1 is behaving at the moment, it looks like an ideal time for
IERS to experiment with scheduling further ahead.  It should be easy
to commit today to having no leap second up to and including 2007-12,
as a first step.  Well, we can hope.

-zefram


Re: Introduction of long term scheduling

2007-01-02 Thread James Maynard

Ed Davies wrote:


Still, it's a strange assumption, given that TF.460 allows, I
understand, leaps at the end of any month.  Unofficially, the
wording seems to be:


A positive or negative leap-second should be the last second
of a UTC month, but first preference should be given to the end
of December and June, and second preference to the end of March
and September.


Anybody got access to a proper copy and can say whether that's
right or not?  If it is right then the Wikipedia article on leap
seconds needs fixing.



The text you quoted is taken exactly from ITU-R Recommendation TF.460-4,
Annex I (Time Scales), paragraph D (DUT1), sub-paragraph 2
(Leap-seconds):

2.1   A positive or negative leap-second should be the last second of
a UTC month, but first preference should be given to the end of
December and June, and second preference to the end of March
and September.

2.2   A positive leap-second begins at 23h 59m 60s and ends at 0h 0m 0s
of the first day of the following month. In the case of a negative
leap-second, 23h 59m 58s will be followed one second later by 0h 0m 0s
of the first day of the following month (see Annex III).

2.3   The IERS should decide upon and announce the introduction of a
leap-second, such announcement to be made at least eight weeks in advance.


--
James Maynard, K7KK
Salem, Oregon, USA


Re: Introduction of long term scheduling

2007-01-02 Thread Daniel R. Tobias
On 2 Jan 2007 at 19:40, Poul-Henning Kamp wrote:

 Has anybody calculated how much energy is required to change
 the Earth's rotation fast enough to make this rule relevant?

Superman could do it.  Or perhaps he could nudge the Earth's rotation
just enough to make the length of a mean solar day exactly equal
86,400 SI seconds.

--
== Dan ==
Dan's Mail Format Site: http://mailformat.dan.info/
Dan's Web Tips: http://webtips.dan.info/
Dan's Domain Site: http://domains.dan.info/


Re: Introduction of long term scheduling

2007-01-02 Thread Rob Seaman

Daniel R. Tobias replies to Poul-Henning Kamp:


Has anybody calculated how much energy is required to change
the Earth's rotation fast enough to make this rule relevant?


Superman could do it.  Or perhaps he could nudge the Earth's rotation
just enough to make the length of a mean solar day exactly equal
86,400 SI seconds.


Only briefly.  Consider the LOD plots from http://www.ucolick.org/
~sla/leapsecs/dutc.html.  The Earth wobbles like a top, varying its
speed even if tidal slowing is ignored.

Actually, rather than being merely a troublemaker, the Moon serves to
stabilize the Earth's orientation.  The Rare Earth Hypothesis makes
a strong case that a large Moon and other unlikely processes such as
continental drift are required for multicellular life to evolve, in
addition to the more familiar issues of a high system metal content
and a stable planetary orbit at a distance permitting liquid water.
Without the Moon, the Earth could nod through large angles, lying on
its side or perhaps even rotating retrograde every few million
years.  Try making sense of timekeeping under such circumstances.

Rob Seaman
NOAO


Re: Introduction of long term scheduling

2007-01-02 Thread Rob Seaman

Poul-Henning Kamp wrote:


That's an interesting piece of data in our endless discussions
about how important DUT1 really is...


The point is that by allowing it to grow without reasonable bound,
DUT1 would gain an importance it never had before.


Re: Introduction of long term scheduling

2007-01-01 Thread Ed Davies

Rob Seaman wrote:

...  Obviously it would take at least N years to introduce a new
reporting requirement of N years in advance (well, N years minus six
months).


Sorry, maybe I'm being thick but, why?  Surely the IERS could announce
all the leap seconds in 2007 through 2016 inclusive this week then
those for 2017 just before the end of this year, and so on.  We'd have
immediate 10 year scheduling.


I suspect it would be exceptionally interesting to
everyone, no matter what their opinion on our tediously familiar
issues, to know how well these next seven or so leap seconds could be
so predicted, scheduled and reported.


Absolutely, it would be very interesting to know.  I suspect though,
that actually we (the human race) don't have enough data to really
know a solid upper bound to possible error and that any probability
distribution would really be not much more than an educated guess.

Maybe a few decades of detailed study has not been enough to see
wilder swings - to eliminate the unknown unknowns, if you like.


If the 0.9s limit were to be
relaxed - how much must that be in practice?  Are we arguing over a
few tenths of a second coarsening of the current standard?  That's a
heck of a lot different than 36,000 tenths.


Maybe we can turn this question round.  Suppose the decision was made
to simplistically schedule a positive leap second every 18 months for
the next decade, what would be the effect of the likely worst case
error?  First, what could the worst case error be?  Here's my guess.
If it turned out that no leap seconds were required then we'd be 6
seconds out.  If we actually needed one every nine months we'd be out
by about 6 seconds the other way.  So the turned around question would
be: assuming we are going to relax the 0.9 seconds limit, how much of
an additional problem would it be if it was increased by a factor of
10 or so, in the most likely worst case?
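A quick sketch of those worst-case bounds (the "no leaps needed" and "one every nine months" scenarios are the ones posited above; integer division makes the bounds come out at 6 and 7 seconds, close to the ~6 s figure):

```python
# Fixed schedule: one positive leap second every 18 months for a decade.
decade_months = 120
scheduled = decade_months // 18        # 6 leap seconds inserted

# Worst case one way: no leap seconds were actually needed.
error_if_none_needed = scheduled - 0   # ~6 s too far ahead

# Worst case the other way: one was needed every nine months.
needed_if_fast = decade_months // 9    # 13 leap seconds needed
error_if_fast = needed_if_fast - scheduled  # ~7 s behind
```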

As Rob has pointed out recently on the list, 1 second in time equates
to 15 seconds of arc in right ascension at the celestial equator for
telescope pointing.  Nine seconds in time is therefore 2.25 arc
minutes.  For almost all amateur astronomers this error would be
insignificant as it's smaller than their field of view with a normal
eyepiece but, more importantly, the telescope is usually aligned by
pointing at stars anyway rather than by setting the clock at all
accurately.  For the professionals I'm not so sure but, for context,
Hubble's coarse pointing system aims the telescope to an accuracy of
about 1 arc minute before handing off control to the fine guidance
sensors.

For celestial navigation on the Earth, a nine second error in time
would equate to a 4.1 km error along the equator.  Worth considering.
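Those conversions are easy to check; the only constant assumed below is the WGS-84 equatorial radius, which gives a figure consistent with the ~4.1 km quoted above:

```python
import math

R_EQ_KM = 6378.137  # WGS-84 equatorial radius in km (assumed constant)

# One second of time is 360 degrees / 86400 s of Earth rotation:
arcsec_per_time_second = 360 * 3600 / 86400      # 15 arcseconds
arcmin_for_9s = 9 * arcsec_per_time_second / 60  # 2.25 arcminutes

# Along the equator, 9 s of rotation corresponds to:
km_for_9s = 2 * math.pi * R_EQ_KM * 9 / 86400    # about 4.2 km
```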

My guess would be that there would be applications which would need
to take account of the difference which currently don't.  Is it really
likely to be a problem, though?

Remember that this is not a secular error; by the end of, say, 2009
we'd be beginning to get an idea of how things are going and would be
able to start feeding corrections into the following decade.

So, while it would be nice to know a likely upper bound on the
possible errors, is a back of an envelope guess good enough?

Happy perihelion,

Ed.


Re: Introduction of long term scheduling

2007-01-01 Thread Steve Allen
On Mon 2007-01-01T17:42:11 +, Ed Davies hath writ:
 Sorry, maybe I'm being thick but, why?  Surely the IERS could announce
 all the leap seconds in 2007 through 2016 inclusive this week then
 those for 2017 just before the end of this year, and so on.  We'd have
 immediate 10 year scheduling.

For reasons never explained publicly this notion was shot down very
early in the process of the WP7A SRG.  It would almost certainly
exceed the current 0.9 s limit, and in so doing it would violate the
letter of ITU-R TF.460.

The IERS may not be a single entity so much as a confederation of
organizations competing for scientific glory and using the umbrella to
facilitate funding from each of their national governments.  Even if
the IERS were monolithic they would have to obtain approval for such a
change from the ITU-R, IAU, IUGG, and FAGS.  Given the tri/quadrennial
meeting schedules it seems unlikely that the IERS could obtain
approval much before year 2010.

 Maybe we can turn this question round.  Suppose the decision was made
 to simplistically schedule a positive leap second every 18 months for
 the next decade, what would be the effect of the likely worst case
 error?  First, what could the worst case error be?

McCarthy pretty much answered this question in 2001 as I reiterate here
http://www.ucolick.org/~sla/leapsecs/McCarthy.html

 As Rob has pointed out recently on the list, 1 second in time equates
 to 15 seconds of arc in right ascension at the celestial equator for
 telescope pointing.
...
 For the professionals I'm not so sure but

Give us a few years of warning and I think we can cope.  No telescope
I know uses ICRS, we're all still using FK5 and/or FK4.  That means we
astronomers already know (or at least ought to know *) that we all
have to do a software update.

 For celestial navigation on the Earth, a nine second error in time
 would equate to a 4.1 km error along the equator.  Worth considering.

The format of the almanacs would be changed along with the change
in UTC such that by including one more addition there would be
no worse error than now.  This would be a change much smaller in
magnitude than what the Admiralty did in 1833.

 Is it really likely to be a problem, though?

I think not.  It's hard to prove not.
None of the agencies involved has the funding to mount a survey
which would motivate all affected parties to investigate.

(*) While standing near the UTC poster at ADASS I was accosted by a
software engineer whose PI had instructed that all observation times
be reduced to heliocentric UTC.  Upon discussion it became clear
that the PI had not clearly distinguished between heliocentric and
barycentric.  Furthermore, there was no concept that UTC is only
defined at the surface of the earth and that the only suitable time
scales are TCB and TDB.  (TDB would be the natural result because it
ticks along with UTC and because that's what the JPL ephemerides use.)
The need for pedagogy never ends.

--
Steve Allen [EMAIL PROTECTED]WGS-84 (GPS)
UCO/Lick ObservatoryNatural Sciences II, Room 165Lat  +36.99858
University of CaliforniaVoice: +1 831 459 3046   Lng -122.06014
Santa Cruz, CA 95064http://www.ucolick.org/~sla/ Hgt +250 m


Re: Introduction of long term scheduling

2007-01-01 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Steve Allen writes:

McCarthy pretty much answered this question in 2001 as I reiterate here
http://www.ucolick.org/~sla/leapsecs/McCarthy.html

What exactly is the Y axis on this graph ?

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-01 Thread Steve Allen
On Mon 2007-01-01T19:29:19 +, Poul-Henning Kamp hath writ:
 McCarthy pretty much answered this question in 2001 as I reiterate here
 http://www.ucolick.org/~sla/leapsecs/McCarthy.html

 What exactly is the Y axis on this graph ?

Only McCarthy can say for sure.
Maybe someone else who was at the GSIC meeting could give a better idea.

My impression is that McCarthy generated a pseudorandom sequence of
LOD values based on the known power spectrum of the LOD fluctuations
and then applied the current UT1 prediction filters to that to see
how wrong UT1-UTC was likely to get.  I suspect it was a rather
back of the envelope kind of calculation that was not repeated
because the notions of scheduling that it posited were shot down.

As a routine matter of operation the IERS would undoubtedly want
to put some effort into verifying that new software for making such
predictions was well reviewed and tested.

Oh, and the lawyer in me just asserted a loophole in my previous post.

One could say that it was never possible for the BIH/IERS to guarantee
that its leap second scheduling could meet the 0.7 s and then later
0.9 s specification because they could not be held responsible for
things that the earth might do.  As such the IERS could conceivably
start unilaterally issuing full decade scheduling of leap seconds and
claim that it *was* acting in strict conformance with ITU-R TF.460.

In civil matters this is the sort of action which would later be
tested in court if it were found to have adverse effects.  In the
matter of earth rotation it seems unlikely that there could be any
penalties, and if there were a general consensus that this be the
right thing to do then the IERS could probably act with impunity in
advance of official approval from all agencies.

--
Steve Allen [EMAIL PROTECTED]WGS-84 (GPS)
UCO/Lick ObservatoryNatural Sciences II, Room 165Lat  +36.99858
University of CaliforniaVoice: +1 831 459 3046   Lng -122.06014
Santa Cruz, CA 95064http://www.ucolick.org/~sla/ Hgt +250 m


Re: Introduction of long term scheduling

2007-01-01 Thread Magnus Danielson
From: Poul-Henning Kamp [EMAIL PROTECTED]
Subject: Re: [LEAPSECS] Introduction of long term scheduling
Date: Mon, 1 Jan 2007 19:29:19 +
Message-ID: [EMAIL PROTECTED]

Poul-Henning,

 In message [EMAIL PROTECTED], Steve Allen writes:

 McCarthy pretty much answered this question in 2001 as I reiterate here
 http://www.ucolick.org/~sla/leapsecs/McCarthy.html

 What exactly is the Y axis on this graph ?

Unless you have a subtle point, I interpret it to be in seconds even if it
is incorrectly indicated (s or seconds instead of sec would have been
correct).

If you have a subtle point, I'd love to hear it.

Cheers,
Magnus


Re: Introduction of long term scheduling

2007-01-01 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Steve Allen writes:

One could say that it was never possible for the BIH/IERS to guarantee
that its leap second scheduling could meet the 0.7 s and then later
0.9 s specification because they could not be held responsible for
things that the earth might do.  As such the IERS could conceivably
start unilaterally issuing full decade scheduling of leap seconds and
claim that it *was* acting in strict conformance with ITU-R TF.460.

Considering that ITU has no power over IERS, IERS is only bound
by the letter of TF.460 as far as they have voluntarily promised
to be, and consequently, they could just send a letter to ITU
and say we'll do it this way from MMDD; if you disagree,
then figure something else out.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-01 Thread Poul-Henning Kamp
In message [EMAIL PROTECTED], Magnus Danielson writes:
From: Poul-Henning Kamp [EMAIL PROTECTED]
Subject: Re: [LEAPSECS] Introduction of long term scheduling
Date: Mon, 1 Jan 2007 19:29:19 +
Message-ID: [EMAIL PROTECTED]

Poul-Henning,

 In message [EMAIL PROTECTED], Steve Allen writes:

 McCarthy pretty much answered this question in 2001 as I reiterate here
 http://www.ucolick.org/~sla/leapsecs/McCarthy.html

 What exactly is the Y axis on this graph ?

Unless you have a subtle point, I interpret it to be in seconds, even if the
units are incorrectly indicated ("s" or "seconds" instead of "sec" would have
been correct).

If you have a subtle point, I'd love to hear it.

Not even close to a subtle point, I simply cannot figure out what the
graph shows...

The sawtooth corresponding to the prediction interval raises a big red
flag for me as to the graph's applicability to reality.
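The sawtooth behaviour tied to the prediction interval can be reproduced with a toy model (a hypothetical sketch, not McCarthy's actual method or code): freeze the excess length-of-day at each announcement, pre-schedule leap seconds at that frozen rate for the whole window, and watch the un-scheduled backlog build up and reset at each new announcement.

```python
def sawtooth_peak(horizon_days, windows, rate):
    """Worst un-scheduled UT1-UTC backlog (ms) under pre-scheduled leaps.

    Toy model (an assumption, not McCarthy's method): excess LOD grows
    linearly at `rate` (ms/day per day).  At the start of each window of
    `horizon_days`, leap seconds are pre-scheduled using the excess LOD
    known at that moment; the true excess keeps growing, so a backlog
    accumulates and is only absorbed when the next schedule is issued.
    """
    backlog, peaks = 0.0, []
    for day in range(horizon_days * windows):
        window_start = (day // horizon_days) * horizon_days
        frozen = rate * window_start      # excess LOD assumed by the schedule
        true = rate * day                 # actual excess LOD today
        backlog += true - frozen          # ms of drift the schedule misses
        if (day + 1) % horizon_days == 0:
            peaks.append(backlog)
            backlog = 0.0                 # new announcement absorbs the backlog
    return max(peaks)

RATE = 2.0 / 36525.0    # assumed excess-LOD growth: ~2 ms/day per century
# Ten-year windows miss ~365 ms per cycle; one-year windows only ~4 ms --
# so the peak error traces a sawtooth with the period of the prediction
# interval, which is the shape under discussion.
```

The deterministic backlog grows as rate·h²/2, so doubling the announcement horizon quadruples the worst-case miss; the random decade-scale LOD wander comes on top of this.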

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-01 Thread Ed Davies

Poul-Henning Kamp wrote:

If you have subtle point, I'd love to hear it.


Not even close to a subtle point, I simply cannot figure out what the
graph shows...


Me too.  Is this an analysis or a simulation?  What are the
assumptions?  What "predicted intervals" does he mean?

The bullet points above are very confusing as well.

What does "large discontinuities possible" mean?  Ignoring
any quibble about the use of the word "discontinuities",
does he mean more than one leap second at a particular event?
Why would anybody want to do that? - at least before we're
getting to daily leap seconds, which is well off to the right
of his graph (50 000 years, or so, I think).

Why does the "One sec at predicted intervals" line suddenly
diverge in the early 2500s when the other lines seem to just
be expanding in a sensible way?

Ed.


Re: Introduction of long term scheduling

2007-01-01 Thread Ed Davies

Steve Allen wrote:

On Mon 2007-01-01T17:42:11 +, Ed Davies hath writ:

Sorry, maybe I'm being thick but, why?  Surely the IERS could announce
all the leap seconds in 2007 through 2016 inclusive this week then
those for 2017 just before the end of this year, and so on.  We'd have
immediate 10 year scheduling.


For reasons never explained publicly this notion was shot down very
early in the process of the WP7A SRG.  It would almost certainly
exceed the current 0.9 s limit, and in so doing it would violate the
letter of ITU-R TF.460.


Yes, I was assuming exceeding the 0.9 s limit, as I'm sure the rest
of my message made clear.  We are discussing this as an alternative
to, for all intents and purposes, scrapping leaps altogether and
blowing the limit for all time, so I don't see this as a problem.

Ed.


Re: Introduction of long term scheduling

2007-01-01 Thread Steve Allen
On Mon 2007-01-01T21:19:04 +, Ed Davies hath writ:
 Why does the "One sec at predicted intervals" line suddenly
 diverge in the early 2500s when the other lines seem to just
 be expanding in a sensible way?

Upon looking closer I see a 200-year periodicity in the plot.
I begin to suspect that rather than run a pseudorandom sequence of LOD
based on the power spectrum, he instead took the past two centuries of
LOD variation around the linear trend and just kept repeating those
variations added to an ongoing linear trend.
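That suspected construction is a few lines of code (with hypothetical stand-in data, not the actual historical LOD record): tile two centuries of residuals about a linear trend, and the result is exactly periodic with a 200-year cycle once detrended.

```python
import random

random.seed(0)
# Stand-in for ~200 years of observed LOD anomalies about the trend (ms);
# the real series would come from the historical LOD record.
residuals = [random.gauss(0.0, 1.0) for _ in range(200)]
TREND = 0.02            # assumed ~2 ms/day excess-LOD growth per century

# The suspected construction: repeat the same residuals on top of an
# ongoing linear trend for as far into the future as needed.
lod_excess = [TREND * y + residuals[y % 200] for y in range(1000)]

# Detrending recovers an exact 200-year periodicity -- the repeating
# pattern visible in the plot.
detrended = [v - TREND * y for y, v in enumerate(lod_excess)]
assert all(abs(a - b) < 1e-9 for a, b in zip(detrended, detrended[200:]))
```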

I suspect that the divergence of the "One sec at predicted intervals"
line indicates that the LOD has become long enough that 1 s can no
longer keep up with the divergence using whatever prediction interval
he chose.  I suspect that the chosen interval was every three months,
for it is in about the year 2500 that the LOD will require 4 leap
seconds per year.
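The "4 leap seconds per year around 2500" figure checks out on the back of an envelope, using rough, commonly quoted numbers (both are assumptions: ~1 ms/day excess LOD around 2000, growing by ~2 ms/day per century):

```python
excess_2000 = 1.0      # ms/day by which the mean solar day exceeds 86400 s
growth = 2.0           # additional ms/day of excess per century (assumed)
centuries = 5          # 2000 -> 2500

excess_2500 = excess_2000 + growth * centuries      # 11 ms/day
drift_per_year = excess_2500 * 365.25 / 1000.0      # ~4.0 s of UT1-UTC drift
leaps_per_year = round(drift_per_year)              # -> 4 leap seconds/year
```

At one leap per quarterly announcement window, 4 leaps per year is exactly the ceiling of a three-month prediction interval, which is consistent with the suspected choice.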

As for the other questions, McCarthy had been producing versions of this
plot since around 1999, but the published record of them is largely
in PowerPoint.  Dr. Tufte has provided postmortems of both Challenger
and Columbia as testaments to how little that medium conveys.

--
Steve Allen [EMAIL PROTECTED]   WGS-84 (GPS)
UCO/Lick Observatory   Natural Sciences II, Room 165   Lat  +36.99858
University of California   Voice: +1 831 459 3046      Lng -122.06014
Santa Cruz, CA 95064   http://www.ucolick.org/~sla/    Hgt +250 m