Re: Introduction of long term scheduling

2007-01-15 Thread Tony Finch
On Mon, 15 Jan 2007, Peter Bunclark wrote:
>
> > http://www.eecis.udel.edu/~mills/ipin.html
>
> That page does not seem to mention UTC...

Look at the slides.

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
BISCAY FITZROY: VARIABLE 4, BECOMING SOUTHWESTERLY 5 TO 7 IN NORTHWEST
FITZROY. MODERATE OR ROUGH, OCCASIONALLY VERY ROUGH. SHOWERS. GOOD.


Re: Introduction of long term scheduling

2007-01-15 Thread Peter Bunclark
On Fri, 12 Jan 2007, Tony Finch wrote:

> According to the slides linked from Dave Mills's "Timekeeping in the
> Interplanetary Internet" page, they are planning to sync Mars time to UTC.
> http://www.eecis.udel.edu/~mills/ipin.html
>
That page does not seem to mention UTC... it does mention running at a
constant rate relative to TAI.  It's not explicit, but one hopes they are
considering not running UTC clocks (and converting to UTC when necessary
in userland...).

Peter.


Re: Introduction of long term scheduling

2007-01-12 Thread Steve Allen
On Fri 2007-01-12T18:35:55 +, Tony Finch hath writ:
> According to the slides linked from Dave Mills's "Timekeeping in the
> Interplanetary Internet" page, they are planning to sync Mars time to UTC.
> http://www.eecis.udel.edu/~mills/ipin.html

Never minding the variations on Mars with its rather more eccentric
orbit, the deviations from uniformity of rate of time on earth alone
create an annual variation of almost 2 ms between TT and TDB.  This is
also ignoring variations in time signal propagation through the solar
wind when Mars is near superior conjunction.

To some applications 2 ms in a year is nothing.  From an engineering
standpoint a variation of 2 ms in a year on Mars is certainly better
than any time scale that could be established there in lieu of landing
a cesium chronometer.  To other applications 2 ms in a year may be
intolerably large.

So the question remains: At what level do distributed systems need
access to a time scale which is uniform in their reference frame?
And my question: Can something as naive as POSIX time_t really serve
all such applications, even the ones on earth, for the next 600 years?

--
Steve Allen <[EMAIL PROTECTED]>                               WGS-84 (GPS)
UCO/Lick Observatory        Natural Sciences II, Room 165    Lat  +36.99858
University of California    Voice: +1 831 459 3046           Lng -122.06014
Santa Cruz, CA 95064        http://www.ucolick.org/~sla/     Hgt +250 m


Re: Introduction of long term scheduling

2007-01-12 Thread Tony Finch
On Mon, 8 Jan 2007, Steve Allen wrote:
>
> Don't forget that UTC and TAI are coordinate times which are difficult
> to define off the surface of the earth.  For chronometers outside of
> geostationary orbit the nonlinear deviations between the rate of a local
> oscillator and an earthbound clock climb into the realm of
> perceptibility. There seems little point in claiming to use a uniform
> time scale for a reference frame whose rate of proper time is notably
> variable from your own.

According to the slides linked from Dave Mills's "Timekeeping in the
Interplanetary Internet" page, they are planning to sync Mars time to UTC.
http://www.eecis.udel.edu/~mills/ipin.html

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
LUNDY FASTNET IRISH SEA: SOUTHWEST 6 TO GALE 8. ROUGH OR VERY ROUGH. RAIN OR
DRIZZLE. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-11 Thread Clive D.W. Feather
Rob Seaman said:
> Feather's encoding is a type of compression.  GZIP won't buy you
> anything extra.

Actually, it might with longer tables. For example, the output of LZW (as
used by Unix compress) can be further compressed using a Huffman-based
compressor.

> I'll join the rising chorus that thinks it need
> not appear in every packet.

Phew.

> I'd also modify Feather encoding to delta backwards from the
> expiration time stamp.

Interesting idea.

--
Clive D.W. Feather  | Work:  <[EMAIL PROTECTED]>   | Tel:+44 20 8495 6138
Internet Expert | Home:  <[EMAIL PROTECTED]>  | Fax:+44 870 051 9937
Demon Internet  | WWW: http://www.davros.org | Mobile: +44 7973 377646
THUS plc||


Re: Introduction of long term scheduling

2007-01-09 Thread Rob Seaman
I tried to send this a few times the other day, but the list rejected  
it.  Figured I'd try one more time as a mail check as much as  
anything else.  Obviously not a particularly meaningful message.


Rob
--

Poul-Henning Kamp wrote:

And next thing, somebody is going to argue for GZIP encoding of the  
list


Feather's encoding is a type of compression.  GZIP won't buy you  
anything extra.


As I said, my original suggestion was in the context of the  
discussion at the time.  One certainly can convey the leap second  
table for the next several decades in a quite concise format - should  
that be necessary.  I'll join the rising chorus that thinks it need  
not appear in every packet.


I like Zefram's additional suggestion that each bit of leap table DNA  
be self-describing.  We're pretty much reinventing genetic  
transcription and translation, complete with stopper sequences.  One  
could likely base a really interesting rubber time scale on a DNA/RNA  
model.  That this isn't the problem we face shouldn't take away from  
the cleverness of its solution :-)


I'd also modify Feather encoding to delta backwards from the  
expiration time stamp.  This would not only permit applications to  
truncate transcription after a very small number of bytes, but could  
potentially extend proleptically indefinitely backward.


Speaking of which, Tony Finch wrote:

The main requirement for a proleptic timescale is that it is useful  
for most practical purposes.


I've worked on projects that had requirements this broadly  
expressed.  I hope to avoid that opportunity antileptically ("in the  
future").


A coworker has chosen "proleptic" as his word of the day.  He says  
the challenge will be to use it in a sentence without mentioning his  
"Grandma's seizure".


Helpful definition of proleptically from thefreedictionary.com:  "in  
a proleptical manner".


Rob



Re: Introduction of long term scheduling

2007-01-09 Thread matsakis . demetrios
As many have pointed out on this forum, these various timescales do have
very specific meanings which often fade at levels coarser than a few
nanoseconds (modulo 1 second), and which at times are misapplied at the
1-second and higher level.

GPS Time is technically an "implicit ensemble mean".  You can say it exists
inside the Kalman Filter at the GPS Master Control Station as a linear
combination of corrected clock states.  But there is no need for the control
computer to actually compute it as a specific number, and that's why it is
implicit.  Every GPS clock is a realization of GPS Time once the receiver
applies the broadcast corrections.   GPS Time is steered to UTC(USNO), and
generally stays within a few nanoseconds of it, modulo 1 second.  UTC(USNO)
approximates UTC, and so it goes.
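For concreteness, the integer-second part of that "modulo 1 second" follows
from the nominal offsets quoted elsewhere in this thread (GPS at TAI - 19,
UTC at TAI - 33 as of 2006):

    GPS - UTC = (TAI - 19 s) - (TAI - 33 s) = 14 s

so the few-nanosecond steering statement concerns the fractional part only.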

The most beautiful reference to GPS Time is "The Theory of the GPS Composite
Clock" by Brown, in the Proceedings of the Institute of Navigation's 1991
ION-GPS meeting.  But others, including me, routinely publish plots of it.

-----Original Message-----
From: Leap Seconds Issues [mailto:[EMAIL PROTECTED] On Behalf Of
Ashley Yakeley
Sent: Tuesday, January 09, 2007 2:22 AM
To: LEAPSECS@ROM.USNO.NAVY.MIL
Subject: Re: [LEAPSECS] Introduction of long term scheduling

On Jan 8, 2007, at 22:57, Steve Allen wrote:

> GPS is not (TAI - 19)

What is GPS time, anyway? I had assumed someone had simply defined GPS to be
TAI - 19, and made the goal of the satellites to approximate GPS time, i.e.
that GPS and TAI are the same (up to isomorphism in some "category of
measurements"). But apparently not?
Are the satellite clocks allowed to drift, or do they get corrected?

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-09 Thread Zefram
Steve Allen wrote:
>But it is probably safer to come up
>with a name for "the timescale my system clock keeps that I wish were
>TAI but I know it really is not".

True.  I can record timestamps in TAI(bowl.fysh.org), and by logging
all its NTP activity I could retrospectively do a more precise
TAI(bowl.fysh.org)<->TAI conversion than was possible in real time.
To be rigorous we need to reify an awful lot more timescales than we
do currently.

Another aspect of rigour that I'd like to see is uncertainty bounds
on timestamps.  With NTP, as things stand now, the system clock does
carry an error bound, which can be extracted using ntp_adjtime().
(Btw, another nastiness of the ntp_*() interface is that ntp_adjtime()
doesn't return the current clock reading on all systems.  On affected
OSes it is impossible to atomically acquire a clock reading together
with error bounds.)  If I want a one-off TAI reading in real time, I can
take the TAI(bowl.fysh.org) reading along with the error bound, and then
instead of claiming an exact TAI instant I merely claim that the true
TAI time is within the identified range.  In that sense it *is* possible
to get true TAI in real time, just not with the highest precision.
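A minimal sketch of that one-off reading, Linux-flavoured (Linux is one of
the systems where ntp_adjtime() does fill in the clock reading, so the
reading and its bound come back atomically):

#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    struct timex tx = { .modes = 0 };      /* read-only query */
    int state = ntp_adjtime(&tx);

    if (state == TIME_ERROR) {
        puts("clock unsynchronised; reading untrusted");
        return 1;
    }
    /* maxerror is in microseconds: the claim is that the true time
       lies within [reading - maxerror, reading + maxerror].  (With
       STA_NANO set, tv_usec actually holds nanoseconds.) */
    printf("%ld.%06ld +/- %ld us (state %d)\n",
           (long)tx.time.tv_sec, (long)tx.time.tv_usec,
           tx.maxerror, state);
    return 0;
}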

If I have a series of timestamps from the same machine then for comparing
them I don't want individual error bounds on them.  The ranges would
overlap and I'd be unable to sort them properly.  This is another reason
to reify TAI(bowl.fysh.org): the errors in the TAI readings are highly
correlated, and to know that I can sort the timestamps naively I need
to know that correlation, namely that they came from the same clock.
Even in retrospect, when I can do more precise conversions to true TAI, I
need to maintain the correlation, because the intervals between timestamps
may still be smaller than the uncertainty with which I convert to TAI.

>(or at least it is if you are one of Tom Van Baak's kids.  See
>http://www.leapsecond.com/great2005/ )

Cool.  I'd have loved such toys when I was that age.  My equivalent was
that I got to experiment with a HeNe laser, as my father is a physicist.
Now I carry a diode laser in my pocket.  When TVB's children grow up,
they'll probably carry atomic watches.

>There seems little point in claiming to use a uniform time scale for a
>reference frame whose rate of proper time is notably variable from
>your own.

Hmm.  Seems to me there's use in it if you do a lot of work relating to
that reference frame or if you exchange timestamps with other parties
who use that reference frame.  Just need to keep it in its conceptual
place: don't assume that it's a suitable timescale for measuring local
interval time.  Another reason to reify a local timescale.

>   what happens when the operations of distributed systems demand
>an even tighter level of sync than NTP can provide?

Putting on my futurist hat, I predict the migration of time
synchronisation into the network hardware.  Routers at each end of a
fibre-optic cable could do pretty damn tight synchronisation at the
data-link layer, aided by the strong knowledge that the link is the
same length in both directions.  Do this hop by hop to achieve networked
Einstein synchronisation.  (And here come another few thousand timescales
for us to process.)

>What if general purpose systems do not have a means of acknowledging
>and dealing with the fact that their system chronometer has deviated
>from the agreeable external time,

This has long been the case.  Pre-NTP Unix APIs have no way to admit
that the clock reading is bogus, and systems like Windows still have no
concept of clock accuracy.  What happens is that we get duff timestamps,
and some applications go wrong.  The number of visible faults that result
from this is surprisingly small, so far.

-zefram


Re: Introduction of long term scheduling

2007-01-08 Thread Magnus Danielson
From: Steve Allen <[EMAIL PROTECTED]>
Subject: Re: [LEAPSECS] Introduction of long term scheduling
Date: Mon, 8 Jan 2007 22:57:23 -0800
Message-ID: <[EMAIL PROTECTED]>

Steve,

> On Mon 2007-01-08T01:54:56 +, Zefram hath writ:
> > Possibly TT could also be used in some form, for interval calculations
> > in the pre-caesium age.
>
> Please do not consider the use of TT as a driver for the development
> of any sort of commonplace API.  In the far past no records were made
> using TT for the timestamp, and nobody ever will use TT except when
> comparing with ancient eclipse records.
>
> I agree that system time should increment in as uniform a fashion as
> possible, but amplifying reasons recently listed here I disagree that
> anyone should specify that the operating system uses TAI.  TAI is
> TAI, and nothing else is TAI.  Note that even in the history of TAI
> itself there have been serious discussions and changes in the scale
> unit of TAI to incorporate better notions of the underlying physics.
>
> GPS is not (TAI - 19), UTC is not (TAI - 33).  Millions of computers
> claiming to be running using TAI as their system time, even if they
> have rice-grain-sized cesium resonators as their motherboard clocks,
> will not make that statement true.  Instead it will simply obscure
> the concept of TAI much worse than it is misunderstood now.

Systems stating that they have TAI or UTC time are not claiming to have
"the" TAI or "the" UTC. What is meant is that their local timescale attempts
to approximate TAI or UTC (respectively). This is indeed a fine point to
clarify. In that sense GPS time is not TAI - 19 s, but the GPS timescale
is a TAI approximation offset by 19 seconds. The deviation of the GPS
timescale from TAI is to be found in the monthly documentation.

I don't think it is very helpful for the larger audience to say that "your
computer does not have UTC", even if I agree with you technically. We just
have to be careful with language. It is indeed useful to use the terms "TAI
time" and "UTC time", with the understanding that their representations are
implicitly always local representations and approximations of said timescales.

The one thing we must make clear is that UTC is a timescale derived from
TAI, and that the difference between them changes over time.

So, while I agree that we should technically keep things very separate,
outside of a very small group of people you benefit from having a reduced
set of terms in order to get the point through, and as it is, that is hard
enough. The richness and fine detail of the terms we have should not be
lost (in fact there is still room for improvement), but we should make
sure that we agree on how the reduced description is to be interpreted.

I see nothing wrong with statements such as "my computer has UTC", since I
will never believe that it has THE UTC, but rather some traceability to UTC,
or at least a very poor attempt (reset every now and then). I will naturally
challenge the traceability of that UTC representation (and indeed I have
done so, to much amusement of myself and confusion of certain authorities
and government agencies).

Let's make sure we win the wars we can win, and make a few smart comments
along the way to let people know that there is a more refined view if they
need it.

Cheers,
Magnus


Re: Introduction of long term scheduling

2007-01-08 Thread Ashley Yakeley

On Jan 8, 2007, at 22:57, Steve Allen wrote:


GPS is not (TAI - 19)


What is GPS time, anyway? I had assumed someone had simply defined
GPS to be TAI - 19, and made the goal of the satellites to
approximate GPS time, i.e. that GPS and TAI are the same (up to
isomorphism in some "category of measurements"). But apparently not?
Are the satellite clocks allowed to drift, or do they get corrected?

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-08 Thread Steve Allen
On Mon 2007-01-08T01:54:56 +, Zefram hath writ:
> Possibly TT could also be used in some form, for interval calculations
> in the pre-caesium age.

Please do not consider the use of TT as a driver for the development
of any sort of commonplace API.  In the far past no records were made
using TT for the timestamp, and nobody ever will use TT except when
comparing with ancient eclipse records.

I agree that system time should increment in as uniform a fashion as
possible, but amplifying reasons recently listed here I disagree that
anyone should specify that the operating system uses TAI.  TAI is
TAI, and nothing else is TAI.  Note that even in the history of TAI
itself there have been serious discussions and changes in the scale
unit of TAI to incorporate better notions of the underlying physics.

GPS is not (TAI - 19), UTC is not (TAI - 33).  Millions of computers
claiming to be running using TAI as their system time, even if they
have rice-grain-sized cesium resonators as their motherboard clocks,
will not make that statement true.  Instead it will simply obscure
the concept of TAI much worse than it is misunderstood now.

For simplicity, sure, let earthbound systems try to track TAI.  For
simple systems just let the simple algorithm assume that the
tolerances are large enough that it is safe to make time conversions
as if the timestamps were TAI.  But it is probably safer to come up
with a name for "the timescale my system clock keeps that I wish were
TAI but I know it really is not".

Don't forget that UTC and TAI are coordinate times which are difficult
to define off the surface of the earth.  For chronometers outside of
geostationary orbit the nonlinear deviations between the rate of a
local oscillator and an earthbound clock climb into the realm of
perceptibility.  Demonstrating that the proper time of a chronometer
is notably different from the coordinate time of TAI is now child's play
(or at least it is if you are one of Tom Van Baak's kids.  See
http://www.leapsecond.com/great2005/ )
There seems little point in claiming to use a uniform time scale for a
reference frame whose rate of proper time is notably variable from
your own.

Right now most general purpose computing systems with clocks are on
the surface of the earth, so counting UTC as subdivisions of days
makes sense.  Off the surface of the earth it isn't clear why it's
relevant to demand that the operating system time scale should result
in formatted output that resembles how things were done with the
diurnal rhythm of that rock over there.

Right now NTP can keep systems synchronized to a few microseconds, but
no two clocks ever agree.  Even if we stick to discussing systems on
earth, what happens when the operations of distributed systems demand
an even tighter level of sync than NTP can provide?

It is relatively easy to calculate when the lack of sync between clock
and sun will become a problem if leap seconds are abandoned: around
600 years.
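
A rough version of that calculation, assuming the excess length of day is
about 2 ms now and grows by about 2 ms per century from tidal deceleration:

    drift(t) ~ 0.73 s/yr * t  +  0.0037 s/yr^2 * t^2
    drift(600 yr) ~ 440 s + 1300 s, i.e. about half an hour

by which point clock and sun disagree noticeably even for civil purposes.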

What if general purpose systems do not have a means of acknowledging
and dealing with the fact that their system chronometer has deviated
from the agreeable external time, or if there is no agreeable external
time?

I don't think that handling leap seconds is the biggest issue that the
evolution of general purpose computer timekeeping is going to face,
and I think that not facing the other issues soon will result in
problems well before 600 years have elapsed.

--
Steve Allen <[EMAIL PROTECTED]>                               WGS-84 (GPS)
UCO/Lick Observatory        Natural Sciences II, Room 165    Lat  +36.99858
University of California    Voice: +1 831 459 3046           Lng -122.06014
Santa Cruz, CA 95064        http://www.ucolick.org/~sla/     Hgt +250 m


Re: Introduction of long term scheduling

2007-01-08 Thread M. Warner Losh
In message: <[EMAIL PROTECTED]>
Tony Finch <[EMAIL PROTECTED]> writes:
: On Sat, 6 Jan 2007, M. Warner Losh wrote:
: >
: > Unfortunately, the kernel has to have a notion of time stepping around
: > a leap-second if it implements ntp.
:
: Surely ntpd could be altered to isolate the kernel from ntp's broken
: timescale (assuming the kernel has an atomic seconds count timescale)

ntpd is the one that mandates it.

One could use an atomic scale in the kernel, but nobody that I'm aware
of does.

Warner


Re: Introduction of long term scheduling

2007-01-08 Thread Tony Finch
On Sat, 6 Jan 2007, M. Warner Losh wrote:
>
> Unfortunately, the kernel has to have a notion of time stepping around
> a leap-second if it implements ntp.

Surely ntpd could be altered to isolate the kernel from ntp's broken
timescale (assuming the kernel has an atomic seconds count timescale)

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
NORTHWEST BISCAY: SOUTHWEST 5 TO 7, OCCASIONALLY GALE 8. VERY ROUGH. RAIN OR
SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-08 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, Zefram writes:
>Poul-Henning Kamp wrote:
>>We certainly don't want to transmit the leap-second table with every
>>single NTP packet, because, as a result, we would need to examine
>>it every time to see if something changed.
>
>Once we've got an up-to-date table, barring faults, we only need to check
>to see whether the table has been extended further into the future.

Wrong.  Somebody somewhere will fat-finger the table, and that delta
needs to be revocable.

>>Furthermore, you will not getaround a strong signature on the
>>leap-second table, because if anyone can inject a leap-second table
>>on the internet, there is no end to how much fun they could have.
>
>This issue applies generally with time synchronisation, does it not?
>NTP has authentication mechanisms.

Yes, and nobody uses them because they are too hard to set up.

But the crypto overhead is yet another reason why the leap table
shall not be sent in each and every NTP packet.


--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-08 Thread Tony Finch
On Mon, 8 Jan 2007, Zefram wrote:

> Possibly TT could also be used in some form, for interval calculations
> in the pre-caesium age.

In that case you'd need a model (probably involving rubber seconds) of the
TT<->UT translation. It doesn't seem worth doing to me because of the
small number of applications that care about that level of precision that
far in the past.

The main requirement for a proleptic timescale is that it is useful for
most practical purposes. Therefore it should not be excessively
complicated, such as requiring a substantially different implementation of
time in the past to time in the present. What we actually did in the past
was make a smooth(ish) transition from universal time to atomic time, so
it would seem reasonable to implement (a simplified version of) that in
our systems. In practice this means saying that we couldn't tell the
difference between universal time and uniform time before a certain date,
which we model as a leap second offset of zero.
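
A sketch of that model in code: start the table at 1972, and let anything
earlier fall through with offset zero (the table here is abbreviated; the
entries are the familiar published ones):

#include <stddef.h>
#include <time.h>

struct leap { time_t when; int tai_minus_utc; };

static const struct leap table[] = {
    {   63072000, 10 },   /* 1972-01-01 */
    {   78796800, 11 },   /* 1972-07-01 */
    /* ... */
    { 1136073600, 33 },   /* 2006-01-01 */
};

int tai_minus_utc(time_t t)
{
    int off = 0;   /* before the table: UT and uniform time conflated */
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (t >= table[i].when)
            off = table[i].tai_minus_utc;
    return off;
}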

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
BAILEY: SOUTHWEST 5 TO 7 BECOMING VARIABLE 4. ROUGH OR VERY ROUGH. SHOWERS,
RAIN LATER. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-08 Thread Zefram
Poul-Henning Kamp wrote:
>We certainly don't want to transmit the leap-second table with every
>single NTP packet, because, as a result, we would need to examine
>it every time to see if something changed.

Once we've got an up-to-date table, barring faults, we only need to check
to see whether the table has been extended further into the future.
If we put the expiry date first in the packet then that'll usually be
just a couple of machine instructions to know that there's no new data.
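
A sketch of that fast path (the field layout is hypothetical: a
network-order expiry stamp as the first four octets of the table):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

static uint32_t have_expiry;               /* expiry of the table we hold */

int table_is_news(const uint8_t *pkt)
{
    uint32_t expiry;
    memcpy(&expiry, pkt, sizeof expiry);   /* expiry comes first */
    return ntohl(expiry) > have_expiry;    /* the cheap common case */
}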

If an erroneous table is distributed, we want to pick up corrections
eventually, but we don't have to check every packet for that.  Not that
it would be awfully expensive to do so, anyway.

>Furthermore, you will not get around a strong signature on the
>leap-second table, because if anyone can inject a leap-second table
>on the internet, there is no end to how much fun they could have.

This issue applies generally with time synchronisation, does it not?
NTP has authentication mechanisms.

-zefram


Re: Introduction of long term scheduling

2007-01-08 Thread Peter Bunclark
On Mon, 8 Jan 2007, Zefram wrote:
>
> Conciseness is useful for network protocols.  Bandwidth is increasingly
> the limiting factor: CPU speed and bulk storage sizes have been
> increasing faster.  An NTPv3 packet is only 48 octets of UDP payload;
> if a leap second table is to be disseminated in the same packets then
> we really do want to think about the format nybble by nybble.

Surely not; NTP worked fine back when LANs were dozens of machines on
shared 10 Mb/s Ethernet and WANs were 64 kb/s if you were lucky. With
switched gigabit and broadband to homes we have orders of magnitude more
bandwidth than in the early days.
Also, much of the overhead is in packet processing; as long as you keep
a message down to one packet, the transmission time really does go as the
raw wire speed.

I agree totally that data delivered with a timestamp should be concise, but
it shouldn't be obfuscated for the sake of an infinitesimal amount of
bandwidth.

Cheers
Peter.


Re: Introduction of long term scheduling

2007-01-08 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, Zefram writes:
>Clive D.W. Feather wrote:
>>Firstly, I agree with Steve when he asks "why bother?". You're solving the
>>wrong problem.
>
>Conciseness is useful for network protocols.

On the other hand, one should not forget that the OSI protocols were
killed by conciseness to the point of obscurity.

And next thing, somebody is going to argue for GZIP encoding of the
list, and next thing you know, all programs need to drag libz in
to uncompress their leap-second table.

A major part of the Internet's success was that you could telnet
to practically all servers (FTP, SMTP, NNTP, etc.) and you could see
what went on without a protocol analyzer with a price tag of $CALL.

>the limiting factor: CPU speed and bulk storage sizes have been
>increasing faster.  An NTPv3 packet is only 48 octets of UDP payload;
>if a leap second table is to be disseminated in the same packets then
>we really do want to think about the format nybble by nybble.

No we don't.

We certainly don't want to transmit the leap-second table with every
single NTP packet, because, as a result, we would need to examine
it every time to see if something changed.

Furthermore, you will not get around a strong signature on the
leap-second table, because if anyone can inject a leap-second table
on the internet, there is no end to how much fun they could have.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-08 Thread Zefram
Clive D.W. Feather wrote:
>Firstly, I agree with Steve when he asks "why bother?". You're solving the
>wrong problem.

Conciseness is useful for network protocols.  Bandwidth is increasingly
the limiting factor: CPU speed and bulk storage sizes have been
increasing faster.  An NTPv3 packet is only 48 octets of UDP payload;
if a leap second table is to be disseminated in the same packets then
we really do want to think about the format nybble by nybble.

I'd like whatever format we use to be able to explicitly state its
starting point.  That way a long table can be split up into smaller
chunks, to fit a fixed-size field in synchronisation packets.  Time
servers could send out a randomly-selected chunk of table with each
packet, so that clients pick up the complete table over time without
having to do anything special.
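
A hypothetical chunk layout along those lines, with the explicit starting
point carried in each chunk so that randomly-received pieces can be
stitched together:

#include <stdint.h>

struct leap_chunk {
    uint16_t start_month;   /* months from the table's base date */
    uint16_t expiry_month;  /* the same expiry stamp in every chunk */
    uint8_t  deltas[4];     /* a few table entries, zero-padded */
};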

-zefram


Re: Introduction of long term scheduling

2007-01-08 Thread Clive D.W. Feather
Rob Seaman said:
> Which raises the question of how concisely one can express a leap
> second table.

Firstly, I agree with Steve when he asks "why bother?". You're solving the
wrong problem.

However, having said that:

> So, let's see - assume:
>1) all 20th century leap seconds can be statically linked
>2) start counting months at 2000-01-31
> We're seeing about 7 leapseconds per decade on average, round up to
> 10 to allow for a few decades worth of quadratic acceleration (less
> important for the next couple of centuries than geophysical noise).
> So 100 short integers should suffice for the next century and a
> kilobyte likely for the next 500 years.  Add one short for the
> expiration date, and a zero short word for an end of record stopper
> and distribute it as a variable length record - quite terse for the
> next few decades.  The current table would be six bytes (suggest
> network byte order):
>
>0042 003C 

That's far too verbose a format.

Firstly, once you've seen the value 003C, you know all subsequent values
will be greater. So why not delta encode them (i.e. each entry is the
number of months since the previous leap second)? If you assume that leap
seconds will be no more than 255 months apart, then you only need one byte
per leap second. But you don't even need that assumption: a value of 255
can mean 255 months without a leap second (I'm assuming we're reserving 0
for end-of-list).
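
A small sketch of decoding that byte stream (255 as a gap marker, 0 as the
terminator, as just described):

#include <stdint.h>
#include <stdio.h>

static void decode_deltas(const uint8_t *p)
{
    unsigned months = 0;           /* months since the base date */
    for (; *p != 0; p++) {
        months += *p;
        if (*p != 255)             /* 255 is a gap, not a leap second */
            printf("leap second at month %u\n", months);
    }
}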

But we can do better. At present leap seconds come at 6 month boundaries.
So let's encode using 4 bit codons:

* Start with the "unit size" being 6 months.
* A codon of 1 to 15 means the next leap second is N units after the
  previous one.
* A codon of 0 is followed by a second codon:
  - 1, 3, 6, or 12 sets the unit size;
  - 0 means the next item is the expiry date, after which the list ends
  (this assumes the expiry is after the last leap second; I wasn't
  clear if you expect that always to be the case);
  - 15 means 15 units without a leap second;
  - other values are reserved for future expansion.

So the present table is A001. Two bytes instead of six.

If we used 1980 as the base instead of 2000, the table would be:

3224 5423 2233 3E00 1x

where the final 'x' nybble can take any value.

I'm sure that some real thought could compress the data even more; based on
leap second history, 3-bit codons would probably be better than 4.
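
For reference, a sketch of a decoder following the codon grammar above
(the expiry-date encoding is left unspecified here, as in the text):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Pull successive 4-bit codons from a byte array, high nybble first. */
static unsigned next_codon(const uint8_t *buf, size_t *pos)
{
    unsigned byte = buf[*pos / 2];
    unsigned c = (*pos % 2 == 0) ? (byte >> 4) : (byte & 0x0F);
    (*pos)++;
    return c;
}

static void decode(const uint8_t *buf)
{
    size_t pos = 0;
    unsigned unit = 6;      /* months per unit, initial value */
    unsigned months = 0;    /* months elapsed since the base date */

    for (;;) {
        unsigned c = next_codon(buf, &pos);
        if (c != 0) {
            months += c * unit;          /* leap second c units later */
            printf("leap second at month %u\n", months);
            continue;
        }
        switch (next_codon(buf, &pos)) {
        case 1: case 3: case 6: case 12:
            unit = c; break;             /* set the unit size */
        case 15:
            months += 15 * unit; break;  /* 15 leap-free units */
        case 0:
            printf("end of list; expiry date follows\n");
            return;
        default:
            printf("reserved codon\n");
            return;
        }
    }
}

int main(void)
{
    const uint8_t table[] = { 0xA0, 0x01 };  /* "A001" from the text */
    decode(table);
    return 0;
}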

--
Clive D.W. Feather  | Work:  <[EMAIL PROTECTED]>   | Tel:+44 20 8495 6138
Internet Expert | Home:  <[EMAIL PROTECTED]>  | Fax:+44 870 051 9937
Demon Internet  | WWW: http://www.davros.org | Mobile: +44 7973 377646
THUS plc||


Re: Introduction of long term scheduling

2007-01-07 Thread Zefram
Daniel R. Tobias wrote:
>Formulas for UTC, as actually defined at the time, go back to 1961
>here:

But that involves rubber seconds, which is quite a big complication to add
to your TAI<->UTC conversion.  If you're going to handle pre-1972 times,
I think you really need to decide what you'll do prior to 1961 (when
UTC doesn't exist at all) and prior to 1955 (when there is no atomic
timescale).  POSIX punts on this: prior to 1972 time_t is implicitly
tied to an unspecified variety of UT.  (I wrote a bit about this aspect
of time_t in the Wikipedia article [[Unix time]].)

To be rigorously correct, applications should have distinct
representations for TAI dates, UTC dates, and UT dates.  TAI dates can't
legitimately exist that precede the first atomic timescale, and UTC dates
can't legitimately exist that precede either 1961 or 1972 (depending on
which concept of UTC is in use).  UT dates can exist for any time back to
four or five gigayears ago.  TAI and UT dates can exist arbitrarily far
in the future; future UTC dates can be considered, but some have only a
tentative existence.  Conversions can't be done very far into the future.
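
Those existence rules are simple enough to state in code (a sketch; the
boundary years are the ones just given, with the two concepts of UTC kept
apart, and the tentativeness of future UTC dates not captured):

enum timescale { TS_TAI, TS_UTC_RUBBER, TS_UTC_MODERN, TS_UT };

/* Can a date on this timescale legitimately exist in a given year? */
static int date_can_exist(enum timescale ts, int year)
{
    switch (ts) {
    case TS_TAI:        return year >= 1955;  /* first atomic timescale */
    case TS_UTC_RUBBER: return year >= 1961;  /* original UTC */
    case TS_UTC_MODERN: return year >= 1972;  /* leap-second UTC */
    case TS_UT:         return 1;  /* back four or five gigayears */
    }
    return 0;
}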

How things would work out in a system that pervasively used this
rigorously correct model I'm not sure.  We've already discussed
the aspects relating to present and future times.  For distant past
times it'd be necessary to explicitly process timestamps in UT form.
Possibly TT could also be used in some form, for interval calculations
in the pre-caesium age.

Whatever timescales are used for pre-1955 dates are, of course, also
available for present and future dates.  Perhaps, in this system,
many applications dealing with distant past dates would just use TT and
vague-UT for present and future dates as well.  This sounds sensiblish
to me: sub-second contemporary timestamps and also pre-caesium timestamps
in the same application seems like a specialised requirement.

-zefram


Re: Introduction of long term scheduling

2007-01-07 Thread Tony Finch
On Sun, 7 Jan 2007, Daniel R. Tobias wrote:
>
> Formulas for UTC, as actually defined at the time, go back to 1961
> here:

You helpfully snipped the part where I said that it probably isn't
worth implementing rubber seconds.

> ftp://maia.usno.navy.mil/ser7/tai-utc.dat
> It appears the offset was 1.4228180 seconds at the start of this.

If you extend the initial period backwards to the start of 1958 then the
offset drops close to zero.

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
FITZROY: WEST BACKING SOUTHWEST 5 TO 7, OCCASIONALLY GALE 8. ROUGH OR VERY
ROUGH, OCCASIONALLY HIGH. SHOWERS, THEN RAIN. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-07 Thread Daniel R. Tobias
On 8 Jan 2007 at 0:15, Tony Finch wrote:

> How did you extend the UTC translation back past 1972 if the underlying
> clock followed TAI? I assume that beyond some point in the past you say
> that the clock times are a representation of UT. However TAI matched UT in
> 1958 and between then and 1972 you somehow have to deal with a 10s offset.

Formulas for UTC, as actually defined at the time, go back to 1961
here:

ftp://maia.usno.navy.mil/ser7/tai-utc.dat

It appears the offset was 1.4228180 seconds at the start of this.

--
== Dan ==
Dan's Mail Format Site: http://mailformat.dan.info/
Dan's Web Tips: http://webtips.dan.info/
Dan's Domain Site: http://domains.dan.info/


Re: Introduction of long term scheduling

2007-01-07 Thread Tony Finch
On Sun, 7 Jan 2007, M. Warner Losh wrote:
>
> Having tried it, there are a lot of little 33-second anomalies in many
> applications :-(.

How did you extend the UTC translation back past 1972 if the underlying
clock followed TAI? I assume that beyond some point in the past you say
that the clock times are a representation of UT. However TAI matched UT in
1958 and between then and 1972 you somehow have to deal with a 10s offset.
It would be over-engineering to implement rubber seconds for the whole
system when only very few applications need them. I suppose you could
invent a leap second schedule for the 1960s, but perhaps it's more
sensible to define the underlying timescale to be TAI+10 so that your
system makes the universal->atomic time split at the point when UTC was
established.

> I've toyed with the idea of running the kernel in TAI and having 'smart'
> processes tell libc they want no UTC translation and having all the
> TAI<->UTC translation happen in libc (also hacking those FS that want
> UTC time to be able to get it).

It would seem sensible to me to fix time_t's lack of range at the same
time as fixing its model of time. Perhaps the transition model to follow
is the one used on non-BSD systems for large file support, which allows
the developer to either set a compile time flag to get the new behaviour,
or use a wide form of the API, e.g. stat64() instead of stat().
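
A hypothetical sketch of that pattern applied to time (the names and the
feature-test macro are invented for illustration):

#include <stdint.h>

typedef int64_t time64_t;           /* wide, leap-aware seconds count */

time64_t time64(time64_t *tloc);    /* wide form of time() */

#if defined(_TIME64_SOURCE)         /* invented compile-time flag */
#define time_t time64_t
#define time   time64
#endif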

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
ROCKALL: SOUTHWEST 6 BECOMING CYCLONIC 6 TO GALE 8, PERHAPS SEVERE GALE 9
LATER. VERY ROUGH BECOMING HIGH. SHOWERS, RAIN LATER. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-07 Thread Tony Finch
On Sun, 7 Jan 2007, M. Warner Losh wrote:
>
> [POSIX time] is designed to be UTC, but fails to properly implement
> UTC's leap seconds and intervals around leapseconds.

From the historical point of view I'd say that UNIX time was originally
designed to be some vague form of UT, and the POSIX committee retro-fitted
a weak form of UTC synchronization.

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
DOGGER FISHER GERMAN BIGHT HUMBER: SOUTHWEST, VEERING NORTHWEST FOR A TIME, 6
TO GALE 8, OCCASIONALLY SEVERE GALE 9 IN DOGGER. ROUGH OR VERY ROUGH. RAIN OR
SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-07 Thread M. Warner Losh
In message: <[EMAIL PROTECTED]>
Rob Seaman <[EMAIL PROTECTED]> writes:
: What is correct is to have a 61 second minute occasionally, neither
: to redo the first second of the next day, nor to repeat the last
: second of the current day.

Unfortunately, that's not POSIX time_t.  And when you are implementing
unix kernel APIs, that is the only game in town, unless you invent
your own.  ntp_gettime comes close to implementing things right (on
FreeBSD the time state is synchronous to top of second), but even that
is a mess.

You can't do both 61 second minutes and also have the 'naive math'
that time_t requires.

I agree with others who have said that NTP should be TAI or GPS based
(i.e. a linear count of seconds in a timescale that doesn't have leap
seconds), and that leap second corrections should happen in userland
as is done for timezones.  That's really the only sane way to do
things.  Having tried it, there are a lot of little 33-second
anomalies in many applications :-(.  I've toyed with the idea of
running the kernel in TAI and having 'smart' processes tell libc they
want no UTC translation and having all the TAI<->UTC translation
happen in libc (also hacking those FS that want UTC time to be able to
get it).  Maybe I'll try it and see what breaks since the line between
applications and libraries can make a per-process flag hard to cope
with if your application links with a lot of libraries.

Warner


Re: Introduction of long term scheduling

2007-01-07 Thread Rob Seaman

Tony Finch wrote:


As http://www.eecis.udel.edu/~mills/leap.html shows, NTP (with kernel
support) is designed to stop the clock over the leap second, which I
don't call "correct". Without kernel support it behaves like a
"pinball
machine"  (according to Mills).


Warner Losh wrote:


It implements exactly what ntpd wants.  I asked Judah Levine when
determining what was pedantically correct during the leap second.


Well, I didn't say that NTP, and Mills and Levine themselves,
currently form a timekeeping model self-consistent with UTC; rather, I
was trying to suggest that POSIX wasn't the only game in town.  I've
made more successful rhetorical choices...


I also consulted the many different resources available to
determine what the right thing is.


What is correct is to have a 61 second minute occasionally, neither
to redo the first second of the next day, nor to repeat the last
second of the current day.

Presumably no one would object if the ITU made it easier to obtain a
copy of 460.4.

Rob


Re: Introduction of long term scheduling

2007-01-07 Thread Zefram
M. Warner Losh wrote:
> But ntp_gettime returns a timespec
>for the time, as well as a time_state for the current time status,
>which includes TIME_INS and TIME_DEL for positive and negative leap
>second 'warning' for end of the day so you know there will be a leap
>today, and TIME_WAIT for the actual positive leap second itself
>(there's nothing for a negative leapsecond, obviously).

Actually the interface is more complicated than that.  TIME_WAIT indicates
that a leap second recently occurred, and continues to be returned
until the command bit that initiated the leap second has been cleared.
The status during a leap second is meant to be TIME_OOP.  TIME_OK is the
normal state.  If the clock is not properly synchronised then TIME_ERROR
(a.k.a. "TIME_ERR" or "TIME_BAD") is returned instead of the leap state:
the leap second engine still operates in this situation (at least it
does on Linux), but you don't get to see the state variable.

Mills's paper "A Kernel Model for Precision Timekeeping"
 says that the
time_t should repeat the last second of the day.  That would give you
these behaviours:

negative leap      no leap          positive leap
398 TIME_DEL       398 TIME_OK      398 TIME_INS
400 TIME_WAIT      399 TIME_OK      399 TIME_INS
401 TIME_WAIT      400 TIME_OK      399 TIME_OOP
402 TIME_WAIT      401 TIME_OK      400 TIME_WAIT

Actually, though, the paper doesn't require the state change to be
atomic with the change of time_t.  It's allowed to be slightly delayed.
(It is in fact delayed a few milliseconds on Linux.)  So what is actually
seen is this:

negative leap        no leap            positive leap
398.5 TIME_DEL       398.5 TIME_OK      398.5 TIME_INS
399.0 TIME_DEL       399.0 TIME_OK      399.0 TIME_INS
400.5 TIME_WAIT      399.5 TIME_OK      399.5 TIME_INS
401.0 TIME_WAIT      400.0 TIME_OK      400.0 TIME_INS
401.5 TIME_WAIT      400.5 TIME_OK      399.5 TIME_OOP
402.0 TIME_WAIT      401.0 TIME_OK      400.0 TIME_OOP
402.5 TIME_WAIT      401.5 TIME_OK      400.5 TIME_WAIT

There is enough information in there to fully decode it, but it means
looking at more states than would be required in the nominal version.
For example, if you see TIME_INS with a time that appears to be just
after midnight, then you're actually inside a positive leap second.
The second that time_t repeats is neither the second before midnight
[399, 400] nor the second after midnight [400, 401], but something
between those, encompassing midnight, approximately [399.005, 400.005].

Interestingly, once you've got all the decode logic required to handle
this, it's also possible to handle a system where time_t repeats the
first second of the next day.  This system wouldn't use the TIME_OOP
state at all, using TIME_INS to indicate the leap second.  It's just
the extreme end of where the repeated second could be placed.
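
A sketch of that decode logic (assuming the delayed-state behaviour shown
in the second table; ntp_gettime() returns the state on Linux and FreeBSD):

#include <stdio.h>
#include <sys/timex.h>

/* Decide whether we are inside an inserted leap second. */
static int in_positive_leap(const struct ntptimeval *ntv, int state)
{
    if (state == TIME_OOP)
        return 1;                        /* nominal leap-second state */
    /* the state change may lag the clock slightly: TIME_INS with a
       reading in the first second after midnight also means we are
       inside the inserted second */
    if (state == TIME_INS && ntv->time.tv_sec % 86400 == 0)
        return 1;
    return 0;
}

int main(void)
{
    struct ntptimeval ntv;
    int state = ntp_gettime(&ntv);
    printf("%s\n", in_positive_leap(&ntv, state) ? ":60" : "normal");
    return 0;
}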

I think this is a horrible mess and all the leap second adjustments should
happen in user space.  (Filesystems that use UTC-based timestamps would
require the kernel to be able to translate too, but this shouldn't affect
the clock or the APIs.)  NTP has problems because the leap second handling
was grafted on and it has no memory of leap seconds.  Synchronisation
and kernel APIs should use a plain linear count of TAI seconds, because
they're not concerned with time-of-day per se.  Anything that does care
about time-of-day can do the conversion itself, just like anything that
cares about day-of-week or day-of-year.  UTC is a calendar.

-zefram


Re: Introduction of long term scheduling

2007-01-07 Thread M. Warner Losh
In message: <[EMAIL PROTECTED]>
Rob Seaman <[EMAIL PROTECTED]> writes:
: > If by "some limp attempt" you mean "returns the correct time" then you
: > are correct.
:
: It's not the correct time under the current standard if the
: timekeeping model doesn't implement leap seconds correctly.  I don't
: think this is an impossible expectation, see http://
: www.eecis.udel.edu/~mills/exec.html, starting with the Levine and
: Mills PTTI paper.

It implements exactly what ntpd wants.  I asked Judah Levine when
determining what was pedantically correct during the leap second.  I
also consulted the many different resources available to
determine what the right thing is.  Of course, there are different
explanations of what the leap second should look like depending on
whether you listen to Dr. Levine or Dr. Mills.  Dr. Mills's web site says
'redo the first second of the next day' while Dr. Levine's
leapsecond.dat file says 'repeat the last second of the day.'
Actually, both of them hedge and say 'most systems implement...' or
some variation on that theme.

It is possible to determine when you are in a leap second using ntp
extensions with their model.  Just not with POSIX interfaces.  The
POSIX interfaces are kludged, while the ntpd ones give you enough info
to know to print :59 or :60, but POSIX time_t is insufficiently
expressive, by itself, to know.  But ntp_gettime returns a timespec
for the time, as well as a time_state for the current time status,
which includes TIME_INS and TIME_DEL as positive and negative leap
second 'warnings' for the end of the day, so you know there will be a leap
today, and TIME_WAIT for the actual positive leap second itself
(there's nothing for a negative leap second, obviously).

So I stand by my "returns the correct time" statement.

Warner


Re: Introduction of long term scheduling

2007-01-07 Thread M. Warner Losh
In message: <[EMAIL PROTECTED]>
Tony Finch <[EMAIL PROTECTED]> writes:
: On Sat, 6 Jan 2007, M. Warner Losh wrote:
: >
: > Most filesystems store time as UTC anyway...
:
: POSIX time is not UTC :-)

True.  It is designed to be UTC, but fails to properly implement UTC's
leap seconds and the intervals around them.

Warner


Re: Introduction of long term scheduling

2007-01-07 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, David Malone writes:

>FWIW, I believe most hospitals are more than capable of looking
>after equipment with complex maintenance schedules.

It is not just a question of ability; to a very high degree,
cost is much more important.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-07 Thread Tony Finch
On Sun, 7 Jan 2007, Rob Seaman wrote:
>
> It's not the correct time under the current standard if the
> timekeeping model doesn't implement leap seconds correctly.  I don't
> think this is an impossible expectation, see http://
> www.eecis.udel.edu/~mills/exec.html, starting with the Levine and
> Mills PTTI paper.

As http://www.eecis.udel.edu/~mills/leap.html shows, NTP (with kernel
support) is designed to stop the clock over the leap second, which I
don't call "correct". Without kernel support it behaves like a "pinball
machine" (according to Mills).

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
SOUTHEAST ICELAND: CYCLONIC 6 TO GALE 8, BECOMING VARIABLE 4 FOR A TIME. ROUGH
OR VERY ROUGH. OCCASIONAL RAIN OR WINTRY SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-07 Thread Tony Finch
On Sat, 6 Jan 2007, M. Warner Losh wrote:
>
> Most filesystems store time as UTC anyway...

POSIX time is not UTC :-)

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
SOUTHEAST ICELAND: CYCLONIC 6 TO GALE 8, DECREASING 5 OR 6 LATER. ROUGH OR
VERY ROUGH. OCCASIONAL RAIN OR WINTRY SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-07 Thread David Malone
> So you think it is appropriate to demand that every computer with a
> clock should suffer biannual software upgrades if it is not connected
> to a network where it can get NTP or similar service ?

> I know people who will disagree with you:

> Air traffic control
> Train control
> Hospitals

> and the list goes on.

FWIW, I believe most hospitals are more than capable of looking
after equipment with complex maintenance schedules. They have
endoscopes, blood gas analysers, gamma cameras, MRI machines,
dialysis machines and a rake of other stuff that has a schedule
requiring attention more regularly than once every 6 months.

I am not sure how much un-networked equipment that requires UTC to
1 second and doesn't already have a suitable maintenance schedule
exists in hospitals.

David.


Re: Introduction of long term scheduling

2007-01-07 Thread Rob Seaman

Warner Losh wrote:


Actually, not every IP packet has a 1's complement checksum.  Sure,
there is a trivial one that covers the 20 bytes of header, but that's
it.  Most systems these days offload checksumming to the hardware
anyway to increase throughput.  Maybe you are thinking of TCP or
UDP :-).  Often, the packets are copied and therefore in the cache, so
the addition operations are very cheap.


Ok.  I simplified.  There are several layers of checksums.  I
designed an ASCII encoded checksum for the astronomical FITS format
and should not have been so sloppy.  "They do it in hardware" could
be taken as an argument for how time should be handled, as well.


Adding or subtracting two of them is relatively easy.


Duly stipulated, your honor.


Converting to a broken down format or doing math
with the complicated forms is much more code intensive.


And should the kernel be expected to handle "complicated forms" of
any data structure?


Dealing with broken down forms, and all the special cases usually
involves
multiplication and division, which tend to be more computationally
expensive than the checksum.


Indeed.  May well be.  I would suggest that the natural scope of this
discussion is the intrinsic requirements placed on the kernel, just
as it should be the intrinsic requirements of the properly traceable
distribution and appropriate usage of time-of-day and interval
times.  Current kernels (and other compute layers, services and
facilities) don't appear to implement a coherent model of
timekeeping.  Deprecating leap seconds is not a strategy for making the
model more coherent; rather, it is just the timekeeping equivalent of
Lysenkoism.


Having actually participated in the benchmarks that showed the effects
of inefficient timekeeping, I can say that they have a measurable
effect.  I'll try to find references that the benchmarks generated.


With zero irony intended, that would be thoroughly refreshing.


If by "some limp attempt" you mean "returns the correct time" then you
are correct.


It's not the correct time under the current standard if the
timekeeping model doesn't implement leap seconds correctly.  I don't
think this is an impossible expectation, see http://
www.eecis.udel.edu/~mills/exec.html, starting with the Levine and
Mills PTTI paper.


You'd think that, but you have to test to see if something was
pending.  And the code actually does that.


Does such testing involve the complex arithmetic you describe above?
(Not a rhetorical question.)  The kernel does a heck of a lot of
conditional comparisons every second.


Did I say anything about eviscerating mean solar time?


Well, these side discussions get a little messy.  The leap second
assassins haven't made any particular fuss about kernel computing
issues, either, just previous and next generation global positioning
and "certain spread spectrum applications" and the inchoate fear of
airplanes falling from the sky.

The probability of the latter occurring seems likely to increase a
few years after leap seconds are finally eradicated - after all,
airplanes follow great circles and might actually care to know the
orientation of the planet.  Hopefully, should such a change occur
courtesy of WP7A, all pilots, all airlines and all air traffic
control centers will get the memo and not make any sign errors in
implementing contingent patches.  It's the height of hubris to simply
assume all the problems vanish with those dastardly leap seconds.  (I
don't suppose the kernel currently has to perform spherical trig?)

Note that the noisy astronomer types on this list are all also
software types, we won't reject computing issues out of hand.


I'm just suggesting that some of the suggested ideas have real
performance issues that means they wouldn't even be considered as
viable options.


Real performance issues will be compelling evidence to all parties.
Real performance issues can be described with real data.


True, but timekeeping is one of those areas of the kernel that is called
so many times that extra overhead hurts a lot more than you'd naively
think.


Either the overhead in question is intrinsic to the reality of
timekeeping - or it is not.  In the latter case, one might expect
that we could all agree that the kernel(s) in question are at fault,
or that POSIX is at fault.  I have little sympathy for the suggestion
that having established that POSIX or vendors are at fault that we
let them get away with it anyway.  Rather, workaround any limitations
in the mean time and redesign properly for the future.

If, however, the overhead is simply the cost of doing timekeeping
right, then I submit that it is better to do timekeeping right than
to do it wrong.  Doing it right certainly may involve appropriate
approximations.  Destroying mean solar time based civil time-of-day
is not appropriate.

Of course, we have yet to establish the extent of any problem with
such overhead.  It sounds like you have expertise in this area.
Asse

Re: Introduction of long term scheduling

2007-01-07 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, "M. Warner Losh" writes:
>In message: <[EMAIL PROTECTED]>
>Tony Finch <[EMAIL PROTECTED]> writes:
>: On Sat, 6 Jan 2007, Ashley Yakeley wrote:
>: >
>: > Presumably it only needs to know the next leap-second to do this, not
>: > the whole known table?
>:
>: Kernels sometimes need to deal with historical timestamps (principally
>: from the filesystem) so it'll need a full table to be able to convert
>: between POSIX time and atomic time for compatibility purposes.
>
>Most filesystems store time as UTC anyway...

Actually, I tend to think these are in the minority, but most of
the non-UTC ones are of minor significance.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-06 Thread M. Warner Losh
In message: <[EMAIL PROTECTED]>
Rob Seaman <[EMAIL PROTECTED]> writes:
: Warner Losh wrote:
: > Anything that makes the math
: > harder (more computationally expensive) can have huge effects on
: > performance in these areas.  That's because the math is done so often
: > that any little change causes big headaches.
:
: Every IP packet has a 1's complement checksum.  (That not all
: switches handle these properly is a different issue.)

Actually, not every IP packet has a 1's complement checksum.  Sure,
there is a trivial one that covers the 20 bytes of header, but that's
it.  Most systems these days offload checksumming to the hardware
anyway to increase throughput.  Maybe you are thinking of TCP or
UDP :-).  Often, the packets are copied and therefore in the cache, so
the addition operations are very cheap.

: Calculating a
: checksum is about as expensive (or more so) than subtracting
: timestamps the right way.  I have a hard time believing that epoch<-
:  >interval conversions have to be performed more often than IP
: packets are assembled.

Benchmarks do not lie.  Also, you are misunderstanding the purpose of
timestamps in the kernel.  Adding or subtracting two of them is
relatively easy.  Converting to a broken down format or doing math
with the complicated forms is much more code intensive.  Dealing with
broken down forms, and all the special cases usually involves
multiplcation and division, when tend to be more computationally
expensive than the checksum.

: One imagines (would love to be pointed to
: actual literature regarding such issues) that most computer time
: handling devolves to requirements for relative intervals and epochs,
: not to stepping outside to any external clock at all.  Certainly the
: hardware clocking of signals is an issue entirely separate from what
: we've been discussing as "timekeeping" and "traceability".  (And note
: that astronomers face much more rigorous requirements in a number of
: ways when clocking out their CCDs.)

Having actually participated in the benchmarks that showed the effects
of inefficient timekeeping, I can say that they have a measurable
effect.  I'll try to find references that the benchmarks generated.

: > Well, the kernel doesn't expect to be able to do that.  Internally,
: > all the FreeBSD kernel does is time based on a monotonically
: > increasing second count since boot.  When time is returned, it is
: > adjusted to the right wall time.
:
: Well, no - the point is that only some limp attempt is made to adjust
: to the right time.

If by "some limp attempt" you mean "returns the correct time" then you
are correct.

: > The kernel only worries about leap
: > seconds when time is incremented, since the ntpd portion in the kernel
: > needs to return special things during the leap second.  If there were
: > no leapseconds, then even that computation could be eliminated.  One
: > might think that one could 'defer' this work to gettimeofday and
: > friends, but that turns out to not be possible (or at least it is much
: > more inefficient to do it there).
:
: One might imagine that an interface could be devised that would only
: carry the burden for a leap second when a leap second is actually
: pending.  Then it could be handled like any other rare phenomenon
: that has to be dealt with correctly - like context switching or
: swapping.

You'd think that, but you have to test to see if something was
pending.  And the code actually does that.

: > Really, it is a lot more complicated than just the 'simple' case
: > you've latched onto.
:
: Ditto for Earth orientation and its relation to civil timekeeping.
: I'm happy to admit that getting it right at the CPU level is
: complex.  Shouldn't we be focusing on that, rather than on
: eviscerating mean solar time?

Did I say anything about eviscerating mean solar time?

: A proposal to actually address the intrinsic complications of
: timekeeping is more likely to be received warmly than is a kludge or
: partial workaround.  I suspect it would be a lot more fun, too.

I'm just suggesting that some of the suggested ideas have real
performance issues that mean they wouldn't even be considered as
viable options.

: > Kernels aren't written in these languages.  To base one's arguments
: > about what the right type for time is on these languages is a
: > non-starter.
:
: No, but the kernels can implement support for these types and the
: applications can code to them in whatever language.  Again - there is
: a hell of a lot more complicated stuff going on under the hood than
: what would be required to implement a proper model of timekeeping.

True, but timekeeping is one of those areas of the kernel where the
code is called so many times that making it more complex hurts a lot
more than you'd naively think.

Warner


Re: Introduction of long term scheduling

2007-01-06 Thread M. Warner Losh
In message: <[EMAIL PROTECTED]>
Tony Finch <[EMAIL PROTECTED]> writes:
: On Sat, 6 Jan 2007, Ashley Yakeley wrote:
: >
: > Presumably it only needs to know the next leap-second to do this, not
: > the whole known table?
:
: Kernels sometimes need to deal with historical timestamps (principally
: from the filesystem) so it'll need a full table to be able to convert
: between POSIX time and atomic time for compatibility purposes.

Most filesystems store time as UTC anyway...

And that's one reason that kernels do a lot of timestamp operations:
whenever a file is touched, one has to update the time it was last
touched.  Then, when the file is statted, that time must be returned.
That makes it very hard to do it all in libc because of different
boot times or NFS, etc.  While one could do the degenerate case of
gettimeofday in libc, these other cases are much harder.

Warner


Re: Introduction of long term scheduling

2007-01-06 Thread Tony Finch
On Sat, 6 Jan 2007, Ashley Yakeley wrote:
>
> Presumably it only needs to know the next leap-second to do this, not
> the whole known table?

Kernels sometimes need to deal with historical timestamps (principally
from the filesystem) so it'll need a full table to be able to convert
between POSIX time and atomic time for compatibility purposes.

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
SHANNON ROCKALL MALIN: MAINLY WEST OR SOUTHWEST 6 TO GALE 8, OCCASIONALLY
SEVERE GALE 9. VERY ROUGH OR HIGH. RAIN OR SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-06 Thread Rob Seaman

Poul-Henning Kamp wrote:


there are only two classes of solutions.


Fix it or ignore it?


It's not a matter of clock precision or clock stability.  It's only
a matter of how they count.


That will be news to Dave Mills.


a state of denial with respect to a particular lump of rock's
ability as timekeeper


Quite right, too - except that we happen to live on this particular
rock.


I know people who will disagree with you:

Air traffic control
Train control
Hospitals


These are all environments in which developers are familiar with
formal requirements and project management.  Are we to suppose that a
floppy statement of "trust us, it will all be all right" is going to
be sufficient?  Eviscerating mean solar time is as likely to decrease
safety as improve it.  How about a coherent risk analysis?  One would
have thought that the lessons of Y2K would have settled in more deeply.

Rob


Re: Introduction of long term scheduling

2007-01-06 Thread John Cowan
M. Warner Losh scripsit:

> Since the interface to the kernel is time_t, there's really no chance
> for the kernel to do anything smarter with leapseconds.  gettimeofday,
> time and clock_gettime all return a time_t in different flavors.

It could be done in the C library, since the interface between the
kernel and libc is not defined, only the interface between libc and
userland programs proper.

> Kernels aren't written in these languages.

They don't have to be: the strong typing can be imposed by
convention.  ISO C got this right: a time_t can be any numeric
type, and difftime is used to find the seconds between two time_t's.
POSIX decided to stick with the old count-of-seconds rules for
arithmetic purposes, while making time_t no longer an actual count
of seconds, as V7 Unix defined it to be.
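
A short illustration of that distinction (standard C only): difftime()
is the portable way to take an interval, while raw subtraction is only
meaningful under the POSIX count-of-seconds reading of time_t.

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t start = time(NULL);
        /* ... do some work ... */
        time_t end = time(NULL);

        /* ISO C: valid whatever numeric type time_t happens to be. */
        double elapsed = difftime(end, start);

        /* POSIX-only shortcut: assumes time_t is a count of seconds. */
        long posix_elapsed = (long)(end - start);

        printf("%.0f %ld\n", elapsed, posix_elapsed);
        return 0;
    }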

--
John Cowanhttp://ccil.org/~cowan[EMAIL PROTECTED]
Mr. Henry James writes fiction as if it were a painful duty.  --Oscar Wilde


Re: Introduction of long term scheduling

2007-01-06 Thread Rob Seaman

Warner Losh wrote:


Anything that makes the math
harder (more computationally expensive) can have huge effects on
performance in these areas.  That's because the math is done so often
that any little change causes big headaches.


Every IP packet has a 1's complement checksum.  (That not all
switches handle these properly is a different issue.)  Calculating a
checksum is about as expensive as (or more expensive than)
subtracting timestamps the right way.  I have a hard time believing
that epoch <-> interval conversions have to be performed more often
than IP packets are assembled.  One imagines (would love to be pointed to
actual literature regarding such issues) that most computer time
handling devolves to requirements for relative intervals and epochs,
not to stepping outside to any external clock at all.  Certainly the
hardware clocking of signals is an issue entirely separate from what
we've been discussing as "timekeeping" and "traceability".  (And note
that astronomers face much more rigorous requirements in a number of
ways when clocking out their CCDs.)


Well, the kernel doesn't expect to be able to do that.  Internally,
all the FreeBSD kernel does is time based on a monotonically
increasing second count since boot.  When time is returned, it is
adjusted to the right wall time.


Well, no - the point is that only some limp attempt is made to adjust
to the right time.


The kernel only worries about leap
seconds when time is incremented, since the ntpd portion in the kernel
needs to return special things during the leap second.  If there were
no leapseconds, then even that computation could be eliminated.  One
might think that one could 'defer' this work to gettimeofday and
friends, but that turns out to not be possible (or at least it is much
more inefficient to do it there).


One might imagine that an interface could be devised that would only
carry the burden for a leap second when a leap second is actually
pending.  Then it could be handled like any other rare phenomenon
that has to be dealt with correctly - like context switching or
swapping.


Really, it is a lot more complicated than just the 'simple' case
you've latched onto.


Ditto for Earth orientation and its relation to civil timekeeping.
I'm happy to admit that getting it right at the CPU level is
complex.  Shouldn't we be focusing on that, rather than on
eviscerating mean solar time?  In general, either "side" here would
have a better chance of convincing the other if actual proposals,
planning, research, requirements, and so forth, were discussed.  The
only proposal on the table - and the only one I spend every single
message trying to shoot down - is the absolutely ridiculous leap hour
proposal.  We're not defending leap seconds per se - we're defending
mean solar time.

A proposal to actually address the intrinsic complications of
timekeeping is more likely to be received warmly than is a kludge or
partial workaround.  I suspect it would be a lot more fun, too.


Kernels aren't written in these languages.  To base one's arguments
about what the right type for time is on these languages is a
non-starter.


No, but the kernels can implement support for these types and the
applications can code to them in whatever language.  Again - there is
a hell of a lot more complicated stuff going on under the hood than
what would be required to implement a proper model of timekeeping.

Rob


Re: Introduction of long term scheduling

2007-01-06 Thread M. Warner Losh
In message: <[EMAIL PROTECTED]>
Ashley Yakeley <[EMAIL PROTECTED]> writes:
: On Jan 6, 2007, at 16:18, M. Warner Losh wrote:
:
: > Unfortunately, the kernel has to have a notion of time stepping around
: > a leap-second if it implements ntp.  There's no way around that that
: > isn't horribly expensive or difficult to code.  The reasons for the
: > kernel's need to know have been enumerated elsewhere...
:
: Presumably it only needs to know the next leap-second to do this, not
: the whole known table?

Yes.  ntpd or another agent tells it when leap seconds are coming.  It
doesn't need a table.  Then again, none of the broadcast time services
provide a table...

Warner


Re: Introduction of long term scheduling

2007-01-06 Thread Ashley Yakeley

On Jan 6, 2007, at 16:18, M. Warner Losh wrote:


Unfortunately, the kernel has to have a notion of time stepping around
a leap-second if it implements ntp.  There's no way around that that
isn't horribly expensive or difficult to code.  The reasons for the
kernel's need to know have been enumerated elsewhere...


Presumably it only needs to know the next leap-second to do this, not
the whole known table?

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-06 Thread Zefram
Poul-Henning Kamp wrote:
>So you think it is appropriate to demand that every computer with a
>clock should suffer biannual software upgrades if it is not connected
>to a network where it can get NTP or similar service?

If it's not connected to the network, how is it keeping its clock
synchronised?  Suppose that every source of synchronisation also provided
the leap second table.  Then the only clocks lacking the table would
be ones that weren't synchronised anyway.  A fallback behaviour that
involves not being able to do precise TAI<->UTC conversions might well
be acceptable if the clock itself isn't precise.

The upcoming chip-scale atomic clocks will change this, of course.
They'll allow a disconnected system to maintain precise interval time
over many years.

-zefram


Re: Introduction of long term scheduling

2007-01-06 Thread M. Warner Losh
In message: <[EMAIL PROTECTED]>
Ashley Yakeley <[EMAIL PROTECTED]> writes:
: On Jan 6, 2007, at 08:35, M. Warner Losh wrote:
:
: > So for the foreseeable future,
: > timestamps in OSes will be a count of seconds and a fractional second
: > part.  That's not going to change anytime soon even with faster
: > machines, more memory, etc.  Too many transaction processing
: > applications demand maximum speed.
:
: That's sensible for a simple timestamp, but trying to squeeze in a
: leap-second table probably isn't such a good idea.

Unfortunately, the kernel has to have a notion of time stepping around
a leap-second if it implements ntp.  There's no way around that that
isn't horribly expensive or difficult to code.  The reasons for the
kernel's need to know have been enumerated elsewhere...

Warner


Re: Introduction of long term scheduling

2007-01-06 Thread Greg Hennessy

Poul-Henning Kamp wrote:

So you think it is appropriate to demand that every computer with a
clock should suffer biannual software upgrades if it is not connected
to a network where it can get NTP or similar service?


Well, I doubt very much that every computer cares. For the small subset
of machines that do care, I think NTP or a GPS receiver solves the
majority of the problem.


Re: Introduction of long term scheduling

2007-01-06 Thread Ashley Yakeley

On Jan 6, 2007, at 14:43, Poul-Henning Kamp wrote:


So you think it is appropriate to demand that every computer with a
clock should suffer biannual software upgrades if it is not connected
to a network where it can get NTP or similar service?


Since that's the consequence of hard-coding a leap-second table,
that's exactly what I'm not proposing. Instead, they should suffer
biannual updates to their leap-second table. Doing this is an
engineering problem, but a known one.

Under your plan B, however, we'd have plenty of software that just
wouldn't get upgraded at all, but would simply fail after ten years.
That strikes me as worse.


I know people who will disagree with you:


I don't think you're serious.


Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe


Don't forget " | one second off since 2018". :-)

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-06 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, Ashley Yakeley
writes:

>Not necessarily. After seven months, or even after two years, there's
>a better chance that the product is still in active maintenance.
>Better to find that particular bug early, if someone's been so
>foolish as to hard-code a leap-second table. The bug here, by the
>way, is not that one particular leap second table is wrong. It's the
>assumption that any fixed table can ever be correct.

So you think it is appropriate to demand that every computer with a
clock should suffer biannual software upgrades if it is not connected
to a network where it can get NTP or similar service?

I know people who will disagree with you:

Air traffic control
Train control
Hospitals

and the list goes on.

6 months is simply not an acceptable warning to get, end of story.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-06 Thread Ashley Yakeley

On Jan 6, 2007, at 13:47, Poul-Henning Kamp wrote:


In message <[EMAIL PROTECTED]>,
Ashley Yakeley
writes:

On Jan 6, 2007, at 11:36, Poul-Henning Kamp wrote:


B. i) Issue leapseconds with at least twenty times longer
notice.


This plan might not be so good from a software engineering point of
view. Inevitably software authors would hard-code the known table,
and then the software would fail ten years later with the first
unexpected leap second.


Ten years later is a heck of a lot more acceptable than 7 months
later.


Not necessarily. After seven months, or even after two years, there's
a better chance that the product is still in active maintenance.
Better to find that particular bug early, if someone's been so
foolish as to hard-code a leap-second table. The bug here, by the
way, is not that one particular leap second table is wrong. It's the
assumption that any fixed table can ever be correct.

If you were to make that assumption in your code, then your product
would be defective if it's ever used ten years from now (under your
plan B). Programs in general tend to be used for a while. Is any of
your software from 1996 or before still in use? I should hope so.

Under the present system, however, it's a lot more obvious that a
hard-coded leap second table is a bad idea.

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-06 Thread Tony Finch
On Sat, 6 Jan 2007, Steve Allen wrote:
>
> No two clocks can ever stay in agreement.

I don't think that statement is useful. Most people have a concept of
accuracy within certain tolerances, dependent on the quality of the clock
and its discipline mechanisms. For most purposes a computer's clock can be
kept correct with more than enough accuracy, and certainly enough accuracy
that leap seconds are noticeable.

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
HEBRIDES BAILEY FAIR ISLE FAEROES: SOUTHWEST 6 TO GALE 8. VERY ROUGH OR HIGH.
RAIN OR SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-06 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, Ashley Yakeley
writes:
>On Jan 6, 2007, at 11:36, Poul-Henning Kamp wrote:
>
>> B. i) Issue leapseconds with at least twenty times longer
>> notice.
>
>This plan might not be so good from a software engineering point of
>view. Inevitably software authors would hard-code the known table,
>and then the software would fail ten years later with the first
>unexpected leap second.

Ten years later is a heck of a lot more acceptable than 7 months later.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-06 Thread Ashley Yakeley

On Jan 6, 2007, at 08:35, M. Warner Losh wrote:


So for the foreseeable future,
timestamps in OSes will be a count of seconds and a fractional second
part.  That's not going to change anytime soon even with faster
machines, more memory, etc.  Too many transaction processing
applications demand maximum speed.


That's sensible for a simple timestamp, but trying to squeeze in a
leap-second table probably isn't such a good idea.

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-06 Thread Ashley Yakeley

On Jan 6, 2007, at 11:36, Poul-Henning Kamp wrote:


B. i) Issue leapseconds with at least twenty times longer
notice.


This plan might not be so good from a software engineering point of
view. Inevitably software authors would hard-code the known table,
and then the software would fail ten years later with the first
unexpected leap second.

At least with the present system, programmers are (more) forced to
face the reality of the unpredictability of the time-scale.

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-06 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, Steve Allen writes:
>On Sat 2007-01-06T19:36:19 +, Poul-Henning Kamp hath writ:
>> There are two problems:
>>
>> 1. We get too short notice about leap-seconds.
>>
>> 2. POSIX and other standards cannot invent their UTC timescales.
>
>This is not fair, for there is a more fundamental problem:

Yes, this is perfectly fair; these are all the problems there are.

And furthermore, the two plans I outlined represent the only
two kinds of plans there are for solving this.

They can be varied for various sundry and unsundry purposes, such
as the "leap-hour" fig-leaf and similar, but there are only
two classes of solutions.

>No two clocks can ever stay in agreement.

This is not relevant.  It's not a matter of clock precision or
clock stability.  It's only a matter of how they count.

>Yes, there is a cost of doing time right, and leap seconds are not to
>blame for that cost.  They are a wake up call from the state of denial.

Now, it can be equally argued that leap seconds implement a state
of denial with respect to a particular lump of rock's ability as
timekeeper, so I suggest we keep that part of the discussion closed
for now.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-06 Thread Steve Allen
On Sat 2007-01-06T19:36:19 +, Poul-Henning Kamp hath writ:
> There are two problems:
>
> 1. We get too short notice about leap-seconds.
>
> 2. POSIX and other standards cannot invent their UTC timescales.

This is not fair, for there is a more fundamental problem:

No two clocks can ever stay in agreement.

And the question that POSIX time_t does not answer is:

What do you want to do about that?

In some applications, especially the one for which it was designed,
there is nothing wrong with POSIX time_t.  POSIX is just fine to
describe a clock which is manually reset as necessary to stay within
tolerance.

There are now other applications.
For some of those POSIX cannot do the job -- with or without leap seconds.

Yes, there is a cost of doing time right, and leap seconds are not to
blame for that cost.  They are a wake up call from the state of denial.

--
Steve Allen <[EMAIL PROTECTED]>WGS-84 (GPS)
UCO/Lick ObservatoryNatural Sciences II, Room 165Lat  +36.99858
University of CaliforniaVoice: +1 831 459 3046   Lng -122.06014
Santa Cruz, CA 95064http://www.ucolick.org/~sla/ Hgt +250 m


Re: Introduction of long term scheduling

2007-01-06 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, Rob Seaman writes:
>Warner Losh wrote:
>
>> leap seconds break that rule if one does things in UTC such that
>> the naive math just works
>
>POSIX time handling just sucks for no good reason.

I've said it before, and I'll say it again:

There are two problems:

1. We get too short notice about leap-seconds.

2. POSIX and other standards cannot invent their UTC timescales.

These two problems can be solved according to two plans:

A. Abolish leap seconds.

B. i) Issue leapseconds with at least twenty times longer notice.
   ii) Amend POSIX and/or ISO-C
   iii) Amend NTP
   iv) Convince all operating systems to adopt the new API
   v) Fix all the bugs in their implementations
   vi) Fix up all the relevant application code
   vii) Fix all the tacit assumptions about time_t.

I will fully agree that, while taking the much easier approach of
plan A will vindicate the potheads who wrote the time_t definition,
and thus deprive us of a very satisfactory intellectual reward of
striking their handiwork from the standards, it would cost only a
fraction of plan B.


Poul-Henning

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-06 Thread M. Warner Losh
In message: <[EMAIL PROTECTED]>
Rob Seaman <[EMAIL PROTECTED]> writes:
: Warner Losh wrote:
:
: > leap seconds break that rule if one does things in UTC such that
: > the naive math just works
:
: All civil timekeeping, and most precision timekeeping, requires only
: pretty naive math.  Whatever the problem is - or is not - with leap
: seconds, it isn't the arithmetic involved.  Take a look at [EMAIL PROTECTED]
: and other BOINC projects.  Modern computers have firepower to burn in
: fluff like live 3-D screensavers.  POSIX time handling just sucks for
: no good reason.  Other system interfaces successfully implement
: significantly more stringent facilities.

But modern servers and routers don't.  Anything that makes the math
harder (more computationally expensive) can have huge effects on
performance in these areas.  That's because the math is done so often
that any little change causes big headaches.

: Expecting to be able to "naively" subtract timestamps to compute an
: accurate interval reminds me of expecting to be able to naively stuff
: pointers into integer datatypes and have nothing ever go wrong.

Well, the kernel doesn't expect to be able to do that.  Internally,
all the FreeBSD kernel does is time based on a monotonically
increasing second count since boot.  When time is returned, it is
adjusted to the right wall time.  The kernel only worries about leap
seconds when time is incremented, since the ntpd portion in the kernel
needs to return special things during the leap second.  If there were
no leapseconds, then even that computation could be eliminated.  One
might think that one could 'defer' this work to gettimeofday and
friends, but that turns out to not be possible (or at least it is much
more inefficient to do it there).
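
A toy sketch of that split (the names are hypothetical, not actual
FreeBSD code): the kernel only ever advances a monotonic uptime
counter, and wall-clock answers are derived on demand by adding a
boot-time offset, which is the one value that gets stepped or slewed.

    #include <stdint.h>

    struct ts { int64_t sec; long nsec; };

    static struct ts boottime;  /* wall time at boot; adjusted by ntpd */
    static struct ts uptime;    /* monotonic; only ever incremented    */

    /* Monotonic clock: just the counter, immune to wall-time steps. */
    struct ts get_monotonic(void) { return uptime; }

    /* Wall clock: derived on demand, so adjusting wall time never
     * disturbs intervals measured against uptime. */
    struct ts get_walltime(void)
    {
        struct ts t = uptime;
        t.sec  += boottime.sec;
        t.nsec += boottime.nsec;
        if (t.nsec >= 1000000000L) { t.nsec -= 1000000000L; t.sec++; }
        return t;
    }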

Since the interface to the kernel is time_t, there's really no chance
for the kernel to do anything smarter with leapseconds.  gettimeofday,
time and clock_gettime all return a time_t in different flavors.

In short, you are taking things out of context and drawing the wrong
conclusion about what is done.  It is these complications, which I've
had to deal with over the past 7 years, that have led me to the
understanding of the complications.  Especially the 'non-uniform radix
crap' that's in UTC.  It really does complicate things in a number of
places that you wouldn't expect.  To dismissively suggest it is only a
problem when subtracting two numbers to get an interval time is to
completely misunderstand the complications that leapseconds introduce
into systems and the unexpected places where they pop up.  Really, it
is a lot more complicated than just the 'simple' case you've latched
onto.

: A
: strongly typed language might even overload the subtraction of UTC
: typed variables with the correct time-of-day to interval
: calculations.

Kernels aren't written in these languages.  To base one's arguments
about what the right type for time is on these languages is a
non-starter.

: But then, what should one expect the subtraction of
: Earth orientation values to return but some sort of angle, not an
: interval?

These are a specialized thing that kernels don't care about.

Warner


Re: Introduction of long term scheduling

2007-01-06 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, Tony Finch writes:
>On Sat, 6 Jan 2007, M. Warner Losh wrote:
>>
>> OSes usually deal with timestamps all the time for various things.  To
>> find out how much CPU to bill a process, to more mundane things.
>> Having to do all these gymnastics is going to hurt performance.
>
>That's why leap second handling should be done in userland as part of the
>conversion from clock (scalar) time to civil (broken-down) time.

I would agree with you in theory, but badly designed filesystems
like FAT store timestamps in encoded YMDHMS format, so the kernel
needs to know the trick as well. (There are other examples, but not
as well known).

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-06 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, "M. Warner Losh" writes:
>In message: <[EMAIL PROTECTED]>
>Ashley Yakeley <[EMAIL PROTECTED]> writes:

>OSes usually deal with timestamps all the time for various things.  To
>find out how much CPU to bill a process, to more mundane things.
>Having to do all these gymnastics is going to hurt performance.  One
>might scoff at this statement, but research into performance problems
>and issues has found time and again timekeeping and timestamps to have
>a surprisingly large impact.  So for the foreseeable future,
>timestamps in OSes will be a count of seconds and a fractional second
>part.  That's not going to change anytime soon even with faster
>machines, more memory, etc.  Too many transaction processing
>applications demand maximum speed.

I will agree with Warner here, but I will add the footnote that
since silicon pushers seem to be at a loss for how to gainfully
employ silicon these days, we are not particularly insistent on any
particular aspect of the timestamps, apart from them being cheap
to get, add, subtract and compare.

If the silicon designers want to build in support for
MMDDHHMMSS.mmmuuunnnppp
BCD encoded timestamps, as long as they provide us with cheap
instructions to carry out the above operations, we're happy.

I should caution any hopes, however, by mentioning that at this
time I have yet to see any CPU design getting a binary counter
running at a predictable rate right on the first try.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-06 Thread Rob Seaman

Warner Losh wrote:


leap seconds break that rule if one does things in UTC such that
the naive math just works


All civil timekeeping, and most precision timekeeping, requires only
pretty naive math.  Whatever the problem is - or is not - with leap
seconds, it isn't the arithmetic involved.  Take a look at [EMAIL PROTECTED]
and other BOINC projects.  Modern computers have firepower to burn in
fluff like live 3-D screensavers.  POSIX time handling just sucks for
no good reason.  Other system interfaces successfully implement
significantly more stringent facilities.

Expecting to be able to "naively" subtract timestamps to compute an
accurate interval reminds me of expecting to be able to naively stuff
pointers into integer datatypes and have nothing ever go wrong.  A
strongly typed language might even overload the subtraction of UTC
typed variables with the correct time-of-day to interval
calculations.  But then, what should one expect the subtraction of
Earth orientation values to return but some sort of angle, not an
interval?

Rob


Re: Introduction of long term scheduling

2007-01-06 Thread M. Warner Losh
In message: <[EMAIL PROTECTED]>
Tony Finch <[EMAIL PROTECTED]> writes:
: On Sat, 6 Jan 2007, M. Warner Losh wrote:
: >
: > OSes usually deal with timestamps all the time for various things.  To
: > find out how much CPU to bill a process, to more mundane things.
: > Having to do all these gymnastics is going to hurt performance.
:
: That's why leap second handling should be done in userland as part of the
: conversion from clock (scalar) time to civil (broken-down) time.

Right.  And that's what makes things hard: the kernel time
clock needs to be monotonic, and leap seconds break that rule if one
does things in UTC such that the naive math just works (aka POSIX
time_t).  Some systems punt on keeping POSIX time internally, but have
complications for getting leapseconds right for times they return to
userland.

Warner


Re: Introduction of long term scheduling

2007-01-06 Thread Tony Finch
On Sat, 6 Jan 2007, M. Warner Losh wrote:
>
> OSes usually deal with timestamps all the time for various things.  To
> find out how much CPU to bill a process, to more mundane things.
> Having to do all these gymnastics is going to hurt performance.

That's why leap second handling should be done in userland as part of the
conversion from clock (scalar) time to civil (broken-down) time.
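
A rough sketch of what that userland conversion might look like (the
table and function are hypothetical): a uniform scalar count is turned
into a UTC-like count by consulting a leap table, with a flag marking
the inserted second so a formatter can render it as 23:59:60 instead
of the following 00:00:00.

    #include <stdint.h>

    /* Hypothetical table: scalar instants at which a positive leap
     * second was inserted, in the same uniform count as the input. */
    static const int64_t leap_at[] = { 1230768032 /* example only */ };
    static const int n_leaps = sizeof leap_at / sizeof leap_at[0];

    /* Map a uniform scalar to a UTC-like scalar; *in_leap is set
     * during the inserted second itself. */
    int64_t uniform_to_utc(int64_t t, int *in_leap)
    {
        int64_t off = 0;
        *in_leap = 0;
        for (int i = 0; i < n_leaps; i++) {
            if (t > leap_at[i])
                off++;               /* past this leap: offset grows */
            else if (t == leap_at[i])
                *in_leap = 1;        /* the inserted :60 second */
        }
        return t - off;
    }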

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
SOUTHEAST ICELAND: SOUTHWEST BECOMING CYCLONIC 5 TO 7, PERHAPS GALE 8 LATER.
ROUGH TO HIGH. SQUALLY SHOWERS. MAINLY GOOD.


Re: Introduction of long term scheduling

2007-01-06 Thread M. Warner Losh
In message: <[EMAIL PROTECTED]>
Ashley Yakeley <[EMAIL PROTECTED]> writes:
: On Jan 5, 2007, at 20:14, Rob Seaman wrote:
:
: > An ISO string is really overkill, MJD can fit into
: > an unsigned short for the next few decades
:
: This isn't really a good idea. Most data formats have been moving
: away from the compact towards more verbose, from binary to text to
: XML. There are good reliability and extensibility reasons for this,
: such as avoiding bit-significance order issues and the ability to
: sanity-check it just by looking at it textually.

While all these date formats are mildly interesting, they are far too
inefficient for implementation by an OS.

OSes usually deal with timestamps all the time for various things.  To
find out how much CPU to bill a process, to more mundane things.
Having to do all these gymnastics is going to hurt performance.  One
might scoff at this statement, but research into performance problems
and issues has found time and again timekeeping and timestamps to have
a surprisingly large impact.  So for the foreseeable future,
timestamps in OSes will be a count of seconds and a fractional second
part.  That's not going to change anytime soon even with faster
machines, more memory, etc.  Too many transaction processing
applications demand maximum speed.

: As the author of a library that consumes leap-second tables, my ideal
: format would look something like this: a text file with first line
: for MJD of expiration date, and each subsequent line with the MJD of
: the start of the offset period, a tab, and then the UTC-TAI seconds
: difference.

This sounds like a trivial variation on the NIST format for leapsecond
data.

Warner


Re: Introduction of long term scheduling

2007-01-05 Thread Rob Seaman

Ashley Yakeley wrote:


As the author of a library that consumes leap-second tables, my ideal
format would look something like this: a text file with first line
for MJD of expiration date, and each subsequent line with the MJD of
the start of the offset period, a tab, and then the UTC-TAI seconds
difference.


As an author (and good gawd, an editor) of an XML standard and schema
to convey transient astronomical event alerts - including potentially
leap seconds - I'd have to presume that XML would do the trick.

The thread was a discussion of appending enough context to an
individual timestamp to avoid the need for providing historical leap
seconds table updates at all.  Someone else pointed out that this
didn't preserve the historical record.  I wanted to additionally
point out that the cost of appending the entire leap second table to
every timestamp would itself remain quite minimal for many years, and
further, that even getting rid of leap seconds doesn't remove the
requirement for conveying information equivalent to this table (on
some cadence to some precision).

The complications are inherent in the distinction between time-of-day
(Earth orientation) and interval time.  The intrinsic cost of
properly supporting both types of time is quite minimal.

Rob


Re: Introduction of long term scheduling

2007-01-05 Thread Ashley Yakeley

On Jan 5, 2007, at 20:14, Rob Seaman wrote:


An ISO string is really overkill, MJD can fit into
an unsigned short for the next few decades


This isn't really a good idea. Most data formats have been moving
away from the compact towards more verbose, from binary to text to
XML. There are good reliability and extensibility reasons for this,
such as avoiding bit-significance order issues and the ability to
sanity-check it just by looking at it textually.

As the author of a library that consumes leap-second tables, my ideal
format would look something like this: a text file with first line
for MJD of expiration date, and each subsequent line with the MJD of
the start of the offset period, a tab, and then the UTC-TAI seconds
difference. That said, my notion of UTC is restricted to the step-
wise bit after 1972, and others might want more information.
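
Rendered concretely, a table in that shape might look like the
following (tab-separated; the MJD values are illustrative, and the
first line is the expiry MJD):

    54832
    41317	-10
    ...
    53736	-33

Each data line says "from this MJD onward, UTC-TAI is this many
seconds", and a consumer treats the whole table as stale once the
current date passes the MJD on the first line.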

--
Ashley Yakeley


Re: Introduction of long term scheduling

2007-01-05 Thread Steve Allen
On Fri 2007-01-05T21:14:19 -0700, Rob Seaman hath writ:
> Which raises the question of how concisely one can express a leap
> second table.

Gosh, Rob, I remember toggling in the boot program and starting
up the paper tape reader or the 12-inch floppy disc drive, but now
I'm not really sure I understand the need for compactness except in
formats which are specific to devices with very limited capacity.
I routinely carry around 21 GB of rewriteable storage.  It's
hard to imagine that the current generation of GPS receivers
has less than 100 MB and I expect that by the time Galileo is
flying it will be routine for handheld devices to have GB.

I would much prefer to see the IERS produce a rather verbose,
self-describing (to a machine), and extensible set of data products.
Devices which prefer a more compact version are free to compile the
full form into something suitable and specific to their limited needs.
Most devices will be satisfied with only the leap second table.

A leap second table in a working format is just one form of the
"navigator's log" containing information for the conversion of the
ship's chronometer to and from other, more universal time scales.
Leap seconds are step functions, but in general the chronometer
offsets are likely to be splines of higher order.
That's something which might benefit from having a well-defined
API and a number of examples of code which uses the information
to varying degrees of accuracy.

Some devices will never have clocks guaranteed to be set to within a
second of real time, and for that purpose the POSIX time_t API is
just dandy.  Other applications with access to other time sources
will want to use algorithms of more sophistication according to
their individual needs.

--
Steve Allen <[EMAIL PROTECTED]>WGS-84 (GPS)
UCO/Lick ObservatoryNatural Sciences II, Room 165Lat  +36.99858
University of CaliforniaVoice: +1 831 459 3046   Lng -122.06014
Santa Cruz, CA 95064http://www.ucolick.org/~sla/ Hgt +250 m


Re: Introduction of long term scheduling

2007-01-05 Thread Rob Seaman

Tony Finch wrote:


you need to be able to manipulate representations of times other
than the present, so you need a full leap second table.


Which raises the question of how concisely one can express a leap
second table.  Leap second tables are simply a list of dates - in ISO
8601 or MJD formats, for example.  Additionally you need an
expiration date.  An ISO string is really overkill, MJD can fit into
an unsigned short for the next few decades - but this is really more
than you need for the current standard since not all MJDs are
permitted, only once per month.  Also, we don't need to express leap
seconds that are already known (or never existed), so there is a
useless bias of ~54000 days.  If we start counting months now, a
short integer will suffice to encode each leap second for the next
5000+ years - certainly past the point when monthly scheduling will
no longer suffice.
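
A quick check of the capacity arithmetic behind that claim:

    2^16 months = 65536 months; 65536 / 12 ~ 5461 years

so a 16-bit month count does indeed last 5000+ years.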

So, let's see - assume:

   1) all 20th century leap seconds can be statically linked
   2) start counting months at 2000-01-31

We're seeing about 7 leapseconds per decade on average, round up to
10 to allow for a few decades worth of quadratic acceleration (less
important for the next couple of centuries than geophysical noise).
So 100 short integers should suffice for the next century and a
kilobyte likely for the next 500 years.  Add one short for the
expiration date, and a zero short word for an end of record stopper
and distribute it as a variable length record - quite terse for the
next few decades.  The current table would be six bytes (suggest
network byte order):

   0042 003C 0000

A particular application only needs to read the first few entries it
doesn't already have cached - scan backwards through the list just
until you pass the previous expiration date.  Could elaborate with a
checksum, certificate based signature or other provenance - but these
apply whatever the representation.
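
A sketch of a reader for that record (assuming network byte order, the
month count starting at 2000-01-31, and the zero-word stopper as
described; the function names are made up):

    #include <stdint.h>
    #include <stdio.h>

    /* Pull a 16-bit word out of the buffer in network byte order. */
    static uint16_t be16(const uint8_t *p)
    {
        return (uint16_t)((p[0] << 8) | p[1]);
    }

    /* First word: expiration month.  Then one word per leap second,
     * terminated by a zero word. */
    void read_leap_table(const uint8_t *buf)
    {
        uint16_t expires = be16(buf);
        buf += 2;
        printf("table expires in month %u after 2000-01\n", expires);
        for (uint16_t m; (m = be16(buf)) != 0; buf += 2)
            printf("leap second at end of month %u\n", m);
    }

    int main(void)
    {
        /* The six-byte example table from the text. */
        const uint8_t table[] = { 0x00, 0x42, 0x00, 0x3C, 0x00, 0x00 };
        read_leap_table(table);
        return 0;
    }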

To emphasize a recent point:  DUT1 is currently negligible for many
applications.  Which is the same thing as saying that the simple
table of quantized leap seconds is quite sufficient for civil
purposes.  The effect of the ALHP is to inflate the importance of
DUT1 - not just for "professional" purposes, but for some list of
civil purposes that have yet to be inventoried, e.g., tide tables,
weather forecasts, pointing satellite dishes, aligning sundials (see
article in the Jan 2007 Smithsonian), navigation, aviation, amateur
astronomy, whatever.  I'm not arguing here that these are
intrinsically sufficient to justify retaining leap seconds (although
I believe this to be the case).  Rather, I'm arguing that even under
a "caves of steel" scenario of Homo sapiens inter-breeding with
Condylura cristata, that there will be applications that require an
explicit DUT1 correction - applications that currently can ignore
this step since UTC is guaranteed to remain within 0.9s of GMT.

So the current requirement is merely to convey a few extra bytes of
state with a six month update cadence.  This suffices to tie civil
epochs (and a useful approximation of Earth orientation) to civil
intervals.

The requirement in the post-leap-second Mad Max future, however,
would be to convey some similar data structure representing a table
of DUT1 tie points accurate to some level of precision with some as-
yet-unspecified cadencing requirement.  The most natural way to
express this might be the nearest round month to when each integral
step in DUT1 occurs, but it should be clear that the requirement for
maintaining and conveying a table of leap seconds is not eliminated,
but rather transmogrified into a similar requirement to maintain and
convey a table of DUT1 values.

Rob Seaman
NOAO


Re: Introduction of long term scheduling

2007-01-05 Thread Tony Finch
On Thu, 4 Jan 2007, Michael Deckers wrote:
>
>This leads me to my question: would it be helpful for POSIX implementors
>if each and every UTC timestamp came with the corresponding value of DTAI
>attached (instead of DUT1)? Would this even obviate the need for a leap
>seconds table?

No, because you need to be able to manipulate representations of times
other than the present, so you need a full leap second table. You might as
well distribute it with the time zone database because both are used by
the same component of the system and the leap second table changes more
slowly than the time zone database.

You don't need to transmit TAI-UTC with every timestamp: for example, NTP
and GPS transmit UTC offset tables and updates comparatively infrequently.

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
WIGHT PORTLAND PLYMOUTH: WEST 4 OR 5, BECOMING CYCLONIC 5 TO 7 FOR A TIME,
THEN NORTHWEST 5 OR 6 LATER. MODERATE OCCASIONALLY ROUGH IN PORTLAND AND
PLYMOUTH. OCCASIONAL RAIN OR DRIZZLE. GOOD OCCASIONALLY MODERATE OR POOR.


Re: Introduction of long term scheduling

2007-01-04 Thread Michael Deckers
   On 2007-01-03, Poul-Henning Kamp commented on Bulletin D 94:

>  That's an interesting piece of data in our endless discussions about
>  how important DUT1 really is...

   So it appears that DUT1, an approximation of UT1 - UTC, is not of much use,
   even though it is disseminated with many time signals. On the other hand,
   POSIX implementors need the values of DTAI = TAI - UTC, the count of leap
   seconds, at least for those UTC timestamps in the future as may occur
   during the operation of the system.

   This leads me to my question: would it be helpful for POSIX implementors
   if each and every UTC timestamp came with the corresponding value of DTAI
   attached (instead of DUT1)? Would this even obviate the need for a leap
   seconds table?

   I realise that this would require changes or extensions to the time
   interfaces of POSIX (eg, a "time_t" value alone could no longer encode a
   complete timestamp). My question is just whether such timestamps,
   indicating both UTC as time-of-day and TAI as "interval time", could
   be a viable alternative to the frequent updates of leap second tables.
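
   One hypothetical shape for such a composite timestamp (a sketch, not
   an existing POSIX type): carrying DTAI alongside the UTC value makes
   uniform interval arithmetic possible without consulting a local leap
   second table, since utc + dtai is a TAI-like count.

       #include <time.h>

       /* Hypothetical extended timestamp: civil UTC plus the TAI-UTC
        * offset (DTAI) in effect at that instant. */
       struct utc_tai_stamp {
           time_t utc;   /* POSIX-style UTC seconds  */
           int    dtai;  /* TAI - UTC, whole seconds */
       };

       /* Uniform interval between two stamps, leap seconds included. */
       double stamp_interval(struct utc_tai_stamp a, struct utc_tai_stamp b)
       {
           return difftime(b.utc, a.utc) + (double)(b.dtai - a.dtai);
       }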

   Michael Deckers


Re: Introduction of long term scheduling

2007-01-03 Thread Magnus Danielson
From: Tony Finch <[EMAIL PROTECTED]>
Subject: Re: [LEAPSECS] Introduction of long term scheduling
Date: Wed, 3 Jan 2007 17:38:35 +
Message-ID: <[EMAIL PROTECTED]>

> On Wed, 3 Jan 2007, Magnus Danielson wrote:
> >
> > Assuming you have corrected for another gravitational field, yes. The
> > current SI second indirectly assumes a certain gravitational force, which
> > is assumed to be "at sea level", whatever level that is.
>
> Wrong. The SI second is independent of your reference frame, and is
> defined according to Einstein's principle of equivalence.

Good point. Thanks for reminding me.

> What *does* depend on the gravitational potential at the geoid is TAI
> (and TT), since a timescale (unlike a fundamental unit) is relative to a
> reference frame.

When comparing two realizations of the SI second, compensation for the
difference in reference frames needs to be done. To build up TAI,
differences in gravitational potential do need to be compensated out.

> > We still depend on geophysics to some degree.
>
> Note that the standard relativistic transformations between TT, TCG, and
> TCB are (since 2000) independent of the geoid. So although the realization
> of these timescales is dependent on geophysics (because the atomic clocks
> they are ultimately based on are sited on the planet) the mathematical
> models try to avoid it.

Naturally.

Cheers,
Magnus


Re: Introduction of long term scheduling

2007-01-03 Thread Tony Finch
On Wed, 3 Jan 2007, Magnus Danielson wrote:
>
> Assuming you have corrected for another gravitational field, yes. The
> current SI second indirectly assumes a certain gravitational force, which
> is assumed to be "at sea level", whatever level that is.

Wrong. The SI second is independent of your reference frame, and is
defined according to Einstein's principle of equivalence. What *does*
depend on the gravitational potential at the geoid is TAI (and TT), since
a timescale (unlike a fundamental unit) is relative to a reference frame.

> We still depend on geophysics to some degree.

Note that the standard relativistic transformations between TT, TCG, and
TCB are (since 2000) independent of the geoid. So although the realization
of these timescales is dependent on geophysics (because the atomic clocks
they are ultimately based on are sited on the planet) the mathematical
models try to avoid it.

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
SOLE LUNDY FASTNET IRISH SEA: SOUTHWEST VEERING WEST OR NORTHWEST 7 TO SEVERE
GALE 9, LATER DECREASING 4 OR 5. ROUGH OR VERY ROUGH, OCCASIONALLY HIGH IN
WEST SOLE. RAIN THEN SCATTERED SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-03 Thread Rob Seaman

Peter Bunclark wrote:


Hang on a minute, statistically planets in the Solar System do not
have a large moon and yet are "upright"; for example Mars comes very
close to the conditions required to generate a leapseconds email
exploder.


I checked the DDA box on my AAS form, but nobody would mistake me for
a dynamical astronomer.  Folks will note that all my arguments focus
on the requirements for civil timekeeping, not on the many, various
and sundry technically distinct scales used for "professional" purposes.

I actually think all the nice folks at the alphabet soup of
international agencies have been doing a swell job of timekeeping
(even given occasional formatting anachronisms associated with
Bulletins C & D, for instance).  Those who are pushing the absurd
notion of leap hours simply need to be protected from themselves.  It
might also be nice if they called on experts in other technical
disciplines to help resolve these issues and to improve time
standards and infrastructure.

In any event, the requirements placed on technical timekeeping tend
to be simpler (if more rigorous) and thus are easier to meet.  Civil
timekeeping is the real challenge - just like creating functioning
legal and electoral systems for the masses, etc.

Your argument on this point is not with me, but with Peter Ward and
Donald Brownlee.  I heartily recommend their book, "Rare Earth:  Why
Complex Life is Uncommon in the Universe".  I was somewhat skeptical
about this, too.  As you point out, upright planets and satellites
appear not to be statistically uncommon.  Perhaps someone who knows
the literature in this area could provide some references?

Also note that the various effects (see the wikipedia "rare earth"
page) aren't separable.  An ocean appears to be needed for plate
tectonics, as may a large moon.  Planets or satellites orbiting near
to their primary will be tidally locked (or in interesting
resonances).  The ocean is of obvious importance to providing an
environment stable enough over the long term to nursemaid complex
organisms, but plate tectonics may be similarly important to buffer
atmospheric CO2 through sequestration via subduction.  The planet
should rotate under its star to provide even illumination and
heating.  Etc.

Rob


Re: Introduction of long term scheduling

2007-01-03 Thread Magnus Danielson
From: Poul-Henning Kamp <[EMAIL PROTECTED]>
Subject: Re: [LEAPSECS] Introduction of long term scheduling
Date: Wed, 3 Jan 2007 11:45:52 +
Message-ID: <[EMAIL PROTECTED]>

> In message <[EMAIL PROTECTED]>, Peter Bunclark writes:
>
> >> Without the Moon, the Earth could nod through large angles, lying on
> >> its side or perhaps even rotating retrograde every few million
> >> years.  Try making sense of timekeeping under such circumstances.
>
> You mean like taking a sequence of atomic seconds, counting them
> in a predictable way and being happy that timekeeping has nothing
> to do with geophysics?
>
> Yeah, I could live with that.

Assuming you have corrected for another gravitational field, yes. The current
SI second indirectly assumes a certain gravitational force, which is assumed
to be "at sea level", whatever level that is. Oh, should we move our Cesiums
up and down with the tides which the Moon arranges for us? Mother nature
provides so many nice modulators. :o)

We still depend on geophysics to some degree.

Now, if we could find the mass center of the universe, propel away a really
good atomic clock constellation and use that for our time reference, we should
be off to a good start. No?

> >Hang on a minute, statistically planets in the Solar System do not have a
> >large moon and yet are "upright"; for example Mars comes very close to the
> >conditions required to generate a leapseconds email exploder.
>
> As far as I know the atmosphere is far too cold for that :-)

No problem. With the heated discussions going on here it would be no problem
maintaining the temperature up. :o)

Cheers,
Magnus


Re: Introduction of long term scheduling

2007-01-03 Thread Peter Bunclark
On Wed, 3 Jan 2007, Poul-Henning Kamp wrote:
>
> >Hang on a minute, statistically planets in the Solar System do not have a
> >large moon and yet are "upright"; for example Mars comes very close to the
> >conditions required to generate a leapseconds email exploder.
>
> As far as I know the atmosphere is far too cold for that :-)

Similar to our polar regions where whales scoff krill all summer long!

A bit more mass -> bit more atmospheric pressure, and ok maybe a bit
closer to the Sun...

Of course, life may have flourished on Mars 3 billion years ago and then
the Martians introduced the leap hour and the rest is pre-history...

Pete.


Re: Introduction of long term scheduling

2007-01-03 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, Peter Bunclark writes:

>> Without the Moon, the Earth could nod through large angles, lying on
>> its side or perhaps even rotating retrograde every few million
>> years.  Try making sense of timekeeping under such circumstances.

You mean like taking a sequence of atomic seconds, counting them
in a predictable way and being happy that timekeeping has nothing
to do with geophysics?

Yeah, I could live with that.

>Hang on a minute, statistically planets in the Solar System do not have a
>large moon and yet are "upright"; for example Mars comes very close to the
>conditions required to generate a leapseconds email exploder.

As far as I know the atmosphere is far too cold for that :-)

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-03 Thread Peter Bunclark
On Tue, 2 Jan 2007, Rob Seaman wrote:

> Daniel R. Tobias replies to Poul-Henning Kamp:
>
> >> Has anybody calculated how much energy is required to change
> >> the Earth's rotation fast enough to make this rule relevant?
> >
> > Superman could do it.  Or perhaps he could nudge the Earth's rotation
> > just enough to make the length of a mean solar day exactly equal
> > 86,400 SI seconds.
>
> Only briefly.  Consider the LOD plots from
> http://www.ucolick.org/~sla/leapsecs/dutc.html.  The Earth wobbles
> like a top, varying its speed even if tidal slowing is ignored.
>
> Actually, rather than being merely a troublemaker, the Moon serves to
> stabilize the Earth's orientation.  The "Rare Earth Hypothesis" makes
> a strong case that a large Moon and other unlikely processes such as
> continental drift are required for multicellular life to evolve, in
> addition to the more familiar issues of a high system "metal" content
> and a stable planetary orbit at a distance permitting liquid water.
> Without the Moon, the Earth could nod through large angles, lying on
> its side or perhaps even rotating retrograde every few million
> years.  Try making sense of timekeeping under such circumstances.
>
> Rob Seaman
> NOAO

Hang on a minute, statistically planets in the Solar System do not have a
large moon and yet are "upright"; for example Mars comes very close to the
conditions required to generate a leapseconds email exploder.

Pete.


Re: Introduction of long term scheduling

2007-01-02 Thread Rob Seaman

Poul-Henning Kamp wrote:


That's an interesting piece of data in our endless discussions
about how important DUT1 really is...


The point is that by allowing it to grow without reasonable bound,
DUT1 would gain an importance it never had before.


Re: Introduction of long term scheduling

2007-01-02 Thread Rob Seaman

Daniel R. Tobias replies to Poul-Henning Kamp:


Has anybody calculated how much energy is required to change
the Earth's rotation fast enough to make this rule relevant?


Superman could do it.  Or perhaps he could nudge the Earth's rotation
just enough to make the length of a mean solar day exactly equal
86,400 SI seconds.


Only briefly.  Consider the LOD plots from
http://www.ucolick.org/~sla/leapsecs/dutc.html.  The Earth wobbles
like a top, varying its speed even if tidal slowing is ignored.

Actually, rather than being merely a troublemaker, the Moon serves to
stabilize the Earth's orientation.  The "Rare Earth Hypothesis" makes
a strong case that a large Moon and other unlikely processes such as
continental drift are required for multicellular life to evolve, in
addition to the more familiar issues of a high system "metal" content
and a stable planetary orbit at a distance permitting liquid water.
Without the Moon, the Earth could nod through large angles, lying on
its side or perhaps even rotating retrograde every few million
years.  Try making sense of timekeeping under such circumstances.

Rob Seaman
NOAO


Re: Introduction of long term scheduling

2007-01-02 Thread Daniel R. Tobias
On 2 Jan 2007 at 19:40, Poul-Henning Kamp wrote:

> Has anybody calculated how much energy is required to change
> the Earth's rotation fast enough to make this rule relevant?

Superman could do it.  Or perhaps he could nudge the Earth's rotation
just enough to make the length of a mean solar day exactly equal
86,400 SI seconds.

--
== Dan ==
Dan's Mail Format Site: http://mailformat.dan.info/
Dan's Web Tips: http://webtips.dan.info/
Dan's Domain Site: http://domains.dan.info/


Re: Introduction of long term scheduling

2007-01-02 Thread James Maynard

Ed Davies wrote:


Still, it's a strange assumption, given that TF.460 allows, I
understand, leaps at the end of any month.  Unofficially, the
wording seems to be:


A positive or negative leap-second should be the last second
of a UTC month, but first preference should be given to the end
of December and June, and second preference to the end of March
and September.


Anybody got access to a proper copy and can say whether that's
right or not?  If it is right then the Wikipedia article on leap
seconds needs fixing.



The text you quoted is taken exactly from ITU-R Recommendation TF.460-4,
Annex I ("Time Scales"), paragraph D ("DUT1"), sub-paragraph 2
("Leap-seconds"):

2.1   A positive or negative leap-second should be the last second of
a UTC month, but first preference should be given to the end of
December and June, and second preference to the end of March
and September.

2.2   A positive leap-second begins at 23h 59m 60s and ends at 0h 0m 0s
of the first day of the following month. In the case of a negative
leap-second, 23h 59m 58s will be followed one second later by 0h 0m 0s
of the first day of the following month (see Annex III).

2.3   The IERS should decide upon and announce the introduction of a
leap-second, such announcement to be made at least eight weeks in advance.


--
James Maynard, K7KK
Salem, Oregon, USA


Re: Introduction of long term scheduling

2007-01-02 Thread Zefram
Warner Losh wrote:
> Right now DUT1 is
>+0.0s until further notice.  From the last few B's, it looks like this
>is decreasing at about 300ms per year.  This suggests that the next
>leap second will be end of 2008.

The way DUT1 is behaving at the moment, it looks like an ideal time for
IERS to experiment with scheduling further ahead.  It should be easy
to commit today to having no leap second up to and including 2007-12,
as a first step.  Well, we can hope.

-zefram


Re: Introduction of long term scheduling

2007-01-02 Thread Ed Davies

Warner Losh wrote:

The IERS Bulletin C is a little different from the ITU TF.460:


Leap seconds can be introduced in UTC at the end of the months of  December
or June,  depending on the evolution of UT1-TAI. Bulletin C is mailed every
six months, either to announce a time step in UTC, or to confirm that there
will be no time step at the next possible date.


Unfortunately, these IERS bulletins are dreadfully badly worded and
seem to assume current practice rather than fully defining what they
mean.  E.g., Bulletin C 32, dated 19 July 2006

  http://hpiers.obspm.fr/iers/bul/bulc/bulletinc.dat

says:


NO positive leap second will be introduced at the end of December 2006.


So we still don't know officially if there was a negative leap second
then and we still don't officially know if there will be a leap second
at the end of this month.

  http://hpiers.obspm.fr/iers/bul/bulc/BULLETINC.GUIDE

says:


UTC is defined by the CCIR Recommendation 460-4 (1986). It differs
from TAI by an integral number of seconds, in such a way that UT1-UTC stays
smaller than 0.9s in absolute value. The decision to introduce a leap second
in UTC to meet this condition is the responsability of the IERS. According to
the CCIR Recommendation, first preference is given to the opportunities at the
end of December and June,and second preference to those at the end of March
and September. Since the system was introduced in 1972 only dates in June and
December have been used.


Again, this is the truth but not the whole truth as it doesn't mention
the third preference opportunities at the ends of other months - but
it'll be a while until those are needed.

(Also, they can't spell "responsibility" :-)

Ed.


Re: Introduction of long term scheduling

2007-01-02 Thread M. Warner Losh
In message: <[EMAIL PROTECTED]>
John Cowan <[EMAIL PROTECTED]> writes:
: Warner Losh scripsit:
:
: > There's no provision for emergency leap seconds.  They just have to be
: > at the end of the month, and announced 8 weeks in advance.  IERS has
: > actually exceeded this mandate by announcing them ~24 weeks in advance
: > in recent history.
:
: So much the worse.  That means that if the Earth hiccups on March 7, the
: value of |DUT1| will not return to normal until May 31.

Yes.  But a change in angular momentum that large would likely mean
that |DUT1| being a little too large would be the least of our
worries.

The earthquake that hit Indonesia last year changed the time of day by
microseconds.  What would cause a sudden jump of hundreds of
milliseconds hurts my brain to contemplate.
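
For scale, a quick sketch (the ~2.7 microsecond figure for the Sumatra
quake is the published geophysical estimate as I recall it, quoted
from memory):

    # Even a permanent LOD change of a few microseconds per day takes
    # about a century to move DUT1 by a tenth of a second.
    lod_change = 2.7e-6               # s/day, assumed figure
    days = 0.1 / lod_change
    print("%.0f days (~%.0f years)" % (days, days / 365.25))
    # -> 37037 days (~101 years)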

Warner


Re: Introduction of long term scheduling

2007-01-02 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, Tony Finch writes:
>On Tue, 2 Jan 2007, Warner Losh wrote:
>>
>> Curiously, BIH is currently, at least in the document I have, expected
>> to predict what the value of DUT1 is to .1 second at least a month in
>> advance so that frequency standard broadcasts can prepare for changes
>> of this value a month in advance.  There's an exception for IERS to
>> step in two weeks in advance if the earth's rotation rate hiccups.
>
>I was amused by the dates in
>http://hpiers.obspm.fr/eoppc/bul/buld/bulletind.94

That's an interesting piece of data in our endless discussions about
how important DUT1 really is...

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-02 Thread Tony Finch
On Tue, 2 Jan 2007, Warner Losh wrote:
>
> Curiously, BIH is currently, at least in the document I have, expected
> to predict what the value of DUT1 is to .1 second at least a month in
> advance so that frequency standard broadcasts can prepare for changes
> of this value a month in advance.  There's an exception for IERS to
> step in two weeks in advance if the earth's rotation rate hiccups.

I was amused by the dates in
http://hpiers.obspm.fr/eoppc/bul/buld/bulletind.94

Tony.
--
f.a.n.finch  <[EMAIL PROTECTED]>  http://dotat.at/
BAILEY: SOUTHERLY VEERING WESTERLY 6 TO GALE 8, PERHAPS SEVERE GALE 9 LATER.
VERY ROUGH OR HIGH. RAIN OR SHOWERS. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-02 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, John Cowan writes:
>Warner Losh scripsit:
>
>> There's no provision for emergency leap seconds.  They just have to be
>> at the end of the month, and announced 8 weeks in advance.  IERS has
>> actually exceeded this mandate by announcing them ~24 weeks in advance
>> in recent history.
>
>So much the worse.  That means that if the Earth hiccups on March 7, the
>value of |DUT1| will not return to normal until May 31.

Given the angular momentum required for such a hiccup, I think we would
have more prominent problems than DUT1>1.0s

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-02 Thread John Cowan
Warner Losh scripsit:

> There's no provision for emergency leap seconds.  They just have to be
> at the end of the month, and announced 8 weeks in advance.  IERS has
> actually exceeded this mandate by announcing them ~24 weeks in advance
> in recent history.

So much the worse.  That means that if the Earth hiccups on March 7, the
value of |DUT1| will not return to normal until May 31.

--
John Cowan[EMAIL PROTECTED]http://ccil.org/~cowan
The whole of Gaul is quartered into three halves.
-- Julius Caesar


Re: Introduction of long term scheduling

2007-01-02 Thread Warner Losh
> Warner Losh scripsit:
>
> > There's an exception for IERS to
> > step in two weeks in advance if the earth's rotation rate hiccups.
>
> So if I understand this correctly, there could be as many as 14
> consecutive days during which |DUT1| > 0.9s before the emergency leap
> second can be implemented; consequently, the current guarantee is only
> statistical, not absolute.

I think I understand differently.  BIH says on Jan 1 that the
February value of DUT1 is 0.2 s.  If the earth hiccups, IERS can step
in by Jan 15th and say, no, the real correct value is 0.3 s.

There's no provision for emergency leap seconds.  They just have to be
at the end of the month, and announced 8 weeks in advance.  IERS has
actually exceeded this mandate by announcing them ~24 weeks in advance
in recent history.

The IERS bulletin C is a little different than the ITU TF.460:

>>Leap seconds can be introduced in UTC at the end of the months of  December
>>or June,  depending on the evolution of UT1-TAI. Bulletin C is mailed every
>>six months, either to announce a time step in UTC, or to confirm that there
>>will be no time step at the next possible date.

IERS is issuing Bulletin D as needed.  The latest one can be found at
ftp://hpiers.obspm.fr/iers/bul/buld/bulletind.dat .  Right now DUT1 is
+0.0s until further notice.  From the last few D's, it looks like this
is decreasing at about 300ms per year.  This suggests that the next
leap second will be at the end of 2008.
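
A minimal sketch of that extrapolation (the starting values and the
0.6 s trigger are my assumptions, not anything IERS publishes):

    # Linear extrapolation of DUT1, stepping by half-year June/December
    # opportunities until it drifts past an assumed 0.6 s trigger.
    dut1, rate, year = 0.0, -0.3, 2007.0
    while abs(dut1) < 0.6:
        year += 0.5                 # next June/December opportunity
        dut1 += rate * 0.5
    print(year, round(dut1, 2))
    # -> 2009.0 -0.6, i.e. a positive leap second at the end of
    #    December 2008.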

Warner


Re: Introduction of long term scheduling

2007-01-02 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, John Cowan writes:
>Warner Losh scripsit:
>
>> There's an exception for IERS to
>> step in two weeks in advance if the earth's rotation rate hiccups.
>
>So if I understand this correctly, there could be as many as 14
>consecutive days during which |DUT1| > 0.9s before the emergency leap
>second can be implemented; consequently, the current guarantee is only
>statistical, not absolute.

But is it physically relevant?

Has anybody calculated how much energy is required to change
the Earth's rotation fast enough to make this rule relevant?
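
A back-of-the-envelope answer, assuming the textbook moment of inertia
of the Earth and the usual TNT conversion (both round figures of mine,
nothing official):

    import math

    I = 8.0e37                     # kg m^2, Earth's moment of inertia
    T = 86400.0                    # s, nominal day
    dT = 0.9 / 14                  # ~64 ms/day excess LOD needed to
                                   # push |DUT1| past 0.9 s in 14 days
    E = 0.5 * I * (2 * math.pi / T) ** 2   # rotational energy ~2.1e29 J
    dE = 2 * E * dT / T            # since E scales as T^-2
    print("%.1e J, roughly %.0e megatons of TNT" % (dE, dE / 4.2e15))
    # -> 3.1e+23 J, roughly 7e+07 megatons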

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-02 Thread John Cowan
Warner Losh scripsit:

> There's an exception for IERS to
> step in two weeks in advance if the earth's rotation rate hiccups.

So if I understand this correctly, there could be as many as 14
consecutive days during which |DUT1| > 0.9s before the emergency leap
second can be implemented; consequently, the current guarantee is only
statistical, not absolute.

--
John Cowan  http://www.ccil.org/~cowan  [EMAIL PROTECTED]
"After all, would you consider a man without honor wealthy, even if his
Dinar laid end to end would reach from here to the Temple of Toplat?"
"No, I wouldn't", the beggar replied.  "Why is that?" the Master asked.
"A Dinar doesn't go very far these days, Master.--Kehlog Albran
Besides, the Temple of Toplat is across the street."  The Profit


Re: Introduction of long term scheduling

2007-01-02 Thread Warner Losh
> Still, it's a strange assumption, given that TF.460 allows, I
> understand, leaps at the end of any month.  Unofficially, the
> wording seems to be:
>
> > A positive or negative leap-second should be the last second
> > of a UTC month, but first preference should be given to the end
> > of December and June, and second preference to the end of March
> > and September.
>
> Anybody got access to a proper copy and can say whether that's
> right or not?  If it is right then the Wikipedia article on leap
> seconds needs fixing.

The above is a direct quote from ITU-R-TF.460-4, annex I, section
D.2.1.

Section D.2.3 is the one that many people here would like to change in
some way, usually the time period:

"The IERS should decide upon and annoucne the introduction of a
leap-second, such an announcement to be made at least eight weeks in
advance."

Which many people would like to see extended to a number of years.  Heck,
a simple step would be to announce every 6 months the leap seconds for
a year, but I'll bet even that step would be politically difficult.

Curiously, BIH is currently, at least in the document I have, expected
to predict what the value of DUT1 is to .1 second at least a month in
advance so that frequency standard broadcasts can prepare for changes
of this value a month in advance.  There's an exception for IERS to
step in two weeks in advance if the earth's rotation rate hiccups.

Warner


Re: Introduction of long term scheduling

2007-01-02 Thread Steve Allen
On Tue 2007-01-02T18:36:45 +, Ed Davies hath writ:
> >A positive or negative leap-second should be the last second
> >of a UTC month, but first preference should be given to the end
> >of December and June, and second preference to the end of March
> >and September.
>
> Anybody got access to a proper copy and can say whether that's
> right or not?  If it is right then the Wikipedia article on leap
> seconds needs fixing.

That's a direct quote.

--
Steve Allen <[EMAIL PROTECTED]>WGS-84 (GPS)
UCO/Lick ObservatoryNatural Sciences II, Room 165Lat  +36.99858
University of CaliforniaVoice: +1 831 459 3046   Lng -122.06014
Santa Cruz, CA 95064http://www.ucolick.org/~sla/ Hgt +250 m


Re: Introduction of long term scheduling

2007-01-02 Thread Ed Davies

Steve Allen wrote:

On Mon 2007-01-01T21:19:04 +, Ed Davies hath writ:

Why does the "One sec at predicted intervals" line suddenly
diverge in the early 2500's when the other lines seem to just
be expanding in a sensible way?

...
I suspect that the divergence of the one line indicates that the LOD
has become long enough that 1 s can no longer keep up with the
divergence using whatever predicted interval he chose.  I suspect that
the chosen interval was every three months, for it is in about the
year 2500 that the LOD will require 4 leap seconds per year.


Yes, that makes sense.  I worked out what LOD increases he'd have
to be assuming for one- or six-monthly leaps and neither seemed right.
Should have realised that it was in between.

Still, it's a strange assumption, given that TF.460 allows, I
understand, leaps at the end of any month.  Unofficially, the
wording seems to be:


A positive or negative leap-second should be the last second
of a UTC month, but first preference should be given to the end
of December and June, and second preference to the end of March
and September.


Anybody got access to a proper copy and can say whether that's
right or not?  If it is right then the Wikipedia article on leap
seconds needs fixing.


As for the other questions, McCarthy had been producing versions of this
plot since around 1999, but the published record of them is largely
in PowerPoint.  Dr. Tufte has provided postmortems of both  Challenger
and Columbia as testaments to how little that medium conveys.


Indeed, this slide hasn't got us much closer to understanding the
original problem, namely: what is the maximum error likely to be over
a decade?

Ed.


Re: Introduction of long term scheduling

2007-01-01 Thread Steve Allen
On Mon 2007-01-01T21:19:04 +, Ed Davies hath writ:
> Why does the "One sec at predicted intervals" line suddenly
> diverge in the early 2500's when the other lines seem to just
> be expanding in a sensible way?

Upon looking closer I see a 200 year periodicity in the plot.
I begin to suspect that rather than run a pseudorandom sequence of LOD
based on the power spectrum he instead took the past 2 centuries of
LOD variation around the linear trend and just kept repeating those
variations added to an ongoing linear trend.

I suspect that the divergence of the one line indicates that the LOD
has become long enough that 1 s can no longer keep up with the
divergence using whatever predicted interval he chose.  I suspect that
the chosen interval was every three months, for it is in about the
year 2500 that the LOD will require 4 leap seconds per year.
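
A quick sanity check on that figure (the ~1.7 ms/day/century secular
trend and the 1820 zero epoch are the usual textbook values, not
something from McCarthy's slide); since the annual rate grows
linearly, the accumulated count grows quadratically:

    def leap_seconds_per_year(year, rate_ms_per_cy=1.7, epoch=1820):
        excess_ms = rate_ms_per_cy * (year - epoch) / 100.0  # ms/day
        return excess_ms * 365.25 / 1000.0                   # s/year

    print(round(leap_seconds_per_year(2500), 1))   # -> 4.2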

As for the other questions, McCarthy had been producing versions of this
plot since around 1999, but the published record of them is largely
in PowerPoint.  Dr. Tufte has provided postmortems of both  Challenger
and Columbia as testaments to how little that medium conveys.

--
Steve Allen <[EMAIL PROTECTED]>WGS-84 (GPS)
UCO/Lick ObservatoryNatural Sciences II, Room 165Lat  +36.99858
University of CaliforniaVoice: +1 831 459 3046   Lng -122.06014
Santa Cruz, CA 95064http://www.ucolick.org/~sla/ Hgt +250 m


Re: Introduction of long term scheduling

2007-01-01 Thread Greg Hennessy

Why does the "One sec at predicted intervals" line suddenly
diverge in the early 2500's when the other lines seem to just
be expanding in a sensible way?


As time goes on we'll need a quadratically increasing cumulative
number of leap seconds (the annual rate grows linearly).  The single
leap sec at predicted intervals cannot handle that; the other two
lines allow for an arbitrary number of leap seconds at specified
times.


Re: Introduction of long term scheduling

2007-01-01 Thread Ed Davies

Steve Allen wrote:

On Mon 2007-01-01T17:42:11 +, Ed Davies hath writ:

Sorry, maybe I'm being thick but, why?  Surely the IERS could announce
all the leap seconds in 2007 through 2016 inclusive this week then
those for 2017 just before the end of this year, and so on.  We'd have
immediate 10 year scheduling.


For reasons never explained publicly this notion was shot down very
early in the process of the WP7A SRG.  It would almost certainly
exceed the current 0.9 s limit, and in so doing it would violate the
letter of ITU-R TF.460.


Yes, I was assuming exceeding the 0.9 s limit, as I'm sure the rest
of my message made clear.  We are discussing this as an alternative
to, for all intents and purposes, scrapping leaps altogether and
blowing the limit for all time, so I don't see this as a problem.
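
If IERS did announce a decade at a time, consumers would simply carry
a longer table; a minimal sketch of the lookup (entries after 2005-12
are hypothetical, and real code would take TAI-UTC from Bulletin C):

    # (month after which a leap occurs, TAI-UTC in effect from then on)
    LEAP_TABLE = [
        ("2005-12", 33),
        ("2008-12", 34),   # hypothetical
        ("2012-06", 35),   # hypothetical
    ]

    def tai_minus_utc(utc_month):
        """TAI-UTC in seconds for a 'YYYY-MM' UTC month."""
        offset = 32                    # value in effect before 2006-01
        for month_end, new_offset in LEAP_TABLE:
            if utc_month > month_end:  # string compare works for YYYY-MM
                offset = new_offset
        return offset

    print(tai_minus_utc("2007-01"))    # -> 33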

Ed.


Re: Introduction of long term scheduling

2007-01-01 Thread Ed Davies

Poul-Henning Kamp wrote:

If you have subtle point, I'd love to hear it.


Not even close to a subtle point, I simply cannot figure out what the
graph shows...


Me too.  Is this an analysis or a simulation?  What are the
assumptions?  What "predicted intervals" does he mean?

The bullet points above are very confusing as well.

What does "large discontinuities possible" mean?  Ignoring
any quibble about the use of the word "discontinuities",
does he mean more than one leap second at a particular event?
Why would anybody want to do that? - at least before we're
getting to daily leap seconds which is well off to the right
of his graph (50 000 years, or so, I think).

Why does the "One sec at predicted intervals" line suddenly
diverge in the early 2500's when the other lines seem to just
be expanding in a sensible way?

Ed.


Re: Introduction of long term scheduling

2007-01-01 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, Magnus Danielson writes:
>From: Poul-Henning Kamp <[EMAIL PROTECTED]>
>Subject: Re: [LEAPSECS] Introduction of long term scheduling
>Date: Mon, 1 Jan 2007 19:29:19 +
>Message-ID: <[EMAIL PROTECTED]>
>
>Poul-Henning,
>
>> In message <[EMAIL PROTECTED]>, Steve Allen writes:
>>
>> >McCarthy pretty much answered this question in 2001 as I reiterate here
>> >http://www.ucolick.org/~sla/leapsecs/McCarthy.html
>>
>> What exactly is the Y axis on this graph ?
>
>Unless you have a subtle point, I interpret it to be in seconds even if they
>are incorrectly indicated (s or seconds instead of sec would have been
>correct).
>
>If you have subtle point, I'd love to hear it.

Not even close to a subtle point, I simply cannot figure out what the
graph shows...

The sawtooth corresponding to the prediction interval raises a big red
flag for me as to the graph's applicability to reality.

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.


Re: Introduction of long term scheduling

2007-01-01 Thread Poul-Henning Kamp
In message <[EMAIL PROTECTED]>, Steve Allen writes:

>One could say that it was never possible for the BIH/IERS to guarantee
>that its leap second scheduling could meet the 0.7 s and then later
>0.9 s specification because they could not be held responsible for
>things that the earth might do.  As such the IERS could conceivably
>start unilaterally issuing full decade scheduling of leap seconds and
>claim that it *was* acting in strict conformance with ITU-R TF.460.

Considering that ITU has no power over IERS, IERS is only bound
by the letter of TF.460 as far as they have voluntarily promised
to be, and consequently, they could just send a letter to ITU
and say "we'll do it this way from MMDD, if you disagree,
then figure something else out."

--
Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED] | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

