Re: Introduction of long term scheduling

2007-01-08 Thread Clive D.W. Feather
Rob Seaman said:
 Which raises the question of how concisely one can express a leap
 second table.

Firstly, I agree with Steve when he asks "why bother?". You're solving the
wrong problem.

However, having said that:

 So, let's see - assume:
 1) all 20th century leap seconds can be statically linked
 2) start counting months at 2000-01-31
 We're seeing about 7 leap seconds per decade on average, round up to
 10 to allow for a few decades worth of quadratic acceleration (less
 important for the next couple of centuries than geophysical noise).
 So 100 short integers should suffice for the next century and a
 kilobyte likely for the next 500 years.  Add one short for the
 expiration date, and a zero short word for an end of record stopper
 and distribute it as a variable length record - quite terse for the
 next few decades.  The current table would be six bytes (suggest
 network byte order):

0042 003C 

That's far too verbose a format.

Firstly, once you've seen the value 003C, you know all subsequent values
will be greater. So why not delta encode them (i.e. each entry is the
number of months since the previous leap second)? If you assume that leap
seconds will be no more than 255 months apart, then you only need one byte
per leap second. But you don't even need that assumption: a value of 255
can mean 255 months without a leap second (I'm assuming we're reserving 0
for end-of-list).
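
For concreteness, here's a minimal C sketch of that delta encoding (a
hypothetical helper, not from any existing library; it ignores the expiry
date and assumes the month-count convention above):

    #include <stdint.h>
    #include <stddef.h>

    /*
     * Sketch of the delta encoding described above.  Input: leap-second
     * dates as month counts from the chosen epoch, in ascending order.
     * Output bytes:
     *   1..254 = leap second this many months after the previous entry
     *   255    = 255 months elapsed with no leap second
     *   0      = end of list
     * Gaps that are an exact multiple of 255 months are not representable
     * in this scheme as stated; the sketch assumes they never occur.
     */
    static size_t delta_encode(const uint16_t *months, size_t n,
                               uint8_t *out, size_t outlen)
    {
        size_t o = 0;
        uint16_t prev = 0;

        for (size_t i = 0; i < n && o < outlen; i++) {
            uint16_t gap = months[i] - prev;

            while (gap > 255 && o < outlen) {   /* long quiet stretch */
                out[o++] = 255;
                gap -= 255;
            }
            if (o < outlen)
                out[o++] = (uint8_t)gap;        /* the leap second itself */
            prev = months[i];
        }
        if (o < outlen)
            out[o++] = 0;                       /* end-of-list marker */
        return o;
    }

With the 2000-01 base, the single leap second at month 60 would encode as
the byte 3C followed by the terminating 00.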

But we can do better. At present leap seconds come at 6-month boundaries.
So let's encode using 4-bit codons:

* Start with the unit size being 6 months.
* A codon of N (1 to 15) means the next leap second is N units after the
  previous one.
* A codon of 0 is followed by a second codon:
  - 1, 3, 6, or 12 sets the unit size;
  - 0 means the next item is the expiry date, after which the list ends
  (this assumes the expiry is after the last leap second; I wasn't
  clear if you expect that always to be the case);
  - 15 means 15 units without a leap second;
  - other values are reserved for future expansion.

So the present table is A001. Two bytes instead of six.

If we used 1980 as the base instead of 2000, the table would be:

3224 5423 2233 3E00 1x

where the last byte can have any value for the last 4 bits.
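
A rough C sketch of a decoder for this codon scheme (entirely hypothetical;
it assumes codons are packed two per byte, high nibble first, and leaves the
encoding of the expiry date itself open, as above):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /*
     * Decode a stream of 4-bit codons as described above and print the
     * month offset (from the chosen base date) of each leap second.
     */
    static void decode_codons(const uint8_t *buf, size_t len)
    {
        unsigned unit = 6;      /* start with 6-month units */
        unsigned month = 0;     /* months since the base date */
        int escaped = 0;        /* saw a 0 codon; next codon is special */

        for (size_t i = 0; i < 2 * len; i++) {
            unsigned c = (i & 1) ? (buf[i / 2] & 0x0F) : (buf[i / 2] >> 4);

            if (!escaped) {
                if (c == 0) {
                    escaped = 1;
                } else {
                    month += c * unit;
                    printf("leap second at month %u\n", month);
                }
            } else {
                escaped = 0;
                switch (c) {
                case 1: case 3: case 6: case 12:
                    unit = c;               /* change the unit size */
                    break;
                case 15:
                    month += 15 * unit;     /* 15 units, no leap second */
                    break;
                case 0:
                    printf("expiry date follows; end of list\n");
                    return;
                default:
                    /* reserved for future expansion */
                    break;
                }
            }
        }
    }

Feeding it the two bytes A0 01 reports one leap second 60 months after the
base date and then stops at the expiry marker.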

I'm sure that some real thought could compress the data even more; based on
leap second history, 3-bit codons would probably be better than 4-bit ones.

--
Clive D.W. Feather  | Work:  [EMAIL PROTECTED]   | Tel:+44 20 8495 6138
Internet Expert | Home:  [EMAIL PROTECTED]  | Fax:+44 870 051 9937
Demon Internet  | WWW: http://www.davros.org | Mobile: +44 7973 377646
THUS plc||


Re: Introduction of long term scheduling

2007-01-08 Thread Zefram
Poul-Henning Kamp wrote:
We certainly don't want to transmit the leap-second table with every
single NTP packet, because, as a result, we would need to examine
it every time to see if something changed.

Once we've got an up-to-date table, barring faults, we only need to check
to see whether the table has been extended further into the future.
If we put the expiry date first in the packet then it will usually take
just a couple of machine instructions to see that there's no new data.
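
A sketch of that fast path in C, assuming a hypothetical layout in which the
leap-table extension field starts with the table's expiry date (the struct
and field names are illustrative, not from any real NTP implementation):

    #include <stdint.h>

    /*
     * Hypothetical extension-field header: the first 32 bits carry the
     * expiry date of the advertised leap-second table (byte-order
     * conversion omitted for brevity).
     */
    struct leap_table_hdr {
        uint32_t expiry;        /* expiry date of the advertised table */
        /* ... rest of the table follows ... */
    };

    static int table_may_be_newer(const struct leap_table_hdr *pkt,
                                  uint32_t our_expiry)
    {
        /* Usually false, so the common case is one compare and a branch. */
        return pkt->expiry > our_expiry;
    }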

If an erroneous table is distributed, we want to pick up corrections
eventually, but we don't have to check every packet for that.  Not that
it would be awfully expensive to do so, anyway.

Furthermore, you will not get around a strong signature on the
leap-second table, because if anyone can inject a leap-second table
on the internet, there is no end to how much fun they could have.

This issue applies generally with time synchronisation, does it not?
NTP has authentication mechanisms.

-zefram


Re: Introduction of long term scheduling

2007-01-08 Thread Tony Finch
On Mon, 8 Jan 2007, Zefram wrote:

 Possibly TT could also be used in some form, for interval calculations
 in the pre-caesium age.

In that case you'd need a model (probably involving rubber seconds) of the
TT-UT translation. It doesn't seem worth doing to me because of the
small number of applications that care about that level of precision that
far in the past.

The main requirement for a proleptic timescale is that it is useful for
most practical purposes. Therefore it should not be excessively
complicated, such as requiring a substantially different implementation of
time in the past to time in the present. What we actually did in the past
was make a smooth(ish) transition from universal time to atomic time, so
it would seem reasonable to implement (a simplified version of) that in
our systems. In practice this means saying that we couldn't tell the
difference between universal time and uniform time before a certain date,
which we model as a leap second offset of zero.
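
A small C sketch of that rule (illustrative only; the cutoff date and the
table format are assumptions, not anything standardised): before the cutoff
the leap-second offset is defined to be zero, afterwards it comes from an
ordinary table lookup.

    #include <stdint.h>
    #include <stddef.h>

    struct leap_entry {
        int64_t when;       /* seconds since the epoch of the uniform scale */
        int     offset;     /* cumulative leap-second offset from then on */
    };

    static int leap_offset(int64_t t, int64_t cutoff,
                           const struct leap_entry *tab, size_t n)
    {
        int off = 0;

        if (t < cutoff)
            return 0;                   /* pre-cutoff: modelled as zero */

        for (size_t i = 0; i < n && tab[i].when <= t; i++)
            off = tab[i].offset;        /* last entry at or before t wins */

        return off;
    }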

Tony.
--
f.a.n.finch  [EMAIL PROTECTED]  http://dotat.at/
BAILEY: SOUTHWEST 5 TO 7 BECOMING VARIABLE 4. ROUGH OR VERY ROUGH. SHOWERS,
RAIN LATER. MODERATE OR GOOD.


Re: Introduction of long term scheduling

2007-01-08 Thread M. Warner Losh
In message: [EMAIL PROTECTED]
Tony Finch [EMAIL PROTECTED] writes:
: On Sat, 6 Jan 2007, M. Warner Losh wrote:
: 
:  Unfortunately, the kernel has to have a notion of time stepping around
:  a leap-second if it implements ntp.
:
: Surely ntpd could be altered to isolate the kernel from ntp's broken
: timescale (assuming the kernel has an atomic seconds count timescale)

ntpd is the one that mandates it.

One could use an atomic scale in the kernel, but nobody that I'm aware
of does.

Warner