Re: ideas for new UTC rules

2006-04-15 Thread Tim Shepard
 Am especially baffled at why it wouldn't occur to D-Link that it was
 their responsibility to field their own NTP servers.  This is even

They don't even need to do that.  They could have simply wired into the device.   See for
more info.

-Tim Shepard

Re: Comparing Time Scales

2006-02-04 Thread Tim Shepard
 No.  The article specifically says that after the system time gets
 to ,600, it is decremented by one, and there is specific code in the
 routine that returns the system time to applications that makes it
 stand still.  The second is *NOT* repeated.  Repeat: the second is
 *NOT* repeated in what they said.  Time stands still outside of the
 kernel, while inside the kernel the last second of the day *IS*
 repeated, hence the need for the limiter that the article talks about.

 Not all kernels keep time standing nearly still during the leap second
 (since that has other bad effects).  Some expose this decrement to the
 users.  The highlighted part of what I quoted said exactly this.

 I've actually implemented this for FreeBSD.  You are arguing theory,
 and I'm arguing the fine points of an actual, real implementation.

But there's a difference between NTP timestamps, and the details of
the implementation of a system which uses NTP for synchronization.

The NTP timestamps have more than 64 bits in them when you include the
leap warning bits.  NTP timestamps do not repeat any seconds when a
leap second occurs.

Note that in Figure 8 in RFC-1305 the sequence of NTP timestamps is:

2,871,590,399 +
2,871,590,400 +

The plus represents the leap warning bits indicating an upcoming (or
in-progress) leap second insertion.  There are four distinct seconds,
each with its own unique timestamp.  I'll extend this to include half
seconds, just to be very clear:

2,871,590,398.5 +
2,871,590,399.0 +
2,871,590,399.5 +
2,871,590,400.0 +
2,871,590,400.5 +

When the leap warning bits are included, each point in time has a
unique timestamp (to the resolution of the NTP timestamp, which is
better than a nanosecond with all 32 bits of fraction).
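As a sanity check of the sequence above (a hypothetical Python sketch, not from the original posts): the leap-warning "+" travels in NTP's separate two-bit leap-indicator field, while the 64-bit timestamp itself packs 32 bits of seconds and 32 bits of fraction.  Packing the five half-second instants shows that each one gets its own strictly increasing value:

```python
# The five half-second instants listed above, as (seconds, fraction-of-second).
instants = [
    (2_871_590_398, 0.5),
    (2_871_590_399, 0.0),
    (2_871_590_399, 0.5),
    (2_871_590_400, 0.0),
    (2_871_590_400, 0.5),
]

def ntp64(seconds, fraction):
    # Pack into the 64-bit NTP timestamp format:
    # high 32 bits are whole seconds, low 32 bits are the fraction.
    return (seconds << 32) | int(fraction * 2**32)

stamps = [ntp64(s, f) for s, f in instants]

# Every instant, including both halves of the inserted second,
# maps to a distinct, strictly increasing 64-bit value.
assert stamps == sorted(stamps)
assert len(set(stamps)) == len(stamps)
```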

A computer system could represent UTC time in a way which also makes
this clear, for example by a structure or abstract data type which
includes in it (1) the day number and (2) how-many nanoseconds are we
into this day.  When executing a leap second insertion, we would get
all the way up to 86,400,999,999,999 nanoseconds in the day before we
wrapped around that field to zero and incremented the day number (one
nanosecond later).
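A minimal sketch of such a representation (Python here purely for illustration; MJD 53735 is 2005-12-31, an actual positive-leap-second day):

```python
from dataclasses import dataclass

NS_PER_DAY = 86_400 * 1_000_000_000  # length of a normal UTC day in nanoseconds

@dataclass
class UtcInstant:
    day: int        # day number (an MJD-style count)
    ns_of_day: int  # nanoseconds elapsed since the start of that day

def tick(t, leap_second_today):
    # Advance one nanosecond, wrapping at the (possibly stretched) day length.
    day_length = NS_PER_DAY + (1_000_000_000 if leap_second_today else 0)
    ns = t.ns_of_day + 1
    if ns >= day_length:
        return UtcInstant(t.day + 1, 0)
    return UtcInstant(t.day, ns)

# On a day with a positive leap second, the counter reaches
# 86,400,999,999,999 ns before wrapping and incrementing the day number.
end_of_leap_day = UtcInstant(day=53735, ns_of_day=86_400_999_999_999)
assert tick(end_of_leap_day, leap_second_today=True) == UtcInstant(53736, 0)
```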

-Tim Shepard

Re: Comparing Time Scales

2006-02-04 Thread Tim Shepard
  A computer system could represent UTC time in a way which also makes
  this clear, for example by a structure or abstract data type which
  includes in it (1) the day number and (2) how-many nanoseconds are we
  into this day.  When executing a leap second insertion, we would get
  all the way up to 86,400,999,999,999 nano seconds in the day before we
  wrapped around that field to zero and incremented the day number (one
  nanosecond later).

 How is this really different from using broken-out time and allowing
 the seconds field to go up to 60?

I would agree.  Same idea.
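The correspondence is easy to make concrete.  A small illustrative sketch (Python, my own example), mapping broken-out time whose seconds field may reach 60 onto the nanoseconds-into-day field from the previous message:

```python
def ns_of_day(hour, minute, second, nanosecond):
    # Broken-out time, with the seconds field allowed to reach 60 on a
    # leap-second day, maps directly onto nanoseconds-into-day.
    return (hour * 3600 + minute * 60 + second) * 1_000_000_000 + nanosecond

# 23:59:60.999999999 -- the last nanosecond of an inserted leap second --
# lands exactly on 86,400,999,999,999, matching the earlier representation.
assert ns_of_day(23, 59, 60, 999_999_999) == 86_400_999_999_999
```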

-Tim Shepard

Re: wikipedia Leap Seconds collaboration

2006-01-23 Thread Tim Shepard
Be careful.  The goals of the folk on this mailing list and the goals
of the wikipedia project are probably not aligned.

In particular, note the section "Wikipedia is not a publisher of
original thought".

It is certainly possible for people on this list to help improve
Wikipedia's coverage of articles related to timekeeping, but a
Wikipedia article is not an appropriate place for a group attempting
to hash out a consensus on a mailing list to record all of its thoughts.

-Tim Shepard

Re: The real problem with leap seconds

2006-01-13 Thread Tim Shepard
 We've recently had a question about this on this list which
 wasn't answered clearly.  MJD 27123.5 means 12:00:00 on day
 27123 if it's not a leap second day, but what does it mean
 on a day with a positive leap second?  12:00:00.5?  I think
 it only works if that level of precision doesn't matter but
 maybe some document somewhere has a convention.

I'm not the expert, but I just read through

and from what I learned there the answer appears to be that MJD can be
either MJD(UT) or MJD(TT), and leap seconds are not involved.  So MJD
27123.5 means 12:00:00.0 on day 27123.

   MJD(UT) 27123.5 means UT 12:00:00.0 on day 27123.

   MJD(TT) 27123.5 means TT 12:00:00.0 on day 27123.

UTC is an approximation of UT, perhaps the poorest one in the family
of UT time scales. If you care about what time it is UT to better
than one second, then UTC is probably not the right time scale for you
to be using (at least not directly).

If a fuzz of +/- 1 second doesn't bother you, then you can pretend
that UTC is UT, and things are easier.

For the time scale experts on this list, did I get that right?

-Tim Shepard

Re: The real problem with leap seconds

2006-01-10 Thread Tim Shepard
  I still think NTP should have distribute TAI, but I understand using

 Was your failure to form a past-participle a Freudian slip? I'm with you
 if you really mean NTP should distribute TAI!!!

Uh, probably yes.  I didn't even see the grammar error when I re-read
it the first time just now.

About 15 years ago I came to believe that it would have been better if
NTP distributed TAI instead of (or perhaps alongside) UTC.

And yes, I still believe that.

Now I think it would be best if TAI and UTC were both distributed by
time signals (and NTP, etc), with equal emphasis to make it clear to
all users that they have a choice to make.

Atomic time based on the SI second (TAI) and traditional time based on
earth orientation (UT) are both needed in the modern world.  Both
should be distributed.  People who have some need to synchronize
clocks should be forced to decide which kind of time would be best for
them.  (Or perhaps in some cases it would be best for them to
implement both side-by-side in their system.)

A system which distributes TAI (which never has leap seconds) and also
distributes the current number of seconds of offset for UTC, as well
as leap warnings (or continuously broadcasts the table of all known
(past and scheduled) leap seconds), would seem to be reasonable.
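A sketch of what that downstream derivation could look like, with a made-up two-entry offset table (the TAI-UTC values 32 and 33 match the offsets before and after the 2005-12-31 leap second, but the effective instants here are arbitrary illustration):

```python
# An illustrative broadcast table: (TAI second at which a step takes
# effect, TAI-UTC in force from that instant on).  The instants are
# made up for the sketch; only the shape of the mechanism matters.
LEAP_TABLE = [
    (0, 32),          # offset in force at the start of this epoch
    (1_000_000, 33),  # a scheduled positive leap second
]

def utc_from_tai(tai_seconds):
    # Downstream policy: derive UTC from distributed TAI plus the table.
    offset = LEAP_TABLE[0][1]
    for effective, tai_minus_utc in LEAP_TABLE:
        if tai_seconds >= effective:
            offset = tai_minus_utc
    return tai_seconds - offset

assert utc_from_tai(500_000) == 500_000 - 32
assert utc_from_tai(2_000_000) == 2_000_000 - 33
```

The same table also answers "is a leap pending?" and "what was the offset last year?", which is exactly the information a UTC-only broadcast throws away.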

This would allow the decisions about what would be the best time scale
to use to be made downstream.  Build good mechanisms that allow a
variety of policies, and leave policies to those downstream of you.

My preference would be for civil time keeping to continue to be tied
to earth orientation, as it was when GMT was the standard.  So UT1 or
UTC would continue to be "normal" time, and TAI (or something like it)
would be the "weird" time that certain geeks care about.

The other alternative would be for civil timekeeping to be based on
TAI (something which never has leap seconds), with UTC (or something
like it) to be the "weird" time that certain geeks care about.  This
is the radical proposal, but I can understand that some would want to
do this.

If humans spread out to other places besides the earth, an
earth-centric time scale might begin to seem somewhat quaint.
Distributing leap second information to a Mars colony seems kind of
silly.  (Though I guess that those on a Mars colony would in fact care
about earth orientation, e.g. if they wished to communicate with
friends back on Earth using their amateur laser-communication gear in
their backyards.)

I very much dislike the proposal to *redefine* UTC to abolish leap
seconds.  I dislike very much trying to understand code that was
written with descriptive names (for variables, functions, constants,
etc) but which has evolved such that what the names apparently mean
and what they really mean are very different.  UTC is a type of UT
time.  If you stop putting leap seconds in UTC to keep it close to all
the other UT time scales, then it no longer deserves to have a name
that starts with UT.

So fine, if we must stop maintaining UTC with leap seconds and move
civil time keeping users to some sort of new standard, please do *not*
call it UTC after the change.

The hack of having UTC ticks align with TAI ticks and adjusting UTC
with leap seconds was perhaps not the best idea.  But it was done, and
has been in place for more than 30 years, and is now a widely
implemented and understood standard.  If this hack should be replaced
with something better (and perhaps it should be), I'd want 20 years
advance notice that a change is coming, and 15 years advance notice as
to what exactly the change will be.  (I suspect though I won't get
that much notice.)

Leap hours are a horrible idea, whether they are inserted into some
UTC-like global standard or adopted by local jurisdictions.

Well, those are my opinions.   Thanks for listening.

-Tim Shepard

Re: The real problem with leap seconds

2006-01-09 Thread Tim Shepard
 and you still cannot even get it [TAI] reliably from your
 average local NTP server.

 This is a circular argument:  The reason NTP doesn't provide it
 is that time_t needs UTC.

No, I asked David Mills about 15 or so years ago why NTP distributes
UTC and not TAI (me thinking and suggesting that it would have been so
much better if NTP distributed TAI) and the reason was quite simple:

There was no convenient way to get TAI.  The time signals broadcast by
WWV and WWVB in the US distributed UTC and leap warning bits, but did
not distribute (and still do not AFAIK) what the TAI offset is.

GPS receivers were (very) rare back then, so getting GPS time and
subtracting out the constant offset to get back to TAI was not a
viable option either (though perhaps it would be today, as long as the
GPS system keeps running).
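The constant offset in question is 19 seconds: GPS system time was aligned with UTC at the 1980 GPS epoch, when TAI-UTC was 19 s, and GPS time carries no leap seconds thereafter.  So the conversion is trivial (illustrative sketch):

```python
GPS_TO_TAI = 19  # TAI runs a constant 19 seconds ahead of GPS system time

def tai_from_gps(gps_seconds):
    # GPS time has no leap seconds, so recovering TAI is a fixed offset,
    # unlike GPS-to-UTC, which needs the current leap count.
    return gps_seconds + GPS_TO_TAI
```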

I still think NTP should have distribute TAI, but I understand using
TAI was not a practicable option when NTP was designed.

BTW, I don't know if the Fuzzball OS had any Posix time_t's in it, or
anything resembling them, but I suspect not.  I vaguely recall hearing
that it had some other way of keeping the time in a collection of
16-bit registers (PDP-11s, don't you know).

-Tim Shepard

Re: went pretty dang smoothly at this end

2006-01-01 Thread Tim Shepard
 The first officer gave us a countdown to midnight in London, and
 I'm happy to report that the plane failed to fall out of the sky,
 explode, or otherwise deviate from its course at 23:59:60.

Did his countdown reach zero at 23:59:60 31-December-2005 UTC or
at 00:00:00 1-January-2006 UTC ?

-Tim Shepard

knowing what time it is

2005-08-31 Thread Tim Shepard
I've been lurking on this list for a few months now.

About 15 years ago I was playing with NTP on 4.3 BSD unix.

I remember thinking then that Posix was making a serious error in
specifying that the time_t returned by time() or in the .tv_sec
field of the structure returned by gettimeofday() would contain UTC.
TAI would have seemed to be the better choice.

I suggested in e-mail to David Mills that NTP should have been built
around TAI, not UTC.   He did not disagree with me but pointed out
that there were no broadcast sources that he could get TAI from, so
the choice of UTC was forced upon him.

I still think TAI would have been the better choice, and would be the
better choice going forward.

But existing practice is slow to change, and not easy to change.

I think what happened 33 or so years ago is that we went from having
a single time scale (GMT) to having multiple time scales (UTC and
TAI).  What should happen (and what should have happened) is that
both time scales should be distributed, side by side, requiring those
who need to know what time it is to make a choice about which kind of
time scale they wish to have.   There's no use pretending that we
don't have two time scales.   We can argue forever about which should
be the preferred "normal" time scale.  But regardless of who wins
that argument, the losing time scale will not cease to exist.

The discussion on this list has been enlightening.  I now see that no
solution will be simple.

But in my mind, the ideal would be if every system that distributes
time (e.g. WWV, WWVB, GPS, NTP, etc...) would convey what time it is
UTC, what time it is TAI, and what the entire table of leap seconds is
(or the 200 or so most recent leap seconds).  And systems downstream
should try hard not to throw away any of the information, and pass
all of it along to whoever gets time from them.

I think much of the trouble we are in now is because we have not
embraced this notion that there are two timescales involved here.
But since 1972 there are two.  Neither single timescale will be
suitable for all uses.  So both are needed.  Systems that have any
claim to general-purposefulness need to carry both, along with
information about what leap seconds have been and will be inserted.

If the bundle of information you got that told you what time it is
also told you what leap seconds have been and will be inserted, then
all would be OK.

Then 6 months would be plenty of warning for pending leap seconds
because very few devices are capable of keeping time with 150 ppb
accuracy over 6 months running open loop.
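A back-of-the-envelope check of that figure (my arithmetic, not from the original):

```python
SIX_MONTHS = 182.5 * 86_400   # about 15.8 million seconds
RATE_ERROR = 150e-9           # 150 parts per billion

drift = RATE_ERROR * SIX_MONTHS   # accumulated timing error, in seconds
# roughly 2.4 s -- already more than a whole leap second, so a clock
# this good (or worse) must resynchronize well inside six months anyway,
# and picks up the pending-leap information when it does
assert 2.0 < drift < 3.0
```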

The mistake is to distribute time without distributing both scales
(UTC and TAI) and a table of historical and pending leap seconds.

So that's all ideal.

But we're in a mess now.

Is it reasonable to hope we may be able to somehow get to the ideal
I've described?   In maybe 10 or 15 years?

It seems what is needed most is education.

-Tim Shepard