Re: 24:00 versus 00:00

2006-02-17 Thread Markus Kuhn
Clive D.W. Feather wrote on 2006-02-17 05:58 UTC:
 However, London Underground does print 24:00 on a ticket issued at
 midnight, and in fact continues up to 27:30 (such tickets count as being
 issued on the previous day for validity purposes, and this helps to
 reinforce it).

The tickets of UK train operators are perhaps not good examples from
which to infer common standards practice, because they are deliberately
printed with highly creative *non-standard* conventions, to make fake
tickets easier to spot for staff. For example, the 3-letter month
abbreviations seem to change from year to year, where March can be MAR,
MCH, MRH, MRC, etc.

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: Ambiguous NTP timestamps near leap second

2006-02-16 Thread Markus Kuhn
M. Warner Losh wrote on 2006-02-14 21:18 UTC:
 ntp time stamps are ambiguous at leap seconds (the leap indicator
 bits aren't part of the time stamp, exactly), which is a good reason
 to change them.  However, the cost to change them could be quite high
 if done in an incompatible manner.

No, this ambiguity alone is surely not a good enough reason to change.
It can be fixed trivially in a backwards compatible way, without
introducing TAI or leap-second status bits. Simply add to the NTP
specification the paragraph:

  No reply from an NTP server shall ever represent any point in time
  between 23:59:60 and 24:00:00 of a UTC day. If a client requests an
  NTP timestamp and the response would represent a point in time during
  an inserted leap second, then this request shall be ignored.

Rationale: NTP runs over UDP, and therefore must be (and is perfectly)
able to cope with occasionally dropped packets. Dropping NTP server
reply packets for one second every few years will hardly affect any
application, because NTP clients already implement robust retry
and filtering mechanisms today.
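
For illustration, here is a minimal sketch of this server-side rule in
Python (my notation; the seconds-of-day convention and the function name
are made up for the example, this is not code from any NTP
implementation):

  def should_reply(sod_utc, leap_inserted_today):
      # sod_utc counts elapsed SI seconds since 00:00:00 UTC, so on a
      # day with an inserted leap second the instants
      # 86400.0 <= t < 86401.0 form the leap second 23:59:60
      if leap_inserted_today and 86400.0 <= sod_utc < 86401.0:
          return False   # drop the request; clients retry and filter
      return True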


NTP is actually one of the simple cases where no harm is done by simply
going offline very briefly near a leap second. In other applications,
e.g. streaming media, things are not that easy, and other workarounds
are needed if one wants to remain backwards compatible with existing UTC
practice while minimizing the chance of leap-second induced
malfunctions. UTC-SLS is what I'd recommend for most of these cases (but
not for the NTP protocol itself).

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: 24:00 versus 00:00

2006-02-16 Thread Markus Kuhn
Steve Allen wrote on 2006-02-16 19:25 UTC:
No reply from an NTP server shall ever represent any point in time
between 23:59:60 and 24:00:00 of a UTC day.

 Minor point, I think it has to read more like this

 between 23:59:60 of a UTC day that ends with a positive leap
 second and 00:00:00 of the subsequent UTC day.

I disagree.

With the 24-h notation, it is a very useful and well-established
convention that 00:00 refers to midnight at the start of a date, while
24:00 refers to midnight at the end of a date. Thus, both "today 24:00"
and "tomorrow 00:00" are fully equivalent representations of the same
point in time. The 24:00 notation for midnight is very useful for
writing time intervals that end on midnight. "Today 23:00-24:00" is simply
much neater and less cumbersome than "today 23:00 - tomorrow 00:00".

Writing 24:00 to terminate a time interval at exactly midnight is
pretty common practice and is even sanctioned by ISO 8601. None of this
contradicts the rule that for unambiguous representations of independent
points in time (e.g. on a clock display), as opposed to interval
endpoints, only the 00:00 form should be used.
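
To make the convention concrete, here is a tiny sketch in Python (my
illustration; the function name is made up):

  from datetime import date, datetime, time, timedelta

  def parse_hm(d, hm):
      # "24:00" denotes the same instant as "00:00" of the next day
      h, m = map(int, hm.split(":"))
      if (h, m) == (24, 0):
          return datetime.combine(d + timedelta(days=1), time(0, 0))
      return datetime.combine(d, time(h, m))

  assert parse_hm(date(2006, 2, 17), "24:00") == \
         parse_hm(date(2006, 2, 18), "00:00")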

See for example the railway timetable on

  http://en.wikipedia.org/wiki/24-hour_clock

where trains arrive at 24:00 but depart at 00:00.

http://en.wikipedia.org/wiki/Midnight

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: Ambiguous NTP timestamps near leap second

2006-02-16 Thread Markus Kuhn
Rob Seaman wrote on 2006-02-16 20:28 UTC:
 In fact, to simplify coding, simply reject all requests received
 between 23:59:00 and 24:01:00.  Unlikely this would have any more
 significant effect in practice.

While there is a 24:00:00, there is certainly *no* 24:00:00.0001.
That would be 00:00:00.0001 instead.

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: Internet-Draft on UTC-SLS

2006-01-19 Thread Markus Kuhn
Poul-Henning Kamp wrote on 2006-01-19 09:46 UTC:
   http://www.ietf.org/internet-drafts/draft-kuhn-leapsecond-00.txt

 The serious timekeeping people gave up on rubberseconds in 1972 and
 I will object with all that I can muster against reinventing them
 to paste over a problem that has a multitude of correct solutions.

Just for the record, am I right in assuming that your point is already
fully addressed in the UTC-SLS FAQ at

  http://www.cl.cam.ac.uk/~mgk25/time/utc-sls/#faqcrit

?

Anticipating your objection, I wrote there:

  All other objections to UTC-SLS that I heard were not directed against
  its specific design choices, but against the (very well established)
  practice of using UTC at all in the applications that this proposal
  targets:

* Some people argue that operating system interfaces, such as the
  POSIX "seconds since the epoch" scale used in time_t APIs,
  should be changed from being an encoding of UTC to being an encoding of
  the leap-second-free TAI timescale.

* Some people want to go even further and abandon UTC and leap seconds
  entirely, detach all civilian time zones from the rotation of Earth,
  and redefine them purely based on atomic time.

  While these people are usually happy to agree that UTC-SLS is a
  sensible engineering solution *as long as UTC remains the main time
  basis of distributed computing*, they argue that this is just a
  workaround that will be obsolete once their grand vision of giving up
  UTC entirely has become true, and that it is therefore just an
  unwelcome distraction from their ultimate goal.

  I do not believe that UTC in its present form is going to
  disappear any time soon. Therefore, it makes perfect sense to me
  to agree on a well-chosen guideline for how to use UTC in a
  practical and safe way in selected applications.



This issue has been discussed dozens of times here and we all know where
we stand by now. If that is not it, then I can only guess that you did
not understand the scope of this specification:

Rubberseconds were given up in 1972 for UTC pulse-per-second time-codes,
like the ones many time-code radio stations send out. I think everyone
still agrees (me certainly!) that they are not practical there for
exactly the reasons explained in the above Internet Draft. I do *not*
propose to change the definition of UTC in ITU-R TF.460 in any way here.

This proposal is primarily about operating system APIs, where rubber
seconds have been in use by serious timekeeping people at least since
4.3BSD introduced the adjtime() system call. Up to 500 ppm rubber
seconds are used today by many NTP servers to resynchronize the kernel
clock; Microsoft's Windows XP NTP implementation is slightly less
cautious and uses rubber seconds with up to +/- 400 ppm for
resynchronizing its kernel clock. In fact, if you look at *any* form of
PLL (circuit or software), then you will find that its very purpose is
to implement rubber seconds, that is to implement phase adjustments
via low-pass filtered temporary changes in frequency.
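
As a toy illustration of such slewing (my sketch, not the actual kernel
PLL; the 500 ppm bound mirrors the adjtime() figure above):

  def slewed_clock(t, phase_error, max_rate=500e-6):
      # remove phase_error seconds via a temporary frequency offset of
      # at most max_rate (here 500 ppm), instead of stepping the clock
      duration = abs(phase_error) / max_rate   # slew time needed
      if t >= duration:
          return t + phase_error               # correction completed
      return t + max_rate * t * (1 if phase_error > 0 else -1)

  # removing a 1 ms phase error at 500 ppm takes 2 s of slewing:
  print(slewed_clock(5.0, -0.001))             # prints 4.999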

People have been using PLL rubber seconds in operating systems for
quite a long time now, and this practice is still widely considered the
state-of-the-art way of implementing a UTC-synchronized clock. All that
was missing so far is a commonly agreed standard on what exactly such a
PLL-rubber-second clock should do near a leap second, such that all PLL
clocks in a distributed system can perform the rubber leap second in
*exactly* the same way.

The above proposal is *not* about redefining UTC in any way. It is
merely a guideline for *interpreting* UTC in Internet Protocols,
operating system APIs, and similar applications. Its aim is to eliminate
the hazards of some of the currently implemented far more dangerous
alternative ways of interpreting UTC in such environments [e.g. those
listed as options 1a) to 1i) in Appendix A]. Some of these alternatives
have caused quite some odd behaviour on 1 January in NTP-synced
equipment.

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: Internet-Draft on UTC-SLS

2006-01-19 Thread Markus Kuhn
Poul-Henning Kamp wrote on 2006-01-19 11:59 UTC:
 My objection is that you invent a new kind of seconds with new durations
 instead of sticking with the SI second that we know and love.

 Furthermore, you significantly loosen the precision specs set forth
 in the NTP protocol.

Having just observed Linux kernel crashes caused by a rubber second
that was -1 (!) SI seconds long last New Year's Eve, I believe you
overestimate quite a bit how much the precision specs set forth in the NTP
protocol count out there in the real world today. I hope you appreciate
that the brutal way in which some systems currently implement leap
seconds is *far* worse in every respect than UTC-SLS ever could be. If
your NTP server

  - halts the kernel clock for 1 second, or
  - steps it back by 1 second at midnight, or
  - starts to oscillate wildly and finally loses synchronization and
    resets everything after 17 minutes

(all these behaviours have been observed, see ongoing NANOG or
comp.protocols.time.ntp discussions) then *this* is the worst-case
scenario that your entire NTP-synched system must be prepared for.

UTC-SLS is infinitely more harmless than a negative step in time.

 And rather than have one focused moment where things can go wrong,
 you have now stretched that out over 1000 seconds.

 1000 seconds is an incredibly silly chosen number in an operational
 context.  At the very least make it 15, 30 or 60 minutes.

Your choice of words occasionally leaves diplomatic skill to be
desired. Anyway, to repeat what Appendix A of the spec says in more
detail:

I did play with this idea and discarded it, because having prime factors
other than 2 or 5 in the smoothing-interval length I leads to unpleasant
numbers when you explain the conversion between UTC and UTC-SLS by
example.

For instance, if we used I = 15 min = 900 s, then we get

UTC = 23:45:00.00  <=>  UTC-SLS = 23:45:00.00
UTC = 23:45:01.00  <=>  UTC-SLS = 23:45:01.00
UTC = 23:45:02.00  <=>  UTC-SLS = 23:45:01.998889
UTC = 23:45:03.00  <=>  UTC-SLS = 23:45:02.997778
...
UTC = 23:59:58.00  <=>  UTC-SLS = 23:59:57.00
UTC = 23:59:59.00  <=>  UTC-SLS = 23:59:58.00
UTC = 23:59:60.00  <=>  UTC-SLS = 23:59:59.00
UTC = 00:00:00.00  <=>  UTC-SLS = 00:00:00.00

I find that having infinite decimal fractions show up obscures the real
simplicity of the conversion. It gets much worse when you do
UTC-SLS -> UTC conversion examples. That's why I prefer I = 1000 s over
I = 900 s or I = 1800 s. Blame the ancient incompatibility
between Indo-Arabic and Babylonian number systems if you want.
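
For concreteness, here is the smoothing idea as a Python sketch (my
illustration of the linear mapping; the normative endpoint conventions
are those of the draft, not of this snippet):

  I = 1000   # smoothing interval length in UTC-SLS seconds

  def utc_to_utc_sls(sod_utc, leap=+1, day_len=86400):
      # sod_utc counts SI seconds since 00:00:00 UTC, so a day with an
      # inserted leap second is day_len + 1 SI seconds long
      start = day_len - I           # smoothing starts I nominal s early
      if sod_utc <= start:
          return sod_utc            # identical to UTC outside the interval
      # inside the interval, I + leap SI seconds map onto I UTC-SLS seconds
      return start + (sod_utc - start) * I / (I + leap)

  print(utc_to_utc_sls(86400.0))    # 23:59:60.0 UTC maps just below 86400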

And I do not like I = 60 min simply because

  a) see above

  b) this cannot be implemented by DCF77/HBG/etc. receivers, which get
     only 59 minutes advance warning.

  c) it could shift deadlines on the full hour by 0.5 seconds in
     time zones that differ from UTC by an odd multiple of 30 min.

 But mostly I object to it being a kludge rather than a solution.
 By pasting over problems the way you propose, you are almost
 guaranteed to prevent them from ever being resolved a right way.
 (In this context either of fixing/extending POSIX or killing leap
 seconds counts as a right way in my book.)

So my initial assertion *was* right then after all ...

[Outline proposal deleted]

 Now, please show some backbone and help solve the problem rather
 than add to the general kludgyness of computers.

Been there, done that:

  http://www.cl.cam.ac.uk/~mgk25/time/c/

About 8 years ago, when I was still young and naive as far as the real
world is concerned (you actually remind me a lot of these Sturm und
Drang days ...), I tried to convince people of a solution that I believe
goes somewhat in the direction of what you now have in mind. I had
enormous enthusiasm for comprehensive kitchen-sink API approaches that
allowed me to write applications that can display 23:59:60. I still
agree that the POSIX time and time zone API can be improved
substantially. But I no longer think that any effort should be made
whatsoever to expose real-world applications to the time value 23:59:60.
I believe that UTC-SLS is not a kludge, but is a most sensible and
practical solution, *if* we accept the premise that civilian time
remains tied to UT1.

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: Internet-Draft on UTC-SLS

2006-01-19 Thread Markus Kuhn
M. Warner Losh wrote on 2006-01-19 16:58 UTC:
  http://www.ietf.org/internet-drafts/draft-kuhn-leapsecond-00.txt
 The biggest objection that I have to it is that NTP servers will be at
 least .5s off, which is far outside the normal range that NTP likes to
 operate.  Unless the precise interval is defined, you'll wind up with
 the stratum 1 servers putting out different times, which ntpd doesn't
 react well to.

Please read the proposal carefully (Section 2):

   UTC-SLS is not intended to be used as a drop-in replacement in
   specifications that are themselves concerned with the accurate
   synchronization of clocks and that have already an exactly specified
   behavior near UTC leap seconds (e.g., NTP [RFC1305], PTP [IEC1588]).

What this means is:

  - NTP still uses UTC on the wire, exactly in the same way as it does
    so far, *independent* of whether any of the NTP servers or clients
    involved supply UTC-SLS to their applications.

  - NTP implementations (by this I mean the combined user-space and
    kernel-space segments) should convert NTP timestamps that have been
    received over the wire through the UTC -> UTC-SLS mapping, and steer
    with the result what gettimeofday() provides to users.

  - NTP implementations should equally convert any timestamp received
    from gettimeofday() through the UTC-SLS -> UTC mapping before it goes
    out on the wire.

In other words, *no* incompatible changes are made to the NTP protocol.
In a correct UTC-SLS implementation, you should *not* be able to
distinguish remotely whether some NTP server synchronizes its kernel
clock to UTC or UTC-SLS, because this must not influence its NTP
interface in any way.
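
For the outgoing path, the corresponding inverse mapping would look like
this (same conventions as the utc_to_utc_sls sketch earlier; again my
illustration, not code from any NTP daemon):

  def utc_sls_to_utc(sod_sls, leap=+1, I=1000, day_len=86400):
      # invert the smoothing: the last I UTC-SLS seconds of the day
      # expand back into I + leap SI seconds of plain UTC
      start = day_len - I
      if sod_sls <= start:
          return sod_sls
      return start + (sod_sls - start) * (I + leap) / I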

I hope this answers your concerns.

[Pretty much the same applies not only for NTP, but also for PTP and
other timecodes.]

 I'm also concerned about the fact that many radio time codes do not
 announce the leap second pending until the last hour or less.  This
 makes it hard to propagate out to the non-stratum 1 servers.

I fully agree that leap seconds should be announced as early as
possible, and I think that anything less than a month is undesirable.
GPS sends out announcements within days after IERS does, which is
excellent service. NIST sends out announcements a month in advance on
their WW* stations, which is also pretty good. DCF77/HBG sadly do so
only 59 minutes in advance, which is not ideal, but still useful.

However, MSF has no leap warning at all, nor do some time codes used in
the power and military industries. And recent extensions to the latter
added only a leap second warning that arrives a few seconds before the
leap. I consider the leap-second handling of these latter time codes
pretty useless.

 It is a horrible idea.

Since you seem to have arrived at this initial conclusion based on a
misunderstanding of the intended interaction between NTP and UTC-SLS, I
would be interested to hear your view again, after you have appreciated
that UTC-SLS *can* be implemented in NTP software easily and robustly in a
way that is fully backwards compatible with the existing NTP protocol
and infrastructure.

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


McCarthy point (was: Fixing POSIX time)

2006-01-19 Thread Markus Kuhn
M. Warner Losh wrote on 2006-01-19 19:35 UTC:
 : Therefore, if people ask me for my favourite epoch for a new time scale,
 : then it is
 :
 :   2000-03-01 00:00:00 (preferably UTC, but I would not mind much
 :if it were TAI, or even GPS time)
 :
 : This epoch has the following advantages:
 :
 :   a) It is well after TAI rubber seconds were fixed in ~1998,
 :  so we know the time of *that* epoch with much greater accuracy than
 :  any before 1998.

 TAI and UTC have ticked at the same rate since 1972.  While this rate
 has changed twice (by very small amounts, first by 1 part in 10^12 and
 then later by 2 parts in 10^14), they have been the same.  Prior to
 1972 we had both steps in time (on the order of 50ms to 100ms) as well
 as TAI and UTC having different notions of the second.

At which point we probably have reached another McCarthy point in the
discussion: Dennis D. McCarthy (USNO) observed at the ITU-R Torino meeting
that people who talk about timescale differences in the order of a few
nanoseconds and people who talk about differences in the order of a few
seconds usually do not understand each other.

All I wanted to say is that for a good choice of epoch, it would be nice
if we agreed on it not only to within a few seconds (the leap-second
problem), but also to within a few milli- or microseconds (the SI/TAI
second problem). The latter seems much easier to do for 2000 than for
1972 or even 1958. In applications such as observing planetary motion
over many years, the difference matters.

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: Internet-Draft on UTC-SLS

2006-01-19 Thread Markus Kuhn
Tim Shepard wrote on 2006-01-19 20:29 UTC:
Coordinated Universal Time with Smoothed Leap Seconds (UTC-SLS),
Markus Kuhn, 18-Jan-06. (36752 bytes)
  
http://www.ietf.org/internet-drafts/draft-kuhn-leapsecond-00.txt

 This draft bugs me a bit because it changes the length of a second (as
 seen by its clients) by a rather large amount (a thousand ppm).

If you can give me specific examples (and references) for applications
that fail with 1000 ppm short-term frequency error (only 1000 ms
cumulative phase error), but would work fine with 10 ppm (or 100 ppm)
rubber seconds, I would be most interested!

I have found the limit 500 ppm required in a number of hardware
specifications, but never with any rationale for where this number
originally comes from. One recent example is Table 1 of Intel's IA-PC
HPET specification, October 2004 (note, this is a hardware spec, not a
software API), which gives the maximum frequency error of Intel's new
PC High Precision Event Timer.

A rule of thumb is that you get 20 ppm from a reasonable crystal and 200
ppm error from a really bad, but still commonly available one. So I
always understood the MPEG2 limit of 30 ppm as a requirement for
manufacturers to simply not pick the very cheapest crystals that they
can get on the market, whereas a spec of 500 ppm allows manufacturers to
do exactly that.
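
To put such tolerances into perspective (simple arithmetic, my
illustration): the cumulative drift of a free-running clock is just the
frequency error times the elapsed time:

  for ppm in (20, 30, 200, 500):
      print("%3d ppm -> %4.1f s/day" % (ppm, ppm * 1e-6 * 86400))
  # 20 ppm is about 1.7 s/day; 500 ppm is about 43 s/day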

 A change in rate of one ppm would not bother me, but that would take a
 bit more than 11.5 days to accomplish the change.

Well, it is always a trade-off between frequency offset and duration of
the correction. I don't know any better methodology than trying to list
all application constraints that I can think of and then simply get used
to one particular pair of numbers that sits sensibly between all these
constraints. See Appendix A for what I ended up with so far.

 A change in rate of ten ppm could accomplish the phase change with
 less than 1 day's warning before the UTC leap second insertion if
 accomplishing it could be split between the 50,000 seconds before UTC
 midnight and the 50,000 seconds after UTC midnight.

Do you really like the idea of shifting midnight, the start of the new
date, by 500 ms, compared to UTC? I know, in the U.S., midnight is not a
commonly used deadline, because the U.S. 12-h a.m./p.m. time notation
has no unambiguous representation for the difference between 00:00,
12:00, and 24:00. But elsewhere, it is a fairly standard deadline, and
it would seem elegant to me to have at least that exactly identical in
both UTC and UTC-SLS, in the interest of all the real-time stock-market
and ebay traders out there and their increasing abuse of high-precision
legal time.

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Internet-Draft on UTC-SLS

2006-01-18 Thread Markus Kuhn
A new Internet-Draft with implementation guidelines on how to handle UTC
leap seconds in Internet protocols was posted today on the IETF web
site:

  Coordinated Universal Time with Smoothed Leap Seconds (UTC-SLS),
  Markus Kuhn, 18-Jan-06. (36752 bytes)

  http://www.ietf.org/internet-drafts/draft-kuhn-leapsecond-00.txt

Background information, FAQ, etc.:

  http://www.cl.cam.ac.uk/~mgk25/time/utc-sls/

Abstract:

  Coordinated Universal Time (UTC) is the international standard timescale
  used in many Internet protocols. UTC features occasional single-second
  adjustments, known as leap seconds. These happen at the end of
  announced UTC days, in the form of either an extra second 23:59:60 or a
  missing second 23:59:59. Both events need special consideration in
  UTC-synchronized systems that represent time as a scalar value. This
  specification defines UTC-SLS, a minor variation of UTC that lacks leap
  seconds. Instead, UTC-SLS performs an equivalent smooth adjustment,
  during which the rate of the clock temporarily changes by 0.1% for 1000
  seconds. UTC-SLS is a drop-in replacement for UTC. UTC-SLS can be
  generated from the same information as UTC. It can be used with any
  specification that refers to UTC but lacks provisions for leap seconds.
  UTC-SLS provides a robust and interoperable way for networked UTC-
  synchronized clocks to handle leap seconds. By providing UTC-SLS instead
  of UTC to applications, operating systems can free most application and
  protocol designers from any need to even know about UTC leap seconds.

Please have a careful look at the full specification and rationale.

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: The real problem with leap seconds

2006-01-09 Thread Markus Kuhn
M. Warner Losh wrote on 2006-01-09 16:57 UTC:
 There's been many many many people that have tried to fix POSIX time_t.

One person's fix is another person's recipe for disaster ...

The POSIX definition of time_t is not quite as broken as some
individuals would like you to believe. It actually does its job very
well, especially out there in the real world, where UTC is easily and
reliably available from many, many, independent channels. The same
surely could not (and probably still cannot) be said for TAI and for
automatic leap-second table updates. You cannot get TAI from the BBC
evening news, and you still cannot even get it reliably from your
average local NTP server.

(I know, we've already discussed this here, on [EMAIL PROTECTED], on
pasc-time-study, and on austin-group-l in *very* great detail, many,
many, many times, so I'll stop.)

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: HBG transmitted wrong info during leapsecond

2006-01-07 Thread Markus Kuhn
Which was also noted at

  http://wwwhome.cs.utwente.nl/~ptdeboer/ham/sdr/leapsecond.html

Various other LF 2005 leap second recordings are listed at

  http://www.cl.cam.ac.uk/~mgk25/time/lf-clocks/#leapsec2005

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: Things to do at about 2005-12-31 23:59:60Z

2005-12-31 Thread Markus Kuhn
I hope to have my lab PC record both MSF and DCF77 near 23:59:60 tonight.

Unfortunately, I lack the receiver and antenna needed for recording
GLONASS signals, which -- as I understood it -- will have a phase jump
in their time base thanks to the leap second. At least that is the
claim; it would be nice to have it actually documented by observation.

So if you have access to the necessary equipment and have nothing better
to do over midnight, you'll find the necessary technical details in

  http://www.glonass-center.ru/ICD02_e.pdf

A simple storage-oscilloscope recording at some suitable IF should be
sufficient, I'd be happy to help out with the offline demodulation and
decoding afterwards.

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: Lighter Evenings (Experiment) Bill [HL]

2005-12-18 Thread Markus Kuhn
Poul-Henning Kamp wrote on 2005-12-16 22:44 UTC:
 Now that EU got extended way east I think everybody expected the
 silly notion of one timezone for all of EU to die a rapid death
 but that might just be wishful thinking.

Mankind could go even further, abandon the entire notion of time zones
being separated by 15° meridians and move on to generous continental
time zones.

This brings us even back closer to the topic of this list: Why is it
important that our clocks give a +/- 30 minutes approximation of local
astronomical time? Sure, there seem clear advantages in having midnight
happen when most people are asleep, or at least outside extended
business hours. So having everyone on UT is not very attractive for
those living more than +/-3 hours from the prime meridian. But since
most of us sleep at least 6 hours and are not (supposed to be ;-)
working for at least 15 hours each day, such a simple requirement could
still be achieved with just 3-5 timezones worldwide.

The crudest approach would probably be

  a) N+S America:               use local time of Cuba          (~ UT - 5.5 h)
  b) Europe/Africa/Middle East: use local time of Poland/Greece (~ UT + 1.5 h)
  c) Asia + Australia:          use local time of Thailand      (~ UT + 6.5 h)

Sure, the hours of darkness would vary substantially within each of
these zones. But they do already *today* for much of the world, thanks
to summer/winter time. China understood this a long time ago.

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: a system that fails spectacularly

2005-12-07 Thread Markus Kuhn
Rob Seaman wrote on 2005-12-07 13:59 UTC:
http://www.acrelectronics.com/alerts/leap.htm

 Even more remarkably, they proudly proclaim:

 The quality systems of this facility have been registered by UL to
 the ISO 9000 Series Standards.

 So we have a company that manufactures a complete line of safety and
 survival products (!) that are precisely intended to convey UTC as a
 primary function of the devices.  This company claims to have
 followed an international standard focused on achieving quality
 control through best practices in management.

As a general-purpose management standard, ISO 9001 obviously says
nothing about how you have to handle leap seconds. ISO 9001 does not
even specify any particular level of quality. All it does is tell you
how you must document what level of quality you are producing and what
you do to make sure it remains the same for all instances of the same
product.

Customers could in theory have asked the company to review its quality
control documentation, and if they had found that no adequate
leap-second test was part of its quality control process, then they
would have known what (not) to expect.

The big problem with the ISO 9000 standards is that they do not require
manufacturers to make all their quality-control procedures easily
downloadable from their web site. As a result, hardly any customer ever
gets a chance to look at all this otherwise perfectly sensible
documentation.

The whole problem with ISO 9001 and friends is that they originated in
the military market. There, customers are far too nervous about their
enemies reading the quality control manuals of their kit. The resulting
secrecy surrounding the ISO 9001 documentation has de facto rendered the
entire idea utterly useless. It could be easily fixed by adding a
publication requirement to the ISO 9000 certification process, but I
doubt that anyone other than civilian customers would want that. And
these standards are not written by civilian customers.

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Leap-second scare stories

2005-07-30 Thread Markus Kuhn
Steve Allen wrote on 2005-07-29 21:37 UTC:
 http://online.wsj.com/article_email/0,,SB112258962467199210-H9je4Nilal4o52nbYCIbq6Em4,00.html

The article repeats an old urban legend:

 In 1997, the Russian global positioning system, known as
 Glonass, was broken for 20 hours after a transmission to the country's
 satellites to add a leap second went awry.

This contradicts statements found in the GLONASS operational bulletin
quoted on

  http://www.mail-archive.com/leapsecs@rom.usno.navy.mil/msg00086.html

The second scare story is:

 And in 2003, a leap-second
 bug made GPS receivers from Motorola Inc. briefly show customers the
 time as half past 62 o'clock.

It conveniently omits the minor detail that this long preannounced
Motorola software bug actually manifested itself on 27 November 2003
and was not in any way caused by an added leap second, but by an
unwise design choice in the GPS data format and a resulting counter
overflow.

So I wonder how much factual substance there really is behind the
claim

 On Jan. 1, 1996, the addition of a leap second made computers at
 Associated Press Radio crash and start broadcasting the wrong taped
 programs.

It seems to go back to a very anecdotal second-hand remark by Ivars
Peterson in

  http://catless.ncl.ac.uk/Risks/17.59.html#subj1

which got quoted by Peter Neumann in

  ACM SIGSOFT Software Engineering Notes, March 1996, p.16
  http://doi.acm.org/10.1145/227531.227534

and was later only slightly elaborated by Peterson in

  http://www.maa.org/mathland/mathland_7_21.html

where he admits that he never could find out precisely why the
problem had occurred and who was responsible for it.

I'm sorry, but I find these three badly documented second- or
third-hand rumours of leap-second scare stories neither very scary nor
very convincing.

Perhaps people should try to invent UTC leap-hour scare stories for a
change. They should be at least 3600x more disruptive!

   Stardate 2651-12-31T24:08:16Z, Captain's log. About eight
   minutes ago, we experienced a sudden and entirely unexpected
   catastrophic failure in all our computers that forced us to
   abandon ship. We had just returned from a 6-year deep space
   assignment and entered a geostationary orbit over the Atlantic
   (39 degrees west), when all of a sudden the ship's primary and
   all backup clock networks failed, just as we reconnected to the
   Internet. A warp-core breach is now imminent and my science
   officer predicts that the resulting overwhelming
   electromagnetic pulse will instantly destroy all computers
   located on planet Earth between longitudes 126 degrees west and
   48 degrees east; most of the Western hemisphere.

 Ending leap seconds would make the sun start rising later and later by
 the clock -- a few seconds later each decade. To compensate, the U.S.
 has proposed adding in a leap hour every 500 to 600 years, which
 also accounts for the fact that the Earth's rotation is expected to
 slow down even further. That would be no more disruptive than the
 annual switch to daylight-saving time, said Ronald Beard of the Naval
 Research Laboratory, who chairs the ITU's special committee on leap
 seconds and favors their abolishment. "It's not like someone's going
 to be going to school at four in the afternoon or something," he said.

It introduces leap hours into a time scale (UTC) that is so widely
used in computer networks exactly *because* (unlike civilian local
time) it is free of any disruptive DST leap hours!

Let's not forget that this proposal is all about replacing a
reasonably frequent minor disruption (UTC leap seconds) with a very
rare catastrophically big one (UTC leap hours).

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: Time after Time

2005-01-24 Thread Markus Kuhn
John Cowan wrote on 2005-01-23 18:37 UTC:
 Markus Kuhn scripsit:

  UTC currently certainly has *no* two 1-h leaps every year.

 There seems to be persistent confusion on what is meant by the term
 "leap hour".

Why?

 I understand it as a secular change to the various LCT offsets,
 made either all at once (on 1 Jan 2600, say) or on an ad-lib basis.

No. A UTC leap hour is an inserted 60-minute repeat segment in the UTC
time scale, which starts by jumping back on the UTC time scale by one
hour. This has been proposed by BIPM in Torino to be done for the first
time to UTC in about 2600, instead of doing the about 1800 leap seconds
that would be necessary under the current |UTC - UT1| < 900 ms rule until
then. The proposed UTC leap hour simply means that the definition of
UTC is relaxed to (something like) |UTC - UT1| < 59 min, and the size of
the adjustment leap is increased accordingly from 1 s to 3600 s.

Local civilian times are of no concern to the ITU, as they are entirely the
responsibility of numerous national/regional arrangements.

 You seem to be using it in the sense of a 1h secular change to universal
 time (lower-case generic reference is intentional).

I can't understand what could be ambiguous here. A leap hour means to
turn a clock forward or backward by an hour. We have done it twice a
year in many LCTs. The BIPM suggested in Torino that we should do it
every couple of hundred years to UTC as well, which would become
permissible by going from the rule |UTC - UT1| < 900 ms to a relaxed
rule such as |UTC - UT1| < 59 min.

The term "leap hour" in no way implies which time zone/scale we are
talking about, and in this context we are talking mostly about UTC.

[How a UTC leap hour would affect LCTs is up to the maintainers of
these LCTs. Since the LCTs currently in use have their leap hours on
many different days of the year, a UTC leap hour would mean that at
least some LCTs would have three leap hours in that year. This could
only be avoided if all LCTs agreed to do their DST leaps
simultaneously with the UTC leap.]

In summary: There are basically three proposals on the table:

  a) Keep UTC as it is (|UTC - UT1| < 900 ms) and just make TAI more
     widely available in time signal broadcasts

  b) Move from frequent UTC leap seconds to far less frequent UTC leap
     hours, by relaxing the UTC-UT1 tolerance (e.g., |UTC - UT1| < 59 min)

  c) Remove any future leap from UTC, such that UTC becomes TAI plus a fixed
     constant (i.e., |UTC - UT1| becomes unbounded and will start to grow
     quadratically). In this scenario, LCTs would have to change their
     UTC offset every few hundred years, to avoid day becoming night
     in LCTs.

My views:

  a) is perfectly fine (perhaps not ideal, but certainly workable)

  b) is utterly unrealistic and therefore simply a dishonest proposal
     (UTC is so popular today in computing primarily because it is
     *free* of leap hours)

  c) I could live with that one, but what worries me is that
     it will create a long-term mess in a few millennia, when
     |UTC-LCT| > 1 day. I am annoyed that this long-term mess and solutions
     around it are not even being discussed. (My hope would have rested
     on resolving the |UTC-LCT| > 1 day problem by inserting leap
     days into the LCTs every few thousand years as necessary, to keep
     |UTC-LCT| < 36 hours this way, and that these leap days in LCTs could
     perhaps be the same that may be necessary anyway every few
     millennia to fix the remaining Easter drift in the Gregorian
     calendar:
     http://www.mail-archive.com/leapsecs@rom.usno.navy.mil/msg00206.html )

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


Re: Time after Time

2005-01-23 Thread Markus Kuhn
Poul-Henning Kamp wrote on 2005-01-23 09:00 UTC:
 any leap
 hours that prevented this would, if ever implemented, be even more
 traumatic than leap seconds are now.

 they already happen here twice a year, and by now even
 Microsoft has gotten it right.

OBJECTION, your Time Lords!

UTC currently certainly has *no* two 1-h leaps every year. What the
witness tries here is merely a poor attempt to confuse the jury. He
muddles the distinction between local civilian time, which we all know
is entirely subject to our politicians' deep-seated desires to manipulate
us into getting out of bed earlier in summer, and UTC, which is what all
modern computers use internally for time keeping today, below the user
interface, where a 1-h leap is entirely unprecedented and uncalled for.

[By the way, and for the record, may I remind the jury that the quoted
Microsoft *is* actually the one large operating-system vendor who has
still not quite gotten it right, as all Windows variants still
insist on keeping the PC BIOS clock on LCT instead of UTC.
Rebooting during the repeat hour after DST *will* corrupt your PC's
clock. Gory details: http://www.cl.cam.ac.uk/~mgk25/mswish/ut-rtc.html ]

 In addition to being historically unprecedented, such a move would be
 illegal in the United States and some other countries, which have
 laws explicitly defining their time zones based on solar mean time,
 unless such laws were changed.

 The laws, wisely, do not say how close to solar mean time, and parts
 of USA already have offsets close to or exceeding one hour anyway.

As Ron Beard said wisely in his opening address in Torino, laws can be
changed fairly easily, and this discussion should certainly not be about
reinterpreting *past* legislation. Instead, it should be entirely about
making a scientific, technical, and practical recommendation for
*future* legislation.

If you read, as just one example to deviate a bit from the overwhelming
US/UK-centrism of this legal argument, the relevant German legislation,

  http://www.cl.cam.ac.uk/~mgk25/time/zeitgesetz.en.html

then you will find that it consists at the moment simply of a pretty
exact technical description of UTC. In other words, it follows exactly
the relevant ITU recommendation! If the ITU recommendation were changed,
for a good cause and with wide international consensus, I have little
doubt that the German parliament and pretty much every other parliament
would be sympathetic and update the national legislation accordingly.
German laws are already updated almost each time the BIPM revises some
aspect of the SI. Countries update their national radio interference and
spectrum management legislation regularly based on the international
consensus that is being negotiated within the ITU. The US and UK are
actually no different from that, except that the subtle differences
between GMT and UTC have escaped political attention in these two
countries so far, and as a result, they still have a technically rather
vague definition of time in their law books, and in practice leave all
the details up to the Time Geeks at USNO, NPL, etc.

If you think that discussions within the ITU should feel constrained by
the legislation of individual member countries, as opposed to setting
guidelines for future legislation there, then you have simply
misunderstood the entire purpose of the process.

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


Re: ITU Meeting last year

2005-01-20 Thread Markus Kuhn
[EMAIL PROTECTED] wrote on 2005-01-19 20:19 UTC:
 A resolution was proposed to redefine UTC by replacing leap seconds by leap
 hours, effective at a specific date which I believe was something like 2020.

Thanks for the update!

Did the proposed resolution contain any detailed political provisions
that specify who exactly would be in charge of declaring, in about six
centuries' time, when exactly the first UTC leap hour should take place?

Will IERS send out, twice a year, bulletins for the next *600 years*,
announcing just that UTC will continue as usual for the next 6 months?
Not the most interesting mailing list to be on ...

And when the day comes, will people still recognize the authority of
IERS and ITU in such matters? Keep in mind that the names, identities,
and structures of these institutions will likely have changed several
times by then. Also keep in mind that any living memory of the last UTC
leap will then have been lost over twenty generations earlier. The
subject won't get any less obscure by making the event a 3600x more rare
occasion.

If this proposal gets accepted, then someone will have to shoulder the
burden and take responsibility for a gigantic disruption in the
global^Wsolar IT infrastructure sometime around 2600. I believe the
worry about Y2K was nothing in comparison to the troubles caused by a
UTC leap hour. We certainly couldn't insert a leap hour into UTC today.

In my eyes, a UTC leap hour is an unrealistic fantasy.

Judging from how long it took to settle the last adjusting disruption of
that scope (the skipping of 10 calendar days as part of the Gregorian
calendar reform), I would expect the UTC leap hour to become either very
messy, or to never happen at all. Who will be the equivalent of Pope
Gregory XIII in about 2600, and where would this person get the authority
from to break thoroughly what was meant to be an interrupt-free computer
time scale? Even the, at the time, almighty Catholic Church wasn't able
to implement the Gregorian transition smoothly by simply decreeing it.

Do we rely on some dictator vastly more powerful than a 16th-century
pope to be around near the years 2600, 3100, 3500, 3800, 4100, 4300,
etc. to get the then necessary UTC leap hour implemented?

Remember that UTC is used today widely in computers first of all because
it *lacks* the very troublesome DST leap hours of civilian time zones.
Most of the existing and proposed workarounds for leap seconds (e.g.,
smoothing out the phase jump by a small temporary frequency shift) are
entirely impractical for leap hours.

Please shoot down this leap-hour idea. The problem is not solved by
replacing frequent tiny disruptions with rare catastrophic ones. It is
hardly ethical to first accept that a regular correction is necessary,
but then to sweep it under the carpet for centuries, expecting the
resulting mess to be sorted out by our descendants two dozen generations
later on.

Leap hours are 3600 times more disruptive than leap seconds!

If ITU wants to turn UTC into an interrupt-free physical time scale
decoupled from the rotation of the Earth, then it should say so
honestly, by defining that UTC will *never* ever leap in any way,
neither by a second, nor by an hour.

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


Re: ITU Meeting last year

2005-01-20 Thread Markus Kuhn
Clive D.W. Feather wrote on 2005-01-20 12:34 UTC:
  A resolution was proposed to redefine UTC by replacing leap seconds by leap
  hours, effective at a specific date which I believe was something like 2020.

 I may be wrong here, but I thought the leap hour idea did *not* insert a
 discontinuity into UTC.

I think the phrase "to redefine UTC by replacing leap seconds by leap
hours" can only mean going from

  |UTC - UT1| < 1 s

to something like

  |UTC - UT1| < 1 h

(or some other finite |UTC - UT1| bound like that).

That was certainly the idea of the BIPM proposal presented at the Torino
meeting.

 Rather, in 2600 (or whenever it is), all civil
 administrations would move their local-UTC offset forward by one hour,
 in many cases by failing to implement the summer-to-winter step back.

Such a proposal would be called "to redefine UTC by eliminating future
leaps" (i.e., by establishing a fixed offset between UTC and TAI). It
seems perfectly practical, at least as long as |UTC - UT1| < 24 h
(i.e., for the next 5000 years).

What local governments with regional civilian time zones do is outside
the influence of the ITU. But if leap seconds were eliminated from UTC
and a fixed TAI-UTC offset defined instead, then what you describe above
is indeed what I would expect to happen with most of them. Unless we
give up the notion of local time zones entirely, there would be a clear
need to keep them locked to UT1 + offset to within an hour or so.

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


Re: TAI-UT1 prediction

2005-01-20 Thread Markus Kuhn
Tom Van Baak wrote on 2005-01-20 17:33 UTC:
 No one can know for sure but I was wondering if
 there is a consensus on when the first leap hour
 would occur?

A good table summary of some projections is in

  http://www.ucolick.org/~sla/leapsecs/dutc.html#dutctable

and other discussions are on

  http://www.ien.it/luc/cesio/itu/ITU.shtml

and there in particular in

  Prediction of Universal Time and LOD Variation - D. Gambis and
  C. Bizouard (IERS)
  http://www.ien.it/luc/cesio/itu/gambis.pdf

 Even to an order of magnitude? I
 ask because the above document draft says at
 least 500 years while others here cite numbers
 like 600 years, or 5000 years.

5000 years until the next leap hour sounds like someone got some very
basic maths wrong (by then, a whole leap day would be due); the other
two figures sound feasible. Perhaps there is a confusion about the
rotation of the earth (the UT1 clock frequency) slowing down roughly
linearly, and the accumulation of the phase difference therefore being
(after integrating) an essentially quadratic function?
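
A back-of-envelope sketch of that quadratic accumulation (the excess and
slope values below are illustrative assumptions, not measured data):

  def accumulated_offset_s(years, excess_ms=1.0, slope_ms_per_cy=1.7):
      # integrate a linearly growing excess length-of-day:
      # offset(T) = excess*T + 0.5*slope*T^2, converted to seconds
      days = years * 365.25
      slope_per_day = slope_ms_per_cy / 36525    # ms/day per day
      return (excess_ms * days + 0.5 * slope_per_day * days**2) / 1000

  print(accumulated_offset_s(600))   # roughly 1300 s with these toy numbers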

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


ITU-R SRG7A Turin colloquium proceedings available online

2004-02-24 Thread Markus Kuhn
The proceedings and final report of the ITU-R SRG7A Colloquium on the
UTC Timescale, Turin, Italy, 28-29 May 2003, have now appeared online at
the IEN web site:

  http://www.ien.it/luc/cesio/itu/ITU.shtml

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: GPS versus Galileo

2004-02-15 Thread Markus Kuhn
Steve Allen wrote on 2004-02-14 21:53 UTC:
 Or maybe Galileo will do its signal format right, and allow at least
 16 bits in the field that gives the difference between TAI and UTC.
 That would last for at least 2800 years, which is plenty of foresight.

 24 bits wouldn't hurt, and would last for at least 44000 years, by which
 date mean solar time would need one leap second per day.  Presumably
 by that time humanity will have come up with a better idea.

Modern data formats are a bit more sophisticated than that. Designers
today try to avoid fixed-width fields where possible. For example, even
if you use the old ASN.1 BER syntax [1], which has been widely used in
computer communication protocols since the mid 1980s, an integer is
automatically encoded as a variable-length sequence of bytes, and in
each byte, 7 bits contribute to the number while the most-significant
bit signals whether there is another byte following.

So you have the three-byte sequence

  1DDDDDDD, 1DDDDDDD, 0DDDDDDD

to encode the signed 21-bit value DDDDDDD DDDDDDD DDDDDDD
(-2^20..2^20-1). (BTW, what ASN.1 BER actually does is to prefix any
integer value with a length indicator that is encoded in the way above.)
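
A concrete sketch of that continuation-bit scheme in Python (my
illustration of the general idea, not of the full ASN.1 BER rules):

  def encode_b128(value, nbits=21):
      # 7 data bits per byte, most significant group first; the top
      # bit of each byte flags that another byte follows
      value &= (1 << nbits) - 1                 # two's complement
      groups = [(value >> s) & 0x7F for s in range(0, nbits, 7)][::-1]
      return bytes(0x80 | g for g in groups[:-1]) + bytes([groups[-1]])

  def decode_b128(data, nbits=21):
      value = 0
      for byte in data:
          value = (value << 7) | (byte & 0x7F)
          if not byte & 0x80:
              break
      if value >= 1 << (nbits - 1):             # reinterpret as signed
          value -= 1 << nbits
      return value

  assert decode_b128(encode_b128(-37)) == -37   # encodes to 3 bytes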

The GPS signal format has been virtually unchanged since prototype
experiments in the early 1970s, when microprocessors had just become
available [2]. Galileo will have a higher data rate than GPS and the
protocol format designers can comfortably assume that a 32-bit RISC
microcontroller running at 50 MHz clock frequency is the least that any
Galileo receiver will have on offer; the equivalent of an early 1990s
desktop workstation, which you find today in any lowest-cost mobile
phone. The use of variable-length number formats adds hardly any cost
and leaves it at the discretion of the operator to fine-tune later with
what exact precision and range to broadcast data.

Markus


[1] ISO/IEC 8825, Information technology -- ASN.1 encoding rules.

[2] B.W. Parkinson and J.J. Spilker Jr.: Global Positioning System:
Theory and Applications -- Volume I, Progress in Astronautics and
Aeronautics, Volume 163, American Institute of Aeronautics and
Astronautics, Washington DC, 1996.

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


Re: GPS versus Galileo

2004-02-05 Thread Markus Kuhn
Steve Allen wrote on 2003-12-23 19:46 UTC:
 Of course no agreement can stop some entity from flying a jamming
 rig for both systems over particular theatres of interest.

Robustness against U.S. navigation warfare was one of the main funding
rationales for Galileo. The U.S. is unlikely to jam its own
military (M-code) navigation signal. As I understood it, the original
plan for Galileo was to put its own Public Regulated Signal (PRS) into
the spectrum in a way such that the U.S. cannot jam it without jamming
their own M-code as well. This improves robustness against adverse US
DoD capabilities and also simplifies tremendously the design of
receivers that can listen to both GPS and Galileo (which I expect will
be all new receivers as soon as Galileo is up and running).

Status of Galileo Frequency and Signal Design:
http://europa.eu.int/comm/dgs/energy_transport/galileo/doc/galileo_stf_ion2002.pdf
http://www.gpsworld.com/gpsworld/article/articleDetail.jsp?id=61244

Status of new GPS M-code design:
http://www.mitre.org/work/tech_papers/tech_papers_00/betz_overview/betz_overview.pdf

DoD versus EU battle:
http://www.globalsecurity.org/org/news/2002/020514-gps.htm

From a recent local press review:

-
EU and US fail to agree on interoperability of satellite navigation systems

Discussions between the European Union and the US in Washington
concerning the interoperability of the EU's proposed Galileo satellite
navigation system and America's existing GPS service have ended without
agreement, according to reports in the New Scientist.

The sticking point is said to be the standard signal that the EU would
like to use for Galileo. Europe's preferred option, known as binary
offset carrier (BOC) 1.5, 1.5, would give users of Galileo the most
accurate information possible, but the US argues that this would
interfere with the GPS system's proposed new encrypted military signal.

The US intends to introduce the new signal, known as the M-code, in
2012. During a military conflict, the US would attempt to jam all
civilian satellite systems so as not to allow enemies to use satellite
navigation. But jamming Galileo's BOC 1.5, 1.5 signal, argue US
officials, would also disrupt its own M-code.

The US proposes that Galileo uses an alternative signal, such as BOC
1.1, which does not overlap the M-code signal, but the EU is concerned
that this will result in a less accurate system for commercial users of
Galileo.

Officials from the EU and the US will meet later in February to try to
resolve the issue.

For further information on Galileo, please consult the following web
address:

http://europa.eu.int/comm/dgs/energy_transport/galileo/index_en.htm
-

The use of the word "interoperability" for the feature that the operator
of one system can jam the other one without affecting its own has a neat
Orwellian ring to it.

From what I hear behind the scenes, plans for Galileo are now to make
the transmitter and receiver designs highly flexible, such that code
rates, spreading sequences, BOC offsets, and perhaps even carrier center
frequencies can be reprogrammed smoothly on-the-fly while the system is
in operation, to be able to adapt to adverse actions and the current
political climate. Apart from moving the center frequency around
significantly (which clearly affects the design of the RF hardware very
much on each end), most of the remaining DSP and PLL parameters can
today quite easily be made reconfigurable in software at little extra
cost.

We may consider our deliberations on leap second rather abstract and
academic here, but outside the ivory tower, the reliable distribution of
nanosecond-accuracy timing signals has meanwhile become not only a
military concern, but also the topic of a serious turf fight between the
Pentagon and the EU Commission.

It seems the Temporal Cold War has begun ...

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


Re: name the equinox contest on now

2004-01-29 Thread Markus Kuhn
Steve Allen wrote on 2004-01-29 00:13 UTC:
 While the new paradigm of celestial coordinates is rigorously
 defined in terms of mathematics, it is lacking in a common
 terminology. [...]

 http://syrte.obspm.fr/iauWGnfa/

 [...] the fact is that it is
 difficult to make sense of the proposals without familiarity
 with the past 20 years of literature on coordinate systems.

As a layperson with a good background in mathematics and physics and no
fear of dealing with exact definitions relating to multiple frames of
reference, I tried a couple of times to understand from available online
sources and almanac commentaries the state of the art in astronomic and
terrestrial coordinate systems. I failed each time miserably, thanks to
the -- in my view -- rather impenetrable use of obscure terminology and
circular definitions.

If someone knows of an introductory tutorial that describes the exact
definition of modern celestial and terrestrial coordinate systems,
without assuming knowledge of any terms other than those of linear
algebra and good high-school-level astronomy, I would be most grateful
for a pointer.

If no such thing exists, then perhaps one of the gurus in the field
might be interested in writing such a tutorial for non-astronomers?
Something comparable to McCarthy's "Astronomical Time" in Proc. IEEE
79(7):915-920?

Writing such a self-contained tutorial that presents the modern
definitions of earth and space coordinate systems independent of the
past 20 years of literature might also be a valuable exercise towards
coming up with a neat and clean terminology that is free of the
accumulated historic ballast that the current terminology in this field
seems to suffer from.

Perhaps the modern definition of earth and space coordinate systems is
now even ripe for being written up as an ISO standard? The editorial
guidelines of the International Organization for Standardization strongly
encourage the careful authoring of entirely self-contained
specifications that are practically free of undefined or circular
terminology. So this might be another very useful exercise towards
making this work more accessible and therefore usable by a much larger
community.

Just a thought ...

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


More media coverage: Der Spiegel 2003-08-25

2003-08-27 Thread Markus Kuhn
Last Monday's (2003-08-25) edition of the German news magazine Der
Spiegel (Nr. 35/2003) has an article by Manfred Dworschak on ITU SRG 7A
and the leap second debate on pages 94-95.

http://www.spiegel.de/spiegel/0,1518,262918,00.html

A bibliography of previous media coverage is growing on my leap-seconds
link farm:

  http://www.cl.cam.ac.uk/~mgk25/time/leap/

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: mining for data about time

2003-08-15 Thread Markus Kuhn
Steve Allen wrote on 2003-08-15 05:52 UTC:
 Is anyone looking into providing these data as XML?

What benefits would a monster such as XML add here, apart from adding a
rather baroque syntax to otherwise fairly easy-to-read-and-parse flat
table data?

Instead of "as XML", you probably mean "in a well-specified file
format". There are many ways to specify file formats, and XML is
arguably one of the uglier and more difficult-to-use choices on the
market, especially if there is nothing structure-wise in your data that
warrants the use of anything more complex than a regular-expression
grammar (the simplest level of the Chomsky hierarchy).

[Or to rephrase the late Roger Needham: If you think XML is the solution to
your problem, you probably have neither understood XML, nor your problem.]

If someone wants to specify a nicer EOP file format, please use some
very simple single-record-per-ASCII-line syntax (e.g., comma separated
values, etc.) that can be parsed trivially with a simple single-line
Perl or Awk regular expression.
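
For illustration, here is a minimal C sketch of how trivially such a
format could be consumed. The four-column CSV layout (MJD, pole x,
pole y, UT1-UTC) is an assumption made up for this example, not any
existing EOP format:

  #include <stdio.h>

  int main(void)
  {
      char line[256];
      double mjd, px, py, dut1;

      while (fgets(line, sizeof line, stdin)) {
          if (line[0] == '#')          /* skip comment lines */
              continue;
          /* one record per ASCII line, four comma-separated fields */
          if (sscanf(line, "%lf,%lf,%lf,%lf", &mjd, &px, &py, &dut1) == 4)
              printf("MJD %.0f: UT1-UTC = %+.6f s\n", mjd, dut1);
      }
      return 0;
  }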

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


Re: preferred months for leap seconds

2003-08-09 Thread Markus Kuhn
Steve Allen wrote on 2003-08-09 05:46 UTC:
 Also note the interesting content of this message from 1988
 http://www-mice.cs.ucl.ac.uk/multimedia/misc/tcp_ip/8801.mm.www/0022.html
 This purports to explain why June leap seconds became preferred to
 December leap seconds.

The explanation given has all the hallmarks of someone having told a
joke, and someone else not getting it and believing it was a true story.
Assuming "the French" refers to IERS, then all they have to do to
insert a leap second is to send out a Bulletin C, and that happens
already many months (typically six) in advance. Therefore, nobody at
IERS has any reason to be missing their New Year's Eve party year after
year in order to insert a leap second, because all their work has
already been done half a year earlier.

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


German Time Act of 1978, now available in English

2003-07-30 Thread Markus Kuhn
Having observed that the legal discussions here center so exclusively on
the time legislations of merely two countries (GB, US), I just felt the
sudden urge to make available a proper English translation of the German
Time Act of 1978 in its current revised form:

  http://www.cl.cam.ac.uk/~mgk25/time/zeitgesetz.en.html

As you can see, German law provides a fairly detailed independent
definition of what Coordinated Universal Time is (namely UTC-GMST at
1972-01-01 was +0.04 s, UTC ticks based on the SI second at sea level,
|UTC-GMST| < 1 s). It delegates the remaining technical details (how
does one implement the SI second, when exactly shall leap seconds
happen, how is time published, etc.) to the time experts at PTB. Neither
ITU nor IERS is mentioned there. German law considers it sufficient if
the relevant experts at the competent federal agency simply know about
such things and discuss the remaining technicalities with their
international counterparts via the usual scientific communication
channels.

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


Re: DRM broadcast disrupted by leap seconds

2003-07-19 Thread Markus Kuhn
Ed Davies wrote on 2003-07-19 09:15 UTC:
  When the scheduled transmission time
  arrives for a packet, it is handed with high timing accuracy to the
  analog-to-digital converter,

 I assume you mean digital-to-analog.

Yes, sorry for the typo.

 This also raises the point that because the transmission is delayed a few
 seconds for buffering there is presumably a need for the studio to work
 in the future by a few seconds if time signals are to be transmitted
 correctly.

All modern digital broadcast transmission systems introduce significant
delays due to compression and coding. It is therefore common practice
today that the studio clocks run a few seconds (say T = 10 s) early, and
then the signal is delayed by digital buffers between the studio and the
various transmitter chains for T minus the respective transmission and
coding delay. This way, both analog terrestrial and digital satellite
transmissions end up with rather synchronous audio and video.
Otherwise, your neighbor would already cheer in front of his analogue TV
set while you still hear on DRM the live report about the football
player approaching the goal.
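
As a minimal sketch of the compensation arithmetic just described: each
chain inserts a buffer of T minus its own delay, so all outputs leave
their transmitters exactly T behind the studio clock, i.e. synchronously
with each other. The per-chain coding delays below are made-up
illustrative numbers:

  #include <stdio.h>

  int main(void)
  {
      const double T = 10.0;        /* studio clock runs 10 s early */
      const struct { const char *chain; double coding_delay; } chains[] = {
          { "analogue terrestrial", 0.1 },
          { "digital satellite",    4.2 },
          { "DRM",                  6.5 },
      };

      for (int i = 0; i < 3; i++)   /* buffer = T - coding delay */
          printf("%-20s buffer %.1f s\n",
                 chains[i].chain, T - chains[i].coding_delay);
      return 0;
  }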

There are a couple of problems, though, with "delayed live":

  - One is with the BBC. They insist for nostalgic reasons on transmitting
    the Big Ben sound live, which cannot be run 10 seconds early in
    sync with the studio clock.

  - Another is live telephone conversations with untrained members of the
    radio audience who run a loud receiver next to the phone. The delay
    eliminates the risk of feedback whistle, but it now adds echo and
    human confusion. The former can be tackled with DSP techniques; the
    latter is more tricky.

  - The third problem is that in the present generation of digital
radio receivers (DAB, DRM, WorldSpace, etc.), the authors of the
spec neglected to standardize the exact buffer delay in the receiver.

Mostly for the last reason, the time beeps from digital receivers still
have to be used with great caution today (or are even left out by some
stations, which prefer to send none rather than wrong ones).

  Either having a commonly used standard time without leap seconds (TI),
  or having TAI widely supported in clocks and APIs would have solved the
  problem.

 Absolutely - and the second suggested solution doesn't need to take 20
 years to be implemented.

The engineer involved in this project to whom I talked was actually very
familiar with my API proposal on

  http://www.cl.cam.ac.uk/~mgk25/time/c/

and agreed that the problem would never have come up if that had been
widely supported by Linux, NTP drivers, and GPS receiver manufacturers.
But we are not there yet.

The current discussion on removing leap seconds will no doubt also delay
efforts to make TAI more widely available, because what is the point in
improving the implementations if the spec might soon change
fundamentally?

I don't care much whether we move from UTC to TI, because both
approaches have comparable advantages and drawbacks, which we understand
today probably as well as we ever will. But it would be good to make a
decision sooner rather than later, because the uncertainty that the
discussion generates about how to design new systems developed today
with regard to leap seconds can be far more of a hassle. It would be
unfortunate if at the end of this discussion we change nothing and all
we have accomplished is to delay setting up mechanisms to deal with leap
seconds properly. I personally certainly do not feel motivated to press
ahead with proposals for handling leap seconds better if there is a
real chance that there might soon be no more of them.

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


DRM broadcast disrupted by leap seconds

2003-07-18 Thread Markus Kuhn
Either having a commonly used standard time without leap seconds (TI),
or having TAI widely supported in clocks and APIs would have solved the
problem.

Markus

--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain


Re: religious concerns

2003-07-12 Thread Markus Kuhn
Steve Allen wrote on 2003-07-12 00:56 UTC:
 On Fri 2003-07-11T12:11:10 -0700, Rob Seaman hath writ:
  It may be that not a single religious sect anywhere on
  the globe will care about the secularization (pun intended) of the
  world's clocks.

 Profuse excuses begged for entering pedant mode, but
 I offer these folks as a likely counterexample

 http://www.sabbatarian.com/Dateline.html

It seems the true quarrel of this particular community is more with the
Earth not being flat any more (as it obviously was when the Old
Testament was written) ...

http://www.flat-earth.org/
http://www.cca.org/woc/felfat/

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


Some personal notes on the Torino meeting

2003-06-03 Thread Markus Kuhn
[...] the agenda moved to writing up a draft conclusion of the colloquium,
which was then to be refined and phrased more carefully by the
invitation-only SRG meeting on Friday.

Ron Beard, with William Klepczynski, drafted in PowerPoint on the
presentation laptop a list of objectives and conclusions for the
meeting. They started out with a few very pro-change statements, which
quickly attracted criticism from the audience as perhaps not being an
entirely adequate reflection of the discussion at the colloquium.
Throughout the subsequent discussion, I had the impression that they
were rather happy to include in the draft pro-change arguments and
statements proposed by participants, but were very reluctant to
include any of the more sceptical/conservative statements that were, as
far as I could tell, proposed equally often. In the following coffee
break, a number of participants remarked on their impression that the
organizers of the colloquium had probably already made up their mind on
the death of UTC and would push this through ITU in any case.

The concluding session was interrupted by a tour through IEN's time and
frequency labs, where we saw a caesium-fountain prototype, the Italian
master clock, as well as IEN's work on setting up a demonstration ground
segment for the Galileo project that at present uses the existing GPS
space segment.

During the tour, a number of other SRG members drafted a revised
conclusion that started with the much more cautious statement that there
was no overwhelming consensus on the need for replacing UTC with a
uniform time scale, with which I felt much happier. When the meeting
reconvened, some minor changes were made to the included suggestion to
switch from UTC to a leap-second-free TI at some point. For instance,
Patrick Wallace insisted on putting 2022 in as a suggested deadline, to
allow plenty of time for existing systems to go out of service before
any changes take effect.

The SRG was going to meet on Friday morning to revise and flesh out this
concluding recommendation. I left Italy on Thursday evening, however, so I
hope we will see the final result here sometime this week or next.

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__


Re: NASA GMT vs UTC

2003-02-16 Thread Markus Kuhn
Tom Van Baak wrote on 2003-02-16 02:25 UTC:
 http://www.spaceflight.nasa.gov/shuttle/investigation/timeline/index.html

 I've been reading a lot of NASA pages on the shuttle
 recently and was reminded once again that NASA
 seems to use GMT instead of UTC to label their
 timelines. Do any of you know why?

Probably mostly for US domestic media convenience. :-( The average US
journalist might be slightly more likely to have heard of GMT than of
Universal Time.

[Similarly, these pages use Flintstone units everywhere (Fahrenheit
instead of Celsius for temperatures; feet instead of meters; etc.)]

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__



Re: What problems do leap seconds *really* create?

2003-01-30 Thread Markus Kuhn
John Cowan wrote on 2003-01-30 13:01 UTC:
 Markus Kuhn scripsit:

  Unix timestamps have always been meant to be an encoding of a
  best-effort approximation of UTC.

 Unix is in fact older than UTC.

This is getting slightly off-topic, but Unix evolved slowly and was
reimplemented several times during the first half of the 1970s, and the
early versions probably didn't have a clock. It didn't exist in practice
outside Bell Labs before 1976. Gory details are on:

  http://www.bell-labs.com/history/unix/firstport.html

  They have always counted the non-leap seconds since 1970-01-01.

 The Posix interpretation is only a few years old, and a break with Unix
 history.  Before that, time_t ticked SI seconds since the epoch (i.e.
 1970-01-01:00:00:00 GMT = 1970-01-01:00:00:10 TAI).

Sorry, you are just making this up. Unix machines ticked the seconds of
their local oscillator from boot to shutdown. Local oscillator seconds
differ from SI seconds by typically +/- 10^-5 s or worse. Unix time had
multiple or fractional inserted and deleted leap seconds whenever the
administrator brutally readjusted the local clock closer to UTC using
the settimeofday(2) system call. Only much later, in the 1980s, did the
Berkeley Unix version add the adjtime(2) system call to allow smooth
manual adjustment towards UTC by changing the length of the Unix second
relative to the local oscillator second by, IIRC, up to 1%. The entire
question of the relation of Unix time to UTC and TAI only came up at
roughly the same time as POSIX in the late 1980s, when people started
to get interested in time synchronization over LANs and (in Europe)
DCF77 radio receivers.
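
For reference, here is a minimal sketch of that smooth-adjustment
interface, adjtime(2), which BSD-derived systems and glibc still provide
today (the call is privileged, so expect it to fail as an ordinary
user):

  #include <stdio.h>
  #include <sys/time.h>

  int main(void)
  {
      /* ask the kernel to slew the clock forward by 0.25 s gradually,
         by skewing the tick rate rather than stepping the time */
      struct timeval delta = { .tv_sec = 0, .tv_usec = 250000 };

      if (adjtime(&delta, NULL) != 0)
          perror("adjtime");
      return 0;
  }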

 The time(2) man
 page in the Sixth Edition (unchanged in the Seventh) of Research
 Unix says:

 .I Time
 returns the time since 00:00:00 GMT, Jan. 1, 1970, measured
 in seconds.

Today we distinguish between civilian (UTC non-leap) seconds and
physical (SI) seconds. The authors of that manual very obviously didn't
make that distinction and you should not misrepresent them by claiming
that they did.

 IOW, it is a count of elapsed time since a certain moment, measured in
 SI seconds, and not an encoding of anything.

In practice, the source code shows that time_t values are converted to
UTC clock displays without a leap second table; therefore they are
clearly just meant to be an encoding of UTC clock display values and
nothing else. Implementations that do anything else are purely
experimental, not widely used, and can cause serious disruption in
practice.
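
To make the point concrete, a minimal sketch of what that encoding
means in practice: under the POSIX rule, the UTC time-of-day display
follows from a time_t by fixed arithmetic alone, with no leap second
table in sight (the date part, which needs Gregorian calendar
arithmetic, is left to gmtime() here):

  #include <stdio.h>
  #include <time.h>

  int main(void)
  {
      time_t t = time(NULL);

      /* every POSIX day has exactly 86400 counted seconds */
      long s = (long)(t % 86400);
      printf("%02ld:%02ld:%02ld UTC\n", s / 3600, (s / 60) % 60, s % 60);

      /* gmtime() performs exactly this kind of table-free arithmetic */
      struct tm *tm = gmtime(&t);
      printf("%02d:%02d:%02d UTC (gmtime)\n",
             tm->tm_hour, tm->tm_min, tm->tm_sec);
      return 0;
  }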

 Even today, you can install the ADO (and probably GNU) packages in
 either of two ways:  posix, in which there are no leap seconds and
 time_t's get the POSIX interpretation you reference; and right, in
 which there are leap seconds and time_t is a count of seconds.
 Try setting your TZ to right/whatever and see what you get.

The so-called "right" mode in Olson's timezone library, which makes
time_t an encoding of TAI+10s instead of UTC, as well as Dan Bernstein's
libtai, are both commonly regarded as experimental implementations and
not recommended for general use. I don't know anyone who uses TAI+10s on
Unix in practice, and it violates various standards. The reasons why
it shouldn't be used have been discussed in great detail on Olson's tz
mailing list. You have completely misunderstood Unix time if you think
that the Olson "right" configuration has anything to do with it.

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__



Re: Telescope pointing and UTC

2003-01-30 Thread Markus Kuhn
Steve Allen wrote on 2003-01-30 20:17 UTC:
 The specifications for the automatic telescope call for an object to
 appear within 10 arcsec of the field center after a slew.  This is
 congruent with what the telescope engineers can do with the flexure
 and hysteresis, but it obviously requires UT1 good to about 0.66 s for
 targets on the equator.  Therefore we do need DUT1, but not to more
 accuracy than it is provided.  Higher cost telescopes may be able to
 demand tighter specifications.

In addition, if you have a readily aligned telescope, DUT1 to 100 ms
should be more than accurate enough to locate a bright guide star. Then
let the system make a quick CCD exposure of it and derive DUT1 with the
needed precision from the coordinates of the brightest peak in this
image.
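
(The 0.66 s figure quoted above is easy to verify with a back-of-envelope
check, assuming the sidereal rotation rate and a target on the celestial
equator:

  #include <stdio.h>

  int main(void)
  {
      /* Earth rotates 360 deg per sidereal day of ~86164 s */
      double rate = 360.0 * 3600.0 / 86164.1;   /* ~15.04 arcsec/s */
      printf("10 arcsec / %.2f arcsec/s = %.2f s\n", rate, 10.0 / rate);
      return 0;
  }

which prints 0.66 s.)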

Even amateur equipment with a CCD tracker does all of that today fully
automatically, including figuring out the telescope's alignment:

  http://www.meade.com/catalog/lx/8_10_lx200gps.html

In the various surveys among professional observatories that have been
reported here, have the manufacturers of microprocessor-controlled
amateur telescopes (which today typically come with integrated GPS
receivers) been asked what |UT1-UTC| > 0.9 s would mean for the many
thousands of systems that they have already sold?

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__



Re: Leap seconds in the European 50.0 Hz power grid

2003-01-30 Thread Markus Kuhn
Steve Allen wrote on 2003-01-30 20:58 UTC:
 On Thu 2003-01-30T12:54:09 +, Markus Kuhn hath writ:
  The UCPTE specification says that the grid phase vectors have to rotate on
  long-term average exactly 50 * 60 * 60 * 24 times per UTC day.

 Obviously the grid frequency shift after leap seconds is annoying, and
 it is undoubtedly one of the reasons contributing to the notion of
 stopping leap seconds.

I doubt that this is really the case. UCPTE is happy if it can guarantee
that the grid time remains within 20 seconds of UTC. Leap seconds are
only a relatively minor reason for the power grid clock to deviate from
UTC temporarily. Remember that in a national or continental distribution
grid, power is transferred whenever there are phase differences between
parts of the grid. So if demand rises in one area, it will fall behind
in phase relative to the others and thereby slowly pull the frequency
of the entire grid down, until control loops detect this and compensate
for the deviation from the target frequency by pulling rods a few
centimeters out of nuclear reactors all across the continent. First you
keep the short-term frequency constant, then you keep the voltage
constant, then you keep the power transfers in line with the contracts,
and only after you have fulfilled all these targets do you use what
degrees of freedom are left in the control space to keep the grid clock
synchronized, i.e. to keep the long-term frequency accurate.

 But the question arises as to why the spec
 can't easily be changed to indicate that it is per TAI day.

As long as UTC is as it is currently, you don't want to do this:

Firstly, there are zillions of clocks that use the power grid as their
reference oscillator, and you want them to run locked roughly to UTC,
because they are supposed to display local civilian time and not
something linked to TAI.

Secondly, in Europe, exact UTC-based civilian time was available for a
long time via LF transmitters such as DCF77, MSF, HBG, etc., not to
forget BBC-style beeps before news broadcasts and telephone speaking
clocks. TAI on the other hand has only relatively recently become
reasonably easily available automatically through GPS and NTP extensions
and would otherwise have to be manually looked up from tables. So TAI
was just far less practical, and in addition simply unknown to most
engineers.

My point was that leap seconds are not a problem in the power grid and
for power-grid controlled clocks.

About power-grid controlled clocks:

Around 1990, West Berlin was temporarily connected to the then East
European grid, into which East Germany was integrated and which did not
provide a grid time kept long-term aligned with UTC. Customers in West
Berlin started to complain that their clocks suddenly needed to be
adjusted regularly. If the average frequency for a week was only
49.95 Hz, your alarm clock would be 10 minutes late by the end of the
week, which is definitely noticeable, especially if the same clock had
never before needed any adjustment between power outages. The problem
persisted until East Germany (and now also its neighbors) was integrated
into the UCPTE.
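
The 10-minute figure is simply the accumulated cycle deficit; a quick
check, using the nominal 50 Hz and the 49.95 Hz weekly average from
above:

  #include <stdio.h>

  int main(void)
  {
      double week = 7.0 * 24 * 3600;               /* 604800 s */
      double lag = week * (50.0 - 49.95) / 50.0;   /* ~604.8 s */
      printf("lag after one week: %.0f s (~%.0f min)\n", lag, lag / 60);
      return 0;
  }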

 My power company cannot supply me with a reliability of 0.9997, so I can
 never see leap seconds from my household clocks.  I don't really
 believe that other power companies achieve it either

Unfortunately, I can't confirm that my supplier here in Cambridge can
either. However, in the urban centers of Bavaria where I grew up, power
outages were certainly far less frequent than leap seconds. Of the few
we ever had there, most outages were announced a week in advance by mail
because of local network work. I am told that the North American
power grid does not have a particularly good reputation among
Continental power distribution engineers, so you probably shouldn't
assume that its reliability represents a high standard in international
comparison. (E.g., even solar wind has been known to drive transformers
in the US/CA grid into catastrophic saturation and bring the entire grid
to a collapse, something that UCPTE regulations have prevented by
requiring the installation of capacitors that eliminate continental DC
loops.)

 So what is the value obtained by a specification like this?

Grid-powered clocks that in practice do not have to be adjusted, for
example. Note that these were around long before DCF77 and GPS receivers
became low-cost items. Even though embedded DCF77 receivers/antennas now
cost less than 15 euros and GPS receivers less than ~50-100 euros, that
still doesn't beat, cost-wise, a few 10-Mohm resistors for a voltage
divider directly from the 230-volt line to a spare input pin of a
clock microcontroller.
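
A minimal firmware sketch of that idea (the pin-reading function is a
hypothetical stub, and a real implementation would filter and debounce
the mains input):

  #include <stdint.h>

  /* hypothetical: 1 while the mains-derived input pin reads high */
  static int mains_pin_high(void) { return 0; }

  static uint32_t cycles;               /* 50 Hz cycles since midnight */

  void mains_clock_poll(void)           /* call at a few kHz */
  {
      static int was_high;
      int is_high = mains_pin_high();

      if (is_high && !was_high &&       /* one rising edge per cycle */
          ++cycles >= 50UL * 86400)     /* 4,320,000 cycles = 1 UTC day */
          cycles = 0;
      was_high = is_high;
  }

As long as the grid's long-term average frequency is locked to UTC, the
cycle count needs no correction at all.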

Plus remember the remarks above that UTC was for a long time far more
easily available than TAI in Europe. Only *very* recent power plants
have GPS receivers in the control system and could therefore use TAI as
a reference in theory, if they wanted. (My brother happens to set up one

Re: What problems do leap seconds *really* create?

2003-01-29 Thread Markus Kuhn
John Cowan wrote on 2003-01-29 17:56 UTC:
 The problem is that they are not announced much in advance, and one needs
 to keep a list of them back to 1972 which grows quadratically in size.

Is this a real problem?

Who really needs to maintain a full list of leap seconds and for what
application exactly?

Who needs to know about a leap second more than half a year in advance
but has no access to a time signal broadcasting service (the better ones
of which all carry leap second announcement information today)?

For pretty much any leap-second-aware, time-critical application that I
can think of, it seems more than sufficient to know:

  - the nearest leap second to now
  - TAI-UTC now
  - UT1-UTC now

This information is trivial to broadcast in a fixed-width data format.
(For the nitpicker: the number of bits needed to represent TAI-UTC
admittedly grows logarithmically as we move away from 1950. We know we
can live with that, as O(log t) is equivalent to O(1) for engineering
purposes.)
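
To make this concrete, here is a hypothetical fixed-width record
covering exactly these three items; the field names and widths are
illustrative assumptions, not any existing broadcast format:

  #include <stdint.h>

  struct leap_info {
      int64_t nearest_leap;    /* UTC second count of the nearest leap
                                  second to now */
      int16_t tai_minus_utc;   /* TAI-UTC now, in whole seconds */
      int16_t ut1_minus_utc;   /* UT1-UTC now, in units of 0.1 s */
  };

Twelve bytes comfortably cover any plausible values for centuries to
come.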

Markus

--
Markus Kuhn, Computer Lab, Univ of Cambridge, GB
http://www.cl.cam.ac.uk/~mgk25/ | __oo_O..O_oo__