Zefram wrote:

  ... The historical trend is towards using uniform time units. It seems curious now that when the atomic clock was invented astronomers opposed calling it a time standard. Well, it seems curious to everybody except Rob Seaman :-) ... It is much like the ancient Egyptians (IIRC) making the transition from sundials to water clocks. They had always marked out hours on sundials subtending equal angles, so the actual temporal length of the hours varied over the course of the day. When the water clock gave them an independent time reference, they were horrified that its uniform hours didn't match sundial hours. Much technical ingenuity went into mechanical modifications to water clocks to make them accurately emulate what went before. ...

Nice analogy.

Ed.
Steve Allen wrote: On Mon 2007-01-01T21:19:04 +, Ed Davies hath writ:

    Why does the "One sec at predicted intervals" line suddenly diverge in the early 2500's when the other lines seem to just be expanding in a sensible way?

  I suspect that the divergence of the one line indicates that the LOD has become long enough that 1 s can no longer keep up with the divergence using whatever predicted interval he chose. I suspect that the chosen interval was every three months, for it is in about the year 2500 that the LOD will require 4 leap seconds per year.

Yes, that makes sense. I worked out what LOD increases he'd have to be assuming for one- or six-monthly leaps and neither seemed right. I should have realised that it was in between.

Still, it's a strange assumption, given that TF.460 allows, I understand, leaps at the end of any month. Unofficially, the wording seems to be: "A positive or negative leap-second should be the last second of a UTC month, but first preference should be given to the end of December and June, and second preference to the end of March and September." Has anybody got access to a proper copy who can say whether that's right or not? If it is right then the Wikipedia article on leap seconds needs fixing.

  As for the other questions, McCarthy had been producing versions of this plot since around 1999, but the published record of them is largely in PowerPoint. Dr. Tufte has provided postmortems of both Challenger and Columbia as testaments to how little that medium conveys.

Indeed, this slide hasn't got us much closer to understanding the original problem, namely: what is the maximum error likely to be over a decade?

Ed.
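Steve's figure of four leap seconds per year around 2500 follows from simple arithmetic relating the excess length of day (LOD) to the leap-second rate. A back-of-envelope sketch (the helper functions are my own, not from the plot being discussed):

```python
# If the mean solar day exceeds 86400 SI seconds by `excess_ms`
# milliseconds, that excess accumulates into whole leap seconds over
# the course of a year.

DAYS_PER_YEAR = 365.25

def leap_seconds_per_year(excess_ms):
    """Leap seconds accumulated per year for a given LOD excess (ms/day)."""
    return DAYS_PER_YEAR * excess_ms / 1000.0

def lod_excess_for(leaps_per_year):
    """LOD excess (ms/day) implied by a given leap-second rate."""
    return leaps_per_year * 1000.0 / DAYS_PER_YEAR

# Four leap seconds per year implies a day roughly 11 ms longer than
# 86400 SI seconds.
```

So a quarterly prediction interval stops being able to keep up once the day has lengthened by about 11 ms.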
Warner Losh wrote:

  The IERS Bulletin C is a little different than the ITU TF.460: "Leap seconds can be introduced in UTC at the end of the months of December or June, depending on the evolution of UT1-TAI. Bulletin C is mailed every six months, either to announce a time step in UTC, or to confirm that there will be no time step at the next possible date."

Unfortunately, these IERS bulletins are dreadfully badly worded and seem to assume current practice rather than fully defining what they mean. E.g., Bulletin C 32, dated 19 July 2006, http://hpiers.obspm.fr/iers/bul/bulc/bulletinc.dat says:

  NO positive leap second will be introduced at the end of December 2006.

So we still don't know officially if there was a negative leap second then, and we still don't officially know if there will be a leap second at the end of this month.

http://hpiers.obspm.fr/iers/bul/bulc/BULLETINC.GUIDE says:

  UTC is defined by the CCIR Recommendation 460-4 (1986). It differs from TAI by an integral number of seconds, in such a way that UT1-UTC stays smaller than 0.9s in absolute value. The decision to introduce a leap second in UTC to meet this condition is the responsability of the IERS. According to the CCIR Recommendation, first preference is given to the opportunities at the end of December and June, and second preference to those at the end of March and September. Since the system was introduced in 1972 only dates in June and December have been used.

Again, this is the truth but not the whole truth, as it doesn't mention the third preference opportunities at the ends of other months - but it'll be a while until those are needed. (Also, they can't spell responsibility :-)

Ed.
Rob Seaman wrote:

  ... Obviously it would take at least N years to introduce a new reporting requirement of N years in advance (well, N years minus six months).

Sorry, maybe I'm being thick but, why? Surely the IERS could announce all the leap seconds in 2007 through 2016 inclusive this week, then those for 2017 just before the end of this year, and so on. We'd have immediate 10-year scheduling.

  I suspect it would be exceptionally interesting to everyone, no matter what their opinion on our tediously familiar issues, to know how well these next seven or so leap seconds could be so predicted, scheduled and reported.

Absolutely, it would be very interesting to know. I suspect, though, that actually we (the human race) don't have enough data to really know a solid upper bound to the possible error, and that any probability distribution would really be not much more than an educated guess. Maybe a few decades of detailed study has not been enough to see the wilder swings - to eliminate the unknown unknowns, if you like.

  If the 0.9s limit were to be relaxed - how much must that be in practice? Are we arguing over a few tenths of a second coarsening of the current standard? That's a heck of a lot different than 36,000 tenths.

Maybe we can turn this question round. Suppose the decision was made to simplistically schedule a positive leap second every 18 months for the next decade; what would be the effect of the likely worst case error?

First, what could the worst case error be? Here's my guess. If it turned out that no leap seconds were required then we'd be 6 seconds out. If we actually needed one every nine months we'd be out by about 6 seconds the other way.

So the turned-around question would be: assuming we are going to relax the 0.9 seconds limit, how much of an additional problem would it be if it was increased by a factor of 10 or so, in the most likely worst case?
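The worst-case guess above can be written down directly. A sketch (the function and its interval choices are just my framing of the back-of-envelope calculation, not anything official):

```python
def schedule_error_seconds(years, scheduled_interval_months,
                           actual_interval_months):
    """Accumulated UTC-UT1-style error, in seconds, after `years`, when
    leap seconds are inserted on a fixed schedule of one every
    `scheduled_interval_months` but the Earth actually needs one every
    `actual_interval_months` (pass None for 'none needed at all')."""
    months = years * 12.0
    inserted = months / scheduled_interval_months
    if actual_interval_months is None:
        needed = 0.0
    else:
        needed = months / actual_interval_months
    return abs(inserted - needed)

# One leap every 18 months for a decade, when none were needed:
# about 6.7 s out.  If one every 9 months was actually needed:
# about 6.7 s out the other way.
```

So "about 6 seconds either way" is the right order of magnitude for the 18-month schedule.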
As Rob has pointed out recently on the list, 1 second in time equates to 15 seconds of arc in right ascension at the celestial equator for telescope pointing. Nine seconds in time is therefore 2.25 arc minutes.

For almost all amateur astronomers this error would be insignificant, as it's smaller than their field of view with a normal eyepiece but, more importantly, the telescope is usually aligned by pointing at stars anyway rather than by setting the clock at all accurately. For the professionals I'm not so sure but, for context, Hubble's coarse pointing system aims the telescope to an accuracy of about 1 arc minute before handing off control to the fine guidance sensors.

For celestial navigation on the Earth, a nine second error in time would equate to a 4.1 km error along the equator. Worth considering. My guess would be that there would be applications which would need to take account of the difference which currently don't.

Is it really likely to be a problem, though? Remember that this is not a secular error: by the end of, say, 2009 we'd be beginning to get an idea of how things are going and would be able to start feeding corrections into the following decade. So, while it would be nice to know a likely upper bound on the possible errors, is a back-of-an-envelope guess good enough?

Happy perihelion, Ed.
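The figures quoted above are easy to check. A quick sketch (the equatorial circumference of 40 075 km is my assumed value; the 4.1 km in the text presumably came from a slightly different rounding):

```python
# The Earth turns 360 degrees in about 86400 s, i.e. 15 arcseconds of
# right ascension per second of time at the celestial equator.

EQUATOR_KM = 40075.0  # assumed equatorial circumference

def clock_error_to_arcsec(dt_s):
    """Pointing error (arcsec of RA at the equator) for clock error dt_s."""
    return 15.0 * dt_s

def clock_error_to_km(dt_s):
    """Position error along the equator for clock error dt_s."""
    return dt_s / 86400.0 * EQUATOR_KM

# 9 s of time -> 135 arcsec = 2.25 arcmin, and roughly 4.2 km at the
# equator.
```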
Poul-Henning Kamp wrote:

    If you have a subtle point, I'd love to hear it.

  Not even close to a subtle point, I simply cannot figure out what the graph shows...

Me too. Is this an analysis or a simulation? What are the assumptions? What "predicted intervals" does he mean?

The bullet points above are very confusing as well. What does "large discontinuities possible" mean? Ignoring any quibble about the use of the word "discontinuities", does he mean more than one leap second at a particular event? Why would anybody want to do that? - at least before we're getting to daily leap seconds, which is well off to the right of his graph (50 000 years or so, I think).

Why does the "One sec at predicted intervals" line suddenly diverge in the early 2500's when the other lines seem to just be expanding in a sensible way?

Ed.
Steve Allen wrote: On Mon 2007-01-01T17:42:11 +, Ed Davies hath writ:

    Sorry, maybe I'm being thick but, why? Surely the IERS could announce all the leap seconds in 2007 through 2016 inclusive this week, then those for 2017 just before the end of this year, and so on. We'd have immediate 10-year scheduling.

  For reasons never explained publicly, this notion was shot down very early in the process of the WP7A SRG. It would almost certainly exceed the current 0.9 s limit, and in so doing it would violate the letter of ITU-R TF.460.

Yes, I was assuming exceeding the 0.9 s limit, as I'm sure the rest of my message made clear. We are discussing this as an alternative to, for all intents and purposes, scrapping leaps altogether and blowing the limit for all time, so I don't see this as a problem.

Ed.
Poul-Henning Kamp wrote: In message [EMAIL PROTECTED], Rob Seaman writes:

    Jim Palfreyman wrote: ...

    Just a reminder that UTC has no - none - nada - discontinuities. Various computer mis-implementations may, but the standard is very carefully constructed to avoid spring-forward or fall-back gaps or do-overs.

  Rob, if you feel uncomfortable with calling leapseconds "discontinuities", then we can use the term "arrhythmia" instead.

If we assume that every month has 30 days and obtain a day number by multiplying the month number by 30 and adding the day in month (call this the SDN - Silly Day Number) and then look at SDN-MJD (modified Julian day number), we would see "discontinuities". The only way to see discontinuities in UTC-TAI is by making an equally silly assumption in numbering the seconds of UTC: assuming all UTC minutes are 60 seconds or, equivalently, all UTC days are 86 400 seconds. The unfortunate thing is that people actually do think of it this way. E.g.: http://hpiers.obspm.fr/eop-pc/earthor/utc/TAI-UTC_tab.html

The whole idea of the expression UTC-TAI being meaningful and evaluating to a number of seconds is a convenient but rather sloppy shorthand. Any strongly typed programming language ought to give a type error on that expression. UTC times of day are variable radix - in just the same way as days and months are in the Gregorian calendar. Except, of course, that the Gregorian calendar is fixed and completely predictable.

I have an awful lot of sympathy for the idea of making leap seconds predictable over longer periods, even if it risks UTC-UT1 becoming larger than at present allowed.

Ed.
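The SDN analogy can be made concrete. A small sketch (SDN and the naive second count are the illustrative constructions from the message above, not real timescales):

```python
def silly_day_number(month, day):
    """The SDN: pretend every month has 30 days."""
    return month * 30 + day

def naive_second_of_day(h, m, s):
    """The equally silly assumption for seconds: every minute has 60 s,
    every day 86400 s."""
    return h * 3600 + m * 60 + s

# Crossing the end of 31-day January, the SDN 'does over' a value:
# Jan 31 and Feb 1 both map to 61.  In exactly the same way, the naive
# second count maps the leap second 23:59:60 to 86400, colliding with
# the next day's 00:00:00 (0 mod 86400).  The apparent discontinuity
# lives in the silly numbering, not in UTC itself.
```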
Rob Seaman wrote:

  I'm given to wonder how much of the friction on this mailing list is simply due to the shortcomings in the technology that implements it. I've appended a message I sent in August with four plots attached. Can someone tell me whether it is readable now or was successfully delivered back then? I rummaged around on the list archive and on archives accessible via Google and find no copy of this message that survived the communications medium.

In Thunderbird on Ubuntu Linux it looked fine in both your original post and the repeat you attached - so any problems are down to the reader and not the transmission, I think.

Ed.
Rob Seaman wrote:

  ... An amateur astronomer with a Celestron, the Astronomical Almanac and an atlas can recover UTC anywhere on Earth. ...

Do you really mean UTC here? I can see that an amateur with a Celestron could recover UT (for some flavour of UT - I'm not sure which, UT0? - then presumably UT1 after traveling around a bit) but where does the delta-T come from to get UTC?

  ... Unplug the atomic clocks for a few seconds (which may be taken as the definition of a discontinuity in higher civilization), and even the professional timekeepers who built the devices would be unable to recover TAI. ...

Actually, assuming somebody remembered to make a note of TAI-UTC before forgetting to put a shilling in the meter for the atomic clock, TAI is exactly as recoverable as UTC in the short term, when it's possible to work out the number of leap seconds which would have been inserted or removed. Longer term it would be harder, of course, but why would that matter?

Ed.
Rob Seaman wrote:

  Doubt I can lay my hands on the copy of ISO 8601 from my Y2K remediation days. Anybody want to comment on whether it actually attempts to convey the Gregorian algorithm within its pages?

Yes, it does:

  This International Standard uses the Gregorian calendar for the identification of calendar days. The Gregorian calendar provides a reference system consisting of a, potentially infinite, series of contiguous calendar years. Consecutive calendar years are identified by sequentially assigned year numbers. A reference point is used which assigns the year number 1875 to the calendar year in which the "Convention du mètre" was signed at Paris. The Gregorian calendar distinguishes common years with a duration of 365 calendar days and leap years with a duration of 366 calendar days. A leap year is a year whose year number is divisible by four an integral number of times. However, centennial years are not leap years unless they are divisible by four hundred an integral number of times.

This is from the final draft of ISO 8601:2000. It's also an interesting quote in that it suggests that the 1875 Convention du mètre is important in the international definition of the Gregorian calendar. Anybody know more on that?

Ed.
Ed Davies scripsit:

  If only the 24:00 for end of day notation wasn't in the way we could look at positive leap seconds as just being the result of deeming certain days to be a second longer than most and just use 24:00:00. We wouldn't have to muck with the lengths of any of the hours or minutes within that day.

John Cowan replied:

  That amounts to saying that some days have 24 hours, whereas others have 25 hours, 24 of them being 3600 seconds long and the 25th being 1 second long. IMHO that is worse.

No, it amounts to saying that some days are 24 hours and 1 second long. When you're half a second from the end of such a day you are 24 hours, zero minutes and half a second from the start. If you had a 1' 6" piece of string you wouldn't say it's a two-foot piece of string but the second foot is only 152.4 mm long. (Well, I think you wouldn't, though I think some politicians might.)

As Rob has just pointed out in a parallel thread, 23:59:60.5 and 24:00:00.5 can be treated as equivalent. It just seems to me that the second of these notations fits in with the normal use of sexagesimal somewhat better.

Ed.
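The labelling convention described above is easy to implement. A sketch (the function name and the choice of one decimal place are my own):

```python
def label_long_day(seconds_into_day):
    """Label a moment within a day that is 86401 s long, keeping every
    hour 3600 s and every minute 60 s: the final (leap) second is
    written 24:00:00.x rather than 23:59:60.x."""
    h = int(seconds_into_day // 3600)
    rem = seconds_into_day - 3600 * h
    m = int(rem // 60)
    s = rem - 60 * m
    return "%02d:%02d:%04.1f" % (h, m, s)

# Half a second before the end of such a day the label is 24:00:00.5,
# the notation treated in the parallel thread as equivalent to
# 23:59:60.5.
```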
Markus Kuhn wrote:

  With the 24-h notation, it is a very useful and well-established convention that 00:00 refers to midnight at the start of a date, while 24:00 refers to midnight at the end of a date. Thus, both today 24:00 and tomorrow 00:00 are fully equivalent representations of the same point in time. Writing 24:00 to terminate a time interval at exactly midnight is pretty common practice and is even sanctioned by ISO 8601.

I agree with all that is written here - the 24:00 notation is indeed both useful and well-established, and is sanctioned by ISO 8601. However, it raises a point which has been floating around in the back of my head for a while.

The problem is that this notation means that we can't have second identifiers after 23:59:59 in sexagesimal - we have to break out of that system to use 23:59:60, which seems quite ugly, at least to me. If only the 24:00 for end of day notation wasn't in the way we could look at positive leap seconds as just being the result of deeming certain days to be a second longer than most and just use 24:00:00. We wouldn't have to muck with the lengths of any of the hours or minutes within that day.

This would scale much better when, eventually, days get longer than 86401 seconds. Presumably before that, it'd also work better for Martian days going to 24:39 and however many seconds it is.

Ed.
Rob Seaman wrote: All proposals (other than rubber seconds or rubber days) face the same quadratically accelerating divergence between clock and Earth. By rubber seconds you, presumably, mean non-SI seconds. What do you mean by rubber days? I'd guess you mean days which are divided into SI seconds but not necessarily 86 400 of them. Just to be clear, is that right? Ed.
James Maynard wrote: I wonder, though, whether those in the other camp would find it acceptable to have the standard time and frequency stations not only broadcast UTC and DUT1 (= UT1 - UTC, to 0.1 s resolution), but also to broadcast DTAI (= TAI - UTC, 1 s resolution)? A full implementation needs not just the current DTAI value but also the full history, for conversion of past date/times. Ed.
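To make the point concrete: converting a past UTC timestamp to TAI needs the offset in force at that epoch, not just today's DTAI. A minimal sketch with a two-entry excerpt of the published table (a real implementation needs the full history back to 1972):

```python
import datetime

# Excerpt of the TAI-UTC table: (date the offset took effect,
# TAI - UTC in seconds).
LEAP_TABLE = [
    (datetime.date(1999, 1, 1), 32),
    (datetime.date(2006, 1, 1), 33),
]

def dtai_on(d):
    """TAI - UTC (s) in effect on date d; d must fall within the
    table's coverage."""
    result = None
    for effective, offset in LEAP_TABLE:
        if d >= effective:
            result = offset
    if result is None:
        raise ValueError("date precedes table coverage")
    return result
```

A broadcast of only the current DTAI value lets a receiver convert "now", but not archived timestamps, which is why the full history matters.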
The way I think exploration in this group should be going is to seriously examine what engineering steps can be taken to deal with leap seconds properly. This means looking at changes to POSIX and NTP, new protocols for disseminating leap second information, new APIs for accessing clock information with different properties, and so on. Also, possible changes to leap second scheduling to give longer notice. If, taking all of those together, we find that there are important use cases which cannot be covered, then we've made a good case for scrapping leap seconds.

The converse is to look at the use cases (e.g., in astronomy and navigation) which require UT1 (or other Earth orientation information) and look at ways for them to recover that from an atomic timescale. If there are, again, any important use cases which cannot be covered, then there's a good case for keeping leap seconds.

My suspicion is that actually all cases are reasonably easy to deal with, with only a little care. In that case, it becomes an engineering trade-off: which of keeping or scrapping leap seconds causes the least hassle? Starting from a clean slate, I'd guess that doing without leap seconds would be the right answer - at least now, though probably three decades ago the right decision was made for that time. On the other hand, they're here now, I'm far from sure that change would be worthwhile, and we don't know for sure what the balance will be in another three decades.

What isn't helpful is to have two groups, each starting from an assumption or even definition that one or other answer is right, repeatedly shouting past each other - particularly using arguments which are tricky to the point of dishonesty, blunt rudeness, personal attacks and so on. I hope we can all continue this discussion in a more positive manner.

Ed.
Rob Seaman wrote:

    I hope we can all continue this discussion in a more positive manner.

  I'm of the opinion that messages on this list (no matter how tricky :-) are always positive. Timekeeping is a fundamental issue. It would be remarkable if there weren't diverse opinions. Any negative aspects of this discussion are related to those who don't choose to participate. Which is to say, those who claim to have decision making authority over UTC at the ITU, for instance.

  The folks on this list appear to cluster into two groups (speak up if your opinion diverges from both):

  1) Civil time should remain layered on UTC. UTC should remain largely unchanged. Leap seconds should continue.

  and

  2) Civil time should be layered on some flavor of interval time. That timescale might be a variation of TAI called TI. TI will not have leap seconds.

OK so far. Actually, I've yet to see any argument which would make me deeply unhappy with either of these outcomes.

  The proposal submitted to the ITU is neither of these. It is:

  3) Civil time should remain layered on UTC. UTC should be modified to no longer be a useful approximation to universal time. Leap seconds will be issued 3600 at a time.

  You all know where I stand - but there are worlds of difference between #2 and #3 as alternatives to #1. All three proposals face the same looming quadratic emergency.

Again, OK so far, except perhaps a quibble that it seems to be widely expected that the leap hour probably would never happen.

What bothers me about this discussion is that we don't seem to be able to get beyond bouncing backwards and forwards between 1 and 2. As soon as anybody puts up any proposal for further detail with respect to either of these possible outcomes, almost all of the response comes back in the form of arguments for the other outcome rather than sensible discussion of the idea in itself.
<joke>Maybe for the next little while we should assume one or other outcome each week (1 in odd ISO 8601 numbered weeks, 2 in even numbered weeks) and carry on all the discussion in that context.</joke>

Ed.
Markus Kuhn wrote:

  A new Internet-Draft with implementation guidelines on how to handle UTC leap seconds in Internet protocols was posted today on the IETF web site:

  Coordinated Universal Time with Smoothed Leap Seconds (UTC-SLS), Markus Kuhn, 18-Jan-06. (36752 bytes)
  http://www.ietf.org/internet-drafts/draft-kuhn-leapsecond-00.txt

Accepting that UTC-SLS is not the right choice for some applications and that operating system APIs should be very clear on what timescales are being served up in different cases, I think UTC-SLS is a valuable contribution and a good choice for the default timescale for quite a lot of systems. In particular, it would be a good substitute in current APIs which claim to return UTC but actually don't handle leap seconds.

Appendix A argues against putting the adjustment interval after the leap second (method 4a) by pointing out that some time signals contain announcements of the leap second before it happens but not after. I think a stronger argument against this method of adjustment is that during positive leap seconds UTC and UTC-SLS would be indicating different dates:

  UTC                     UTC-SLS (A.4a)
  2005-12-31 23:59:58     2005-12-31 23:59:58
  2005-12-31 23:59:59     2005-12-31 23:59:59
  2005-12-31 23:59:60     2006-01-01 00:00:00
  2006-01-01 00:00:00     2006-01-01 00:00:00.999
  2006-01-01 00:00:01     2006-01-01 00:00:01.998

(Exact fractional times depending on whether the correction interval starts at the start or the end of the leap second.)

This is a pity, in my opinion, because I think it would be nice to leave open at least the option of implementing UTC-SLS as simply "steer your clock towards UTC at the rate of 1 ms per second", though I understand that that wouldn't be the right choice in many cases.

Ed.
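For what it's worth, the arithmetic of the smoothing itself is just a rate change over the adjustment interval. A sketch of the positive-leap case as I read the draft (the 1000 s interval length is the draft's; the function and its framing are my own):

```python
ADJUST = 1000.0  # adjustment interval length, in UTC-SLS seconds

def sls_elapsed(si_elapsed):
    """UTC-SLS seconds elapsed since the start of the adjustment
    interval, given SI seconds elapsed since that start.  For one
    positive leap second, ADJUST + 1 SI seconds are squeezed into
    ADJUST smoothed seconds, i.e. the clock runs at rate 1000/1001
    during the interval and at rate 1 afterwards."""
    span = min(si_elapsed, ADJUST + 1.0)
    return span * ADJUST / (ADJUST + 1.0) + max(0.0, si_elapsed - (ADJUST + 1.0))

# By the end of the interval the smoothed clock has absorbed the leap
# second and agrees with UTC again.
```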
Ed Davies:

  Appendix A argues against putting the adjustment interval after the leap second (method 4a) by pointing out that some time signals contain announcements of the leap second before it happens but not after.

Rob Seaman:

  Right, ...

Ed Davies:

  I think a stronger argument against this method of adjustment is that during positive leap seconds UTC and UTC-SLS would be indicating different dates.

Rob Seaman:

  This may be a fact - it does not itself constitute an argument. An argument would have to answer the question: So what?

You're right - I left the denouement implicit. With this method (4a), UTC-SLS would not have the property listed in section 3 that the time always equals UTC at full or half hours. I think this is a valuable property; as the text following the 4a), 4b) and 4c) options notes, it "...would be reached at midnight, which is a time commonly used to schedule events and deadlines."

I hope that makes sense.

Ed.
Poul-Henning Kamp wrote: In message [EMAIL PROTECTED], Francois Meyer writes:

    On Mon, 16 Jan 2006, Mark Calabretta wrote:

      1. UTC and TAI share the same rate, the same origin, the same second.

    And therefore: UTC - TAI = 0

  This is wrong, plain and simple wrong.

Well, if by UTC you mean the count of seconds including leaps then this is right.

  Please don't come back until you have understood and accepted that this is not the case.

Please don't be so rude - it really doesn't help to have conversations so polarised.

Ed.
Michael Deckers wrote:

  I believe I'm now grasping what you mean: the rate of UTC is the same as the rate of TAI (since 1972), that is, the derivative d( UTC )/d( TAI ) = 1. ...

This conversation is making something of a meal of a simple point. You can treat UTC as a real in either of two ways:

If you don't count the leap seconds, then the good news is that days are all 86 400 seconds long, but the bad news is that the real is undefined during the leap second and there's a discontinuity (or rather, a surprising continuity, in that at some point it's 23:59:59.99 and a whole second and a tiny bit later it's 00:00:00).

If you do count the leap seconds, then that real is the same as TAI, but the days it's divided up into aren't all 86 400 seconds long.

Sort of like: is it a particle or a wave? :-)

The truth is that UTC only really makes sense as a year, month, day, hour, minute and second value. Years have 12 months, months have 28, 29, 30 or 31 days, days have 24 hours, hours have 60 minutes, minutes have 59, 60 or 61 seconds.

The use of the 23:59:60 notation is described in ISO 8601. Is it also specified in TF.460? If so, how do they relate it to the notion of DTAI?

Ed.
Michael Deckers wrote:

    Sort of like, is it a particle or a wave? :-)

  At the risk of being misunderstood as sarcastic: if users of UTC were really expected to understand such strange concepts ("Schrodinger time") I would plead for the immediate abolishment of UTC. Why cannot UTC be simply taken as the reading of a clock that runs at the same rate as TAI and that is set back by a second every once in a while?

Not really Schrodinger time - just time which you can usefully think of in different ways for different purposes. UTC can be taken the way you suggest most of the time (and that's clearly the way TF.460 wants to think of it), but it is then not well defined during the leap second itself. To deal with that properly you have to either:

1) think of a count of UTC milliseconds or whatever (including those in the leap second), which is then the same as TAI, or

2) work in separate fields with a 61-second minute.

    The truth is that UTC only really makes sense as a year, month, day, hour, minute and second value. Years have 12 months, months have 28, 29, 30 or 31 days, days have 24 hours, hours have 60 minutes, minutes have 59, 60 or 61 seconds.

  Then why can the IERS express UTC in the MJD notation?

We've recently had a question about this on this list which wasn't answered clearly. MJD 27123.5 means 12:00:00 on day 27123 if it's not a leap second day, but what does it mean on a day with a positive leap second? 12:00:00.5? I think it only works if that level of precision doesn't matter, but maybe some document somewhere has a convention.

Thanks for the further notes from TF.460.

Ed.
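The MJD ambiguity is just a choice of denominator for the fractional part. A sketch of the two candidate conventions (both functions are my own illustration, not from any IERS document):

```python
def mjd_fraction_fixed(seconds_into_day):
    """Fraction of day assuming every day is 86400 s: noon is always
    exactly .5, but the leap second 23:59:60.x falls outside [0, 1)."""
    return seconds_into_day / 86400.0

def mjd_fraction_stretched(seconds_into_day, day_length):
    """Fraction of the actual day length (86401 s on a positive-leap
    day): everything fits in [0, 1), but noon is no longer exactly .5."""
    return seconds_into_day / day_length

# On a positive-leap day, noon (43200 s into the day) gives 0.5 under
# the fixed convention but slightly less under the stretched one -
# i.e. the two readings of MJD 27123.5 differ by about half a second.
```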
Markus Kuhn wrote: Ed Davies wrote on 2006-01-13 11:45 UTC: The use of the 23:59:60 notation is described in ISO 8601. Is it also specified in TF.460? It originally comes from ITU-R TF.460, which is a standard for radio time signals. OK, thanks. Ed.
Wow, things have got really stirred up around here. Lots of interesting points but I'll just concentrate on one... Poul-Henning Kamp wrote: Well, the BIPM doesn't really want anybody to use TAI, their director said as much last year, and I can see where he is coming from on that one. Since the usual response of the pro-leap second lobby to people who want a uniform timescale is use TAI this is significant. Do you have any information or references on why the BIPM director said this? Ed.
Poul-Henning Kamp wrote: What a weird concept... Why not go the full distance and define a timescale for each particular kind of time-piece: and give each of them their own unique way of coping with leapseconds ? Ignoring the ridiculous parody - no, it's not a weird concept. Different timescales are useful for different purposes. Get used to it. The question is, where in the range of possible timescales is it most useful to put civil time. Ed.
Rob Seaman wrote:

  I said: all parties must certainly agree that civil time (as we know it) IS mean solar time. Ed says: saying that it IS civil time is probably a bit strong. "Probably a bit strong" is not precisely a staunch denial.

It's not meant to be a staunch denial. I'm mostly supporting your argument - just trying to tone down one aspect which I think is overstated, to avoid giving rhetorical ammunition to those who see things otherwise.

Ed.
Keith Winstein wrote: Some minor glitches: (a) My Garmin 12XL GPS receiver (software version 4.53) did not register the leap second on its time display. It went from 58 to 59 to 00, and stayed one second ahead for the next few minutes until I rebooted it. Then it came up correctly. Interesting. My 12XL (software version 4.60) dealt with the leap second pretty well, I thought. It seemed to hold at 23:59:59 for two seconds. More details: http://www.edavies.nildram.co.uk/gps12xl-leapsecond/ Ed.
Not strictly on topic but probably of interest - a Bill in the UK House of Lords which I just came across when looking for something else: http://www.publications.parliament.uk/pa/ld200506/ldbills/048/06048.1-i.html A Bill To Advance time by one hour throughout the year for an experimental period; and for connected purposes. Well, at least we'd be in sync with most of the rest of the EC. Don't know if it'll get anywhere, of course. Ed Davies.
BBC article, Leap second proposal sparks row: http://news.bbc.co.uk/1/hi/sci/tech/4420084.stm I found this bit particularly amusing: The decision stemmed from the work 200 years previously of the first English Astronomer Royal, John Flamsteed, who calculated that the Earth rotated on its axis once every 24 hours. It must have been very confusing for people before it was realised that there were 24 hours in a day. You'd have thought somebody would have noticed the pattern before, though. And yes, my inner pedant has to note that it's once and a bit every 24 hours. Ed.
The BBC web site has an article about the leap second debate: http://news.bbc.co.uk/1/hi/sci/tech/4271810.stm Ed Davies.
Hornaday, Tem SPAWAR wrote: ... 3. As has been pointed out, some receivers also implement a clever hack to determine date that looks at UTC Leap Second (LS) value, and chooses a date based on WN, TOW, and LS. That is, the receiver implements a sliding 1024-week window whose limits are determined by the current value of LS. Current date will then reside within this 1024-week window. So, dropping leap seconds from UTC would cause these receivers to, eventually, go back 19 years on cold start? Hardly a major catastrophe but worth noting.
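The sliding-window hack can be sketched abstractly. Everything here is illustrative (real receivers derive the window start from firmware tables keyed on the LS value; the only hard fact used is that GPS week numbers are broadcast as 10 bits, modulo 1024):

```python
def resolve_gps_week(wn_mod_1024, window_start_week):
    """Return the unique absolute GPS week >= window_start_week that is
    congruent to the broadcast 10-bit week number modulo 1024.

    If the leap-second count stops advancing, window_start_week stops
    sliding forward, and once real time moves more than 1024 weeks
    (about 19.6 years) past it, cold starts resolve dates an era in
    the past - the failure mode noted above."""
    k = (window_start_week - wn_mod_1024 + 1023) // 1024
    return wn_mod_1024 + 1024 * k
```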
M. Warner Losh wrote:

  Also, many systems just aren't connected to a public network, or aren't connected to a network at all, but still have a need to have knowledge of leap seconds.

Can you give any examples of systems which need to follow UTC to sub-second accuracy (running to their own little time-zone not being good enough), have a clock stable enough to do so, and yet are not connected by any mechanism which could potentially provide leap-second information? Presumably there are a few, but I find them hard to imagine.

Ed Davies
Poul-Henning Kamp wrote: In message [EMAIL PROTECTED], Greg Hennessy writes: On Fri, 2005-08-12 at 08:44 +0200, Poul-Henning Kamp wrote:

Poul-Henning Kamp:

  Will you support a proposal that keeps leap-seconds (or -minutes), but mandates that they be determined 40 or 50 years in advance?

Greg Hennessy:

  Determined to what accuracy?

Poul-Henning Kamp:

  Whatever the prediction is able to nail it to. I realize that this means that the bounds on |UT1-UTC| increase to about a minute, worst case, but already, given today's predictive capabilities, I think it will be possible to keep the difference within a handful of seconds.

Greg Hennessy:

  I personally would NOT support such a proposal then. I might be willing to support a proposal that calls for broadcast of the difference UT1-UTC as well as a long term determination of leap seconds.

Poul-Henning Kamp:

  I took for granted that the UT1-UTC difference needs to be made electronically available.

Currently UT1-UTC is made available on the broadcast time signals (WWV, Rugby, etc.) to a resolution of 0.1 seconds. The encoding assumes |UT1-UTC| < 0.9 seconds. Anybody have any idea how many systems actually make use of this?

How this signal deals with the difference going over 0.9 seconds is, I think, a relatively minor point, but it does need to be considered.

Ed Davies.
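For reference, the broadcast encoding uses counts of emphasised ("doubled") ticks. A decode sketch, assuming the WWV-style scheme as I understand it (doubled ticks in seconds 1-8 for positive tenths, in seconds 9-16 for negative tenths; the details should be checked against the actual specification):

```python
def decode_dut1(doubled_ticks_1_to_8, doubled_ticks_9_to_16):
    """DUT1 in seconds from the two doubled-tick counts; each doubled
    tick is worth 0.1 s.  Only |DUT1| <= 0.8 s is representable, which
    is why the encoding presupposes |UT1-UTC| < 0.9 s and breaks down
    beyond that."""
    if doubled_ticks_1_to_8 and doubled_ticks_9_to_16:
        raise ValueError("at most one group may contain doubled ticks")
    if doubled_ticks_1_to_8:
        return 0.1 * doubled_ticks_1_to_8
    return -0.1 * doubled_ticks_9_to_16
```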
Rob Seaman wrote: ... 3) Clarify the relationship between the civil second and the SI second. It may be too late to define a new unit of duration - whether Essen or Fressen - or perhaps it isn't. In any event, there are 86400 seconds per solar day, and that usage of the word second clearly differs from the SI unit which happens to have the same name. What are we going to do about it? (Certainly the ITU proposal does not address such issues.) ... Perhaps it would be a mistake for the relationship between civil and SI seconds to be anything other than identity. There isn't a clear separation between the use of one and the other. Consider, for example, a TV system. The frame rate and so on of the TV signal would, presumably, be defined in SI seconds. On the other hand, the schedule for the day would be in civil seconds. Of course, the schedule doesn't need to be held to the exact second (though it's often done pretty close to that) but somewhere in the chain there would have to be a switch over. Where, exactly? In other words, I'm suggesting that any attempt to fix leaps (seconds, minutes, hours or whatever) by use of rate changes in civil time (relative to atomic time) results in a cure which is worse than the disease. Whether or not there are 86400 seconds per solar day is something which should be up for discussion - not taken as a matter of definition. Clearly, there's a use for a solar second but perhaps it's even more specialised than a sidereal second. Ed Davies.
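To put rough numbers on the 86400-seconds-per-solar-day question: a back-of-envelope sketch, assuming the mean solar day currently exceeds 86400 SI seconds by about 2 ms and lengthens by roughly 2 ms per century (both are round illustrative figures, not measurements).

```python
# How fast do "86400 SI seconds" and a mean solar day diverge, and how many
# leap seconds per year does a given excess length-of-day (LOD) imply?

DAYS_PER_YEAR = 365.25

def leaps_per_year(lod_excess_ms):
    """Leap seconds accumulated per year for a given LOD excess in ms."""
    return DAYS_PER_YEAR * lod_excess_ms / 1000.0

def lod_excess_in(years_from_now, now_ms=2.0, rate_ms_per_century=2.0):
    """Assumed-linear projection of the LOD excess, in milliseconds."""
    return now_ms + rate_ms_per_century * years_from_now / 100.0

# Around 500 years from now the excess reaches ~12 ms, implying roughly
# four leap seconds per year -- consistent with the ~2500 figure discussed
# earlier in this thread.
```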
Steve Allen wrote: ... Basically, as a result of this case it has been established that the law in the United States must be in the public domain, and this holds true even if the law incorporates an external document merely by reference. So, if NIST tries again to modify the US Code in this fashion, and they succeed, then the ITU will lose their copyright over the content of TF.460 within the United States. Any citizen of the US will not be liable for reproducing it. ... Well, to be precise: any citizen of the US will not be liable for reproducing it *in the US*. However, I think that by international treaty the US has agreed to protect the copyright of works made in most other countries. Therefore, US law cannot adopt UTC by reference to TF.460. Ed.
Rob Seaman wrote: One would expect that at least as many applications worldwide depend on time-of-day as depend on date formats. Sorry, but this one doesn't expect anything of the sort. It seems to me that many more applications are interested in time durations than in the exact orientation of the Earth. Perhaps a way of moving the discussion on would be to make a list of applications requiring accurate Earth orientation information: 1. Pointing telescopes. 2. Pointing satellite dishes. 3. Celestial navigation. 4. Calculating the civil times of sunrise and sunset, lighting up times, etc. Others? Ed Davies.
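For item 1 on the list, the dependence on Earth orientation can be made concrete: sidereal time, and hence a telescope's hour angle, is a function of UT1, not of atomic time. A sketch using a common low-precision approximation for Greenwich Mean Sidereal Time (the function names are mine):

```python
# Why telescope pointing needs UT1: sidereal time is computed from UT1.

def gmst_hours(days_since_j2000_ut1):
    """Approximate GMST in hours, given UT1 days from J2000.0 (JD 2451545.0).
    Low-precision formula, good to a fraction of a second over decades."""
    h = 18.697374558 + 24.06570982441908 * days_since_j2000_ut1
    return h % 24.0

def pointing_error_arcsec(ut1_error_seconds):
    """An error dt in one's knowledge of UT1 shifts the hour angle by
    ~1.0027 * dt seconds of time, i.e. ~15 * dt arcseconds at the equator."""
    return 15.0 * 1.002737909 * ut1_error_seconds
```

With today's |UT1-UTC| < 0.9 s bound, using UTC naively costs up to ~13.5 arcseconds of pointing; with a leap-hour-sized bound it would cost up to ~15 degrees, which is why this class of application must obtain UT1 separately.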
Markus Kuhn wrote: When the scheduled transmission time arrives for a packet, it is handed with high timing accuracy to the analog-to-digital converter, I assume you mean digital-to-analog. ... [In fact, since short-wave transmitters frequently switch between programmes at the full hour, in discussions the hope was expressed that in practice nobody will notice.] ... Don't they also often transmit the time signal on the hour? Ironic, huh? This also raises the point that because the transmission is delayed a few seconds for buffering there is presumably a need for the studio to work in the future by a few seconds if time signals are to be transmitted correctly. Either having a commonly used standard time without leap seconds (TI), or having TAI widely supported in clocks and APIs would have solved the problem. Absolutely - and the second suggested solution doesn't need to take 20 years to be implemented. Ed.
Markus Kuhn wrote: A point that was made repeatedly at Torino is that the term UT traditionally meant in astronomy a time scale defined by the Earth's rotation, and that therefore a leap-second free uniform atomic time should not be called UTC, even if doing so would of course avoid the need to change the large number of national regulations that explicitly refer to UTC today. I understand that the term Universal Time was cooked up in the IAU in the 1920s, but does anyone know more details about the origin of and reasons for this curious choice of terminology? I always thought it was a rather odd selection of words: Universal Time is not linked in any way with the Universe as such. It is related to the position of the sun in a coordinate system that is attached to the crust of this particular piece of molten rock. Yes, pre-Copernican is the expression which occurred to me. UTC tracks the rotation of the Earth to +/-0.9 seconds. This new scale (assuming leap hours were actually implemented) would do so to +/-3600 seconds, or so. There's a difference in scale but not in principle, so the argument to get rid of the name UTC is not iron-clad. It's hard enough to persuade people that GMT is dead without having yet another time scale name to deal with. I think it's better to forget what the letters once stood for and just accept that the name UTC means the time scale which is the basis of civil time: the one to which you add various offsets (in hours and sometimes minutes) to get local civil time. For example, ISO, though appearing to be an acronym, doesn't actually stand for a sequence of words: http://www.iso.org/iso/en/aboutiso/introduction/index.html Ed.
Ed Davies wrote on 2003-05-27 13:56 UTC: Slightly more relevantly: I was a bit surprised that you did not put more emphasis on the need to distinguish the different types of time scales an application program can ask for from an operating system, as your ctime library highlights. Markus Kuhn replied: I had thought about this, but I concluded that this would be out of the scope of the ITU-R, who are in the business of standardizing time signal broadcasts, and not operating system APIs. Fair point, but if I might summarise what I think is a slightly generalised version of your argument: 1. There's no single perfect timescale for all application requirement combinations (keeps close to UT1, SI seconds, 86 400 second days, etc) - because some combinations of requirements are contradictory. 2. We need to make up timescales for specific combinations of requirements not catered for by existing timescales (e.g., UTS if you are willing to relax the SI second requirement but don't want to use UT1 for sensible reasons). 3. We have to live with lots of timescales - please fix the radio signals to make this easier. You cover points 2 and 3 well but I think rather assume point 1 which is a pity as you are in a good position to illustrate it. If this point was already well understood then perhaps there wouldn't be the same pressure to fix UTC in the forlorn hope of somehow making it perfect. Ed.
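As an illustration of point 2, here is one plausible reading of the UTS idea mentioned above (a smoothed UTC): instead of stepping at a leap second, run the clock slightly slow over the last 1000 seconds of the day so that it is back in step with UTC at midnight. The window length and the exact mapping are assumptions for illustration, not the normative UTS definition.

```python
# Hedged sketch of leap-second smoothing: on a day ending in a positive
# leap second, 1001 SI seconds in the final window are mapped onto 1000
# smoothed seconds, so the smoothed scale reaches 86400.0 exactly at the
# end of the (86401 SI-second) day. Negative leaps work symmetrically.

SMOOTH_WINDOW = 1000.0   # smoothing interval in smoothed seconds (assumed)

def uts_of_day_second(si_sec_of_day, leap=0):
    """Map SI seconds elapsed in a UTC day (0 .. 86400+leap) to smoothed
    seconds-of-day (0 .. 86400)."""
    if leap == 0:
        return si_sec_of_day
    window_si = SMOOTH_WINDOW + leap       # SI seconds the window occupies
    start = 86400.0 + leap - window_si     # = 85400.0 for leap = +1
    if si_sec_of_day <= start:
        return si_sec_of_day               # identical to UTC before the window
    return start + (si_sec_of_day - start) * SMOOTH_WINDOW / window_si
```

A library offering this alongside UTC and TAI is exactly the sort of "timescale for a specific combination of requirements" that point 2 describes: it gives up strict SI seconds in the window in exchange for monotonic, step-free time-of-day.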