I'd be interested to hear how one measures the leading edge of the human life to death transition pulse with a precision that makes the UT1 vs. UTC question even relevant. A husband has a will leaving everything to his wife, or if she dies first, to their children. The wife has a will leaving everything to her secret lover. They are together in a car crash, and are put on life-support systems including heart monitors. They both, sadly, die at around the same time; both have a last-recorded heartbeat. Pete.
On Sat, 19 Jul 2003, Markus Kuhn wrote: All modern digital broadcast transmission systems introduce significant delays due to compression and coding. It is therefore common practice today that the studio clocks run a few seconds (say T = 10 s) early, and then the signal is delayed by digital buffers between the studio and the various transmitter chains for T minus the respective transmission and coding delay. This way, you can achieve that both analog terrestrial and digital satellite transmissions have rather synchronous audio and video. Otherwise, your neighbour would already cheer in front of his analogue TV set, while you still hear on DRM the live report about the football player approaching the goal. But that's exactly what does happen: analog TV is ahead of digital, often leading to asynchronous cheering coming from different parts of the house. There are a couple of problems, though, with delayed live: - One is with the BBC. They insist for nostalgic reasons on transmitting the Big Ben sound live, which cannot be run 10 seconds early in sync with the studio clock. - Another is live telephone conversations with untrained members of the radio audience who run a loud receiver next to the phone. The delay eliminates the risk of feedback whistle, but it now adds echo and human confusion. The former can be tackled with DSP techniques; the latter is more tricky. But then there's often a deliberate delay introduced so the editor can push the cut-off button on the first f - The third problem is that in the present generation of digital radio receivers (DAB, DRM, WorldSpace, etc.), the authors of the spec neglected to standardize the exact buffer delay in the receiver. Interestingly, I have noticed Radio 5 Live is synchronous with, or even slightly ahead of, analogue on Digital Terrestrial. I put it down to relatively instantaneous compression/decompression of audio cf. video streams. (NICAM is near-instantaneous on 15-year-old technology.) Pete.
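Kuhn's buffering scheme reduces to simple arithmetic: every chain's buffer is sized so that buffer plus coding delay equals the studio's head start T. A minimal sketch; the chain names and delay figures below are made up for illustration, not real measurements.

```python
# Studio runs T seconds early; each transmitter chain buffers the signal
# by T minus its own coding/transmission delay, so every output is
# delayed by exactly T and all platforms play in sync.

T = 10.0  # seconds the studio clock runs early (the "say T = 10 s" above)

chain_delays = {                 # assumed coding+transmission delay per chain (s)
    "analogue_terrestrial": 0.0,
    "digital_satellite": 6.5,
    "DAB": 4.2,
}

buffer_delays = {name: T - d for name, d in chain_delays.items()}

for name, buf in buffer_delays.items():
    total = chain_delays[name] + buf   # always equals T
    print(f"{name}: buffer {buf:.1f} s, total delay {total:.1f} s")
```

The point of the sketch is the invariant: whatever a chain's intrinsic delay, its buffer tops it up to the same total T, which is exactly why an unbuffered analogue chain ends up ahead of everything else.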
Surely the point about the slaughterhouse is the thought of the throat slasher getting a couple of seconds ahead of the brain stunner. As for the issue of whether the slaughterhouse needs syncing to an external clock, the point is that with the prevalence of ntp, it is just as easy, or easier, nowadays to synchronize all devices to a global time standard than it is to set up a local arbitrary clock and synchronize to that. Implicit in that is the assumption that somebody-cleverer-than-me is feeding me `the right answer' via NTP, and that the software I bought follows the sometimes complex but clear-cut rules regarding such issues as leap seconds. A proposal that seems to me to go halfway to satisfying both pro- and anti-leapers is the one where computers switch to using TAI internally. We could start thinking about that now and possibly start implementing it. If the leapers prevail, in future the date command will still output UTC+timezone; if the anti-leapers prevail, it can output TAI + something (zero?!) + timezone. Peter.
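The "TAI internally, UTC on output" idea is easy to sketch: the kernel counts uniform TAI seconds and the display layer applies a leap-second table. A minimal sketch, assuming a machine-local table of TAI-UTC offsets (the two entries below are real historical values, but truncated to illustration length); the function name is mine.

```python
import datetime

# (date the offset took effect, TAI-UTC in seconds) -- illustrative subset
LEAPS = [
    (datetime.datetime(1999, 1, 1), 32),
    (datetime.datetime(2006, 1, 1), 33),
]

def tai_to_utc(tai: datetime.datetime) -> datetime.datetime:
    """Convert an internal TAI timestamp to UTC for display.

    Note: the inserted leap second itself (UTC 23:59:60) has no civil
    label here, so two TAI seconds around the leap collapse onto the
    same displayed UTC second -- the usual table-lookup caveat.
    """
    offset = 0
    for when, tai_minus_utc in LEAPS:
        # an entry applies once UTC passes `when`, i.e. TAI >= when + offset
        if tai >= when + datetime.timedelta(seconds=tai_minus_utc):
            offset = tai_minus_utc
    return tai - datetime.timedelta(seconds=offset)
```

With this split, "leapers vs. anti-leapers" only changes the display function, not the internal timescale, which is the attraction of the proposal.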
On Tue, 30 Aug 2005, M. Warner Losh wrote: Leap seconds cost actual companies lots of $$$. I know that I've personally put in about 50 hours to leap second issues since July 1, and others in my company have put in even more in testing, shipping equipment to the simulator facility, writing simulation software for testing all our products that couldn't be shipped to the simulation facility, etc. While it is the cost of doing business, implementing and conforming to this standard is expensive. Warner Part of the previous traffic in this interminable argument is that hard figures are lacking both for the implementation of leap seconds and for their demise. I would have thought that part of the answer to the difficulty in implementation and testing would be to use an open-source library of tried and tested algorithms. I don't quite understand why software engineers seem to feel the need to write new leap-second handling code every time they invent a new gadget. Peter.
I present below a distillation of many of the comments which mention POSIX from the leapsecs mailing list. I apologise unashamedly for my cuts and selections, and apologise profusely on the off chance I got the attributions wrong. I started doing this for my own use, but then thought perhaps some of you might be interested in seeing the material again in summary form. It's interesting that some people quote the POSIX standard to support their argument about leap seconds, yet have variant opinions. Perhaps this is telling us that some of us find the POSIX standard a bit too hard to understand, or, perhaps, that it could be better written. Also, there does seem to be a tendency for people to think that the problems with timekeeping are best solved by others changing their ways; after all, I'm already perfect. Now that we can synchronize clocks globally and beyond, we see that there are problems all over the shop. A global solution is required, not just a fix to somebody else's bit of software, but one which considers civil-legal time, scientific time, POSIX-like standards, ntp-like time distribution and implementation details. You wouldn't think a second here or there would matter that much... Peter. ---Selected Quotes from LeapSecs: Markus Kuhn considers: g) that numerous information and communication systems use an internal time scale based on a fixed length of the day of 86400 seconds, in which there exists no unique representation for points in time during an inserted UTC leap second, including the widely used POSIX time scale defined by ISO/IEC 9945-1:1996 in section 2.2.2.113, Garrett Wollman notes: The requirement that I've heard most commonly is much simpler: there must be 86,400 nominal seconds per nominal day, and nominal days must be the same ordinal and duration as provided by local law and custom.
The POSIX specification makes the former requirement explicit, by giving a formula purporting to relate ``seconds since the epoch'' to a civil date and time (not accounting for time zones). Paul Eggert points out: the UTC markers are 23:59:59, 23:59:60, 00:00:00. Second -- and this is a more subtle point -- UTC is set back immediately after an inserted leap second, which means that if your clock has 86,400 seconds per day (as is required for POSIX applications, for example), then it should tick 23:59:59, 00:00:00, 00:00:00. The POSIX clock is not set back to 23:59:59; it is set back to 00:00:00. It is this sort of confusion (even among experts!) that causes many people to think that there must be a better way to handle civil time, a way that does not involve discontinuities. Michael Deckers responded: What you describe may be required by POSIX but it is wrong for UTC: the second starting with the marker 23:59:60 is called a leap second in UTC and (more importantly) it belongs to June or December, not to July or January. I quote from [ITU-R Rec. 460-4, section 2.2]: A positive leap-second begins at 23h 59m 60s and ends at 0h 0m 0s of the first day of the following month. Markus Kuhn adds: What *does* literally get set back indeed is the POSIX clock and similar representations, which use by definition the same numeric second counter value to represent (day D) 23:59:60.xxx and (day D+1) 00:00:00.xxx. That is obviously unpleasant, as it leads to non-causal timestamps and it is easy to construct scenarios where this could mess up things in theory. and, plugging UTS: It's merely a common misunderstanding of the definition of POSIX timestamps.
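The formula these quotes argue about can be transcribed directly; below is the "Seconds Since the Epoch" expression as I recall it from the POSIX.1-2001 text (integer division throughout; tm_year counts years since 1900, tm_yday days since 1 January). It makes Eggert's and Kuhn's point concrete: 23:59:60 and the following midnight collapse onto the same time_t value.

```python
# POSIX "Seconds Since the Epoch" formula (POSIX.1-2001 form, with the
# century corrections; the 1996 edition lacked the last two terms).

def posix_seconds(tm_sec, tm_min, tm_hour, tm_yday, tm_year):
    return (tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400
            + (tm_year - 70) * 31536000
            + ((tm_year - 69) // 4) * 86400
            - ((tm_year - 1) // 100) * 86400
            + ((tm_year + 299) // 400) * 86400)

# The leap second at the end of 2005-12-31 (tm_yday=364, tm_year=105):
t_leap = posix_seconds(60, 59, 23, 364, 105)   # broken-down time 23:59:60
t_next = posix_seconds(0, 0, 0, 0, 106)        # 2006-01-01 00:00:00
print(t_leap, t_next)   # the same value: the "set back to 00:00:00" collision
```

Because the formula is a pure function of the broken-down civil time, there is simply no value left over for 23:59:60, which is the non-uniqueness Kuhn's consideration (g) describes.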
There exists already a perfectly simple algorithmic leap-second-table-free mapping between Unix-style timestamps and UTC, specified formally in ISO/IEC 9945-1:1996, Section 2.2.2.113 http://www.cl.cam.ac.uk/~mgk25/volatile/posix-2-2-2-113.pdf Unix timestamps have always been meant to be an encoding of a best-effort approximation of UTC. They have always counted the non-leap seconds since 1970-01-01. The only minor problem is that the value 23:59:60 cannot be represented uniquely in the time_t encoding, but that is in practice elegantly circumvented by changing the length of the Unix second near a UTC leap second by less than a percent (UTC smoothing, something which I suggest should be standardized formally for Unix-style timestamps to improve interoperability of timestamps near leap seconds). The older POSIX.1:1996 interpretation above could be quoted as implying that time_t has to jump back during a leap second, because the formula provided leads to the same numeric value for 23:59:60 and 00:00:00 the next day (unfortunately, that is still what the Linux NTP kernel support does today). The POSIX.1:2001 revision softened the definition in order to include the option of UTC smoothing into what it allows, making it possible to use a more graceful leap second representation in time_t, such as for example my UTS proposal. Glen Seeds
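The smoothing Kuhn describes can be sketched as a piecewise-linear map: instead of repeating a timestamp at the leap, run the clock fractionally slow over a window beforehand so it absorbs the inserted second monotonically. This is only a sketch in the spirit of the UTS idea; the 1000-second window is my assumption for illustration, and Kuhn's actual UTS specification should be consulted for the real parameters.

```python
WINDOW = 1000.0  # assumed smoothing interval ending at the leap (s)

def smoothed(true_seconds: float, leap_end: float) -> float:
    """Map a linear count of elapsed SI seconds, in which one extra
    second is inserted ending at `leap_end`, onto a smooth
    86400-seconds-per-day scale with no repeated values."""
    start = leap_end - WINDOW
    if true_seconds <= start:
        return true_seconds
    if true_seconds >= leap_end:
        return true_seconds - 1.0        # after the leap: one second behind
    # inside the window: tick slow by 1/WINDOW, absorbing the leap second
    return start + (true_seconds - start) * (WINDOW - 1.0) / WINDOW
```

The map is continuous and strictly increasing, so every smoothed timestamp is unique and causality of timestamps across the leap is preserved, which is precisely what the naive "jump back" time_t behaviour loses.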
On Fri, 18 Nov 2005, Ed Davies wrote: On the other hand, I rather snigger at the reservation of the word universal to mean time based on the Earth's rotation. It's all rather parochial but it is the established terminology. Doesn't Universal hint at the join of the SI second and Solar Time? Pete. Ed.
Interesting to see a commercial company using leap seconds as a positive marketing play. Pete. -- Forwarded message -- Date: Thu, 1 Dec 2005 10:02:01 -0500 From: Symmetricom TTM Division [EMAIL PROTECTED] To: [EMAIL PROTECTED] Subject: Leap Second & Symmetricom – What You Need to Know ARE YOUR IT SYSTEMS READY to ADD a LEAP SECOND on DECEMBER 31, 2005? A Leap Second will be added by the world's timekeepers on December 31, 2005. The leap second insertion increases the length of the last minute of the UTC day to 61 seconds. ** The Effect of Leap Seconds on Symmetricom products ** Symmetricom has completed simulation testing of all of our time and frequency receivers, time and frequency processor modules and network time servers to characterize their behaviors, and then appropriately distribute the leap second information. This is not just our most recent products, but also those that have been out in the field for quite a while. To find out the Leap Second test results for your Symmetricom product, or if you are interested in the reasoning behind Leap Seconds, just click on or visit the following URL: http://www.symmttm.com/leapsecond/
On Fri, 9 Dec 2005, Clive D.W. Feather wrote: boundary than to deal with stuff coming in. In other words, it's easier to only buy widgets from ISO 9000 compliant suppliers than to provide an inbound widget quality test department. From what I understand from some of the recent emails, you would not have to provide an inbound widget quality test department, but rather an inbound widget manufacturer's quality control procedure test department. This is to keep consistency with the model that ISO9000 compliance means your products can be crap as long as you document how you arrive at that assessment. Pete.
Perhaps I was a little hard, and I certainly make plenty of typos when dashing off a semi-formal email such as this. When publishing a technical paper, however, in a journal or on the Web, I do try to give it a quick proof-read (preferably by someone else). Such in-your-face spelling errors as the one I barked at indicate that the author did not check the text too carefully, and give a statistical hint that there may be other, non-obvious errors, perhaps in the grammar, changing the meaning from that intended, or in the mathematical formulation. Or the guy's software. Pete. On Tue, 3 Jan 2006, Randy Kaelber wrote: On Tue, Jan 03, 2006 at 07:42:31AM +, Peter Bunclark wrote: And these Rocket Scientists can't even spell. Perhaps they can't read, I hope you are now aware that your spelling on this list from this point forward now needs to be flawless. ;-) -- Randy Kaelber [EMAIL PROTECTED] Scientific Software Engineer Mars Space Flight Facility, Department of Geological Sciences Arizona State University, Tempe, Arizona, USA
On Sat, 7 Jan 2006, Poul-Henning Kamp wrote: What Astronomers use UTC for, in your own many times repeated words, is a convenient approximation of UT1, and consequently it follows that if instead of an approximation astronomers used the Real Thing, leap seconds could harmlessly be removed from UTC. Too simple; many old telescopes, with equatorial mounts, such as the historic telescopes at the Institute of Astronomy where I work, do indeed use UTC as a UT1 approximation. The time error involved in this is a small offset in one axis, which you calibrate out on a clock star. Research-quality telescopes, in particular all the ones built in the last few decades on alt-azimuth mounts, do of course use UT1; a 0.9s error would be a complex ~10 arcsec error in both axes and give a quite useless pointing performance. However, UTC is often used as a UT1 delivery system; because it's an international standard, is widely available, and DUT1 is guaranteed to be less than 0.9s, it's a natural choice as a supplier of time. Interestingly, because control algorithms tend to be rigorous, a large DUT1 probably would be OK in itself (there would be a cost involved in checking that this would be so), but certainly in the case of a couple of telescope control systems of which I have the required knowledge, the DUT1 input method does a 0.9 second range check. Peter.
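The two mount types behave differently because a time error is purely an hour-angle error: 0.9 s of time is a fixed 0.9 × 15 ≈ 13.5 arcsec offset in hour angle, which an equatorial mount calibrates out in one axis, but which an alt-azimuth mount sees as a position-dependent error in both axes. A rough numerical illustration; the site latitude and the example pointing below are made-up values, not from any real control system.

```python
import math

def altaz(ha_deg, dec_deg, lat_deg):
    """Hour angle / declination to altitude / azimuth (degrees, az from north)."""
    ha, dec, lat = map(math.radians, (ha_deg, dec_deg, lat_deg))
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    alt = math.asin(sin_alt)
    az = math.atan2(-math.sin(ha) * math.cos(dec),
                    math.sin(dec) * math.cos(lat)
                    - math.cos(dec) * math.sin(lat) * math.cos(ha))
    return math.degrees(alt), math.degrees(az)

dut1_err = 0.9                      # seconds of time error
dha = dut1_err * 15.0 / 3600.0      # -> degrees of hour angle (13.5 arcsec)

lat = 28.76                         # roughly La Palma (illustrative)
alt1, az1 = altaz(30.0, 40.0, lat)          # an arbitrary example pointing
alt2, az2 = altaz(30.0 + dha, 40.0, lat)    # same pointing, time shifted 0.9 s
alt_err = abs(alt2 - alt1) * 3600.0
az_err = abs(az2 - az1) * 3600.0
print(f'alt error {alt_err:.1f}", az axis error {az_err:.1f}"')
```

For the example pointing the 13.5 arcsec of hour angle splits into roughly 10 arcsec of altitude error plus an azimuth error, consistent with the "~10 arcsec error in both axes" figure above; the split changes across the sky, which is what makes it uncalibratable on an alt-az mount.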
On Sun, 8 Jan 2006, Tom Van Baak wrote: between astronomical and atomic timescales. Could we rephrase that between geophysical and atomic timescales ? Astronomers measure it and have to compensate for it, not cause it. Reminds me bitterly of the widely reported loss of Mars Climate Orbiter being due to a confusion of metric and *english* units, like it was our fault. Pete.
On Sun, 8 Jan 2006, Tom Van Baak wrote: Peter, So where do these modern telescopes get UT1? Do you or The last time I was involved personally was during my time as a support astronomer at the Isaac Newton Group on La Palma in the early nineties. We had a radio receiver which required upcoming leapseconds to be entered manually ahead of time. This provided a one-pulse-per-second UTC interrupt to the telescope control computers. The TCS computers were programmed with an upcoming leapsecond, and with the corresponding jump in DUT1. To compute fractions of a UTC second, the computer adds its own clock to the one-second interrupt count, which gives high precision. The whole system gives UT1 to high precision throughout a leapsecond event and beyond. Pete.
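The scheme described above can be sketched as: count whole UTC seconds from the receiver's interrupts, latch the computer's own free-running clock at each interrupt, and interpolate the fraction of the current second from that clock; DUT1 (entered ahead of time, like the leapsecond) converts the result to UT1. All the names below are mine, a sketch rather than the actual TCS code.

```python
class TelescopeClock:
    """Whole seconds from a 1 PPS UTC interrupt; fractions from a local clock."""

    def __init__(self, dut1: float):
        self.utc_seconds = 0       # whole UTC seconds counted from interrupts
        self.latched_local = 0.0   # local clock reading at the last interrupt
        self.dut1 = dut1           # UT1 - UTC, stepped when a leapsecond occurs

    def on_pps_interrupt(self, local_now: float):
        """Called once per second by the radio receiver."""
        self.utc_seconds += 1
        self.latched_local = local_now

    def ut1(self, local_now: float) -> float:
        """UT1 in seconds since the count started."""
        frac = local_now - self.latched_local   # fraction of the current second
        return self.utc_seconds + frac + self.dut1

clk = TelescopeClock(dut1=-0.3)      # illustrative DUT1 value
clk.on_pps_interrupt(local_now=100.0)
print(clk.ut1(100.25))               # 1.25 s of UTC, minus 0.3 s of DUT1
```

At a programmed leapsecond the interrupt count simply does not advance the UTC label for one tick while DUT1 jumps by the matching +1 s, so UT1 stays continuous throughout the event, which is the property the original system was built for.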
On Tue, 10 Jan 2006, Tom Van Baak wrote: have no leap seconds. Astronomers appear to avoid using MJD altogether. Good grief. MJD is used widely in astronomy, for example in variability studies where you want a real number to represent time rather than deal with the complications of parsing a date. It tends to be written into the FITS header of practically every data file observed. Pete.
On Tue, 10 Jan 2006, Poul-Henning Kamp wrote: In message [EMAIL PROTECTED], Peter Bunclark writes: On Tue, 10 Jan 2006, Tom Van Baak wrote: have no leap seconds. Astronomers appear to avoid using MJD altogether. Good grief. MJD is used widely in astronomy, for example in variability studies where you want a real number to represent time rather than deal with the complications of parsing a date. It tends to be written into the FITS header of practically every data file observed. So how do you deal with fractional days in that format? With decimals. Pete.
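The "real number for time" point is concrete: the Unix epoch 1970-01-01 is MJD 40587, so a fractional MJD falls straight out of a Unix timestamp. A one-line sketch (ignoring leap seconds, as MJD(UTC) tags in data headers usually do):

```python
def unix_to_mjd(unix_seconds: float) -> float:
    """Unix time to fractional Modified Julian Date (UTC, leap seconds ignored)."""
    return unix_seconds / 86400.0 + 40587.0

print(unix_to_mjd(1136073600))            # 2006-01-01 00:00:00 -> 53736.0
print(unix_to_mjd(1136073600 + 43200))    # noon the same day  -> 53736.5
```

The decimal fraction is exactly the fraction of the day elapsed, which is why MJD is convenient for variability work: differencing two observation times is ordinary floating-point subtraction, no calendar parsing required.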
On Mon, 9 Jan 2006, Tim Shepard wrote: wot, no attribution of quotes? and you still cannot even get it [TAI] reliably from your I still think NTP should have distribute TAI, but I understand using Was your failure to form a past-participle a Freudian slip? I'm with you if you really mean NTP should distribute TAI!!! Pete.
On Sun, 22 Jan 2006, M. Warner Losh wrote: The short answer is that you cannot get a time feed of TAI, so the So isn't this one of the things we want to fix in the brave new world of joined-up timekeeping? Distribute (very close to) TAI, keep the kernel PLLs sweet, move leap second handling to user-space and thus make debugging very easy, then everyone can get their timescale of choice as an f(TAI)? Peter.
On Mon, 23 Jan 2006, John Cowan wrote: Rob Seaman scripsit: that it can be reliably recovered from observations whenever and wherever needed (once you are located with respect to a meridian, of course). I don't understand this. You can't shoot the mean sun with a sextant, only the friendly (apparent, whatever) sun. So at the very least you need an analemma. I don't think Rob meant the above to be a complete course on navigation! In any case, the majority of the world has managed to live with the fact that the day-of-month can no longer be recovered by examining the moon, although if we were still hunter-gatherers a purely lunar calendar would make a lot of sense. Good example of a timekeeping decision made by a (very tiny) minority over the majority. How nice indeed it would be if the months were fixed to match lunations. February could be a leap-month which changes length when necessary to keep the synchronization... Pete.
On Fri, 2 Jun 2006 [EMAIL PROTECTED] wrote: We intentionally try to be silent in this forum. Why? Peter.
On Thu, 8 Jun 2006, Rob Seaman wrote: Clive D.W. Feather wrote: March was the first month of the year; look at the derivation of September, for example. Makes the zero vs. one indexing question of C and FORTRAN programmers look sane. I've pointed people to the whole 7, 8, 9, 10 sequence from September to December on those (admittedly rare) occasions when the issue has come up. Presumably other languages agree in usage, which would be another indicator of the age of the names of the months. Hang on, I thought the numbering started Jan=1 ... Dec=10 and got interrupted when Julius Caesar put an extra month in and so did Augustus... Hands up if you wish you had the authority to swing that kind of timekeeping standardization adjustment. Pete.
On Thu, 8 Jun 2006, Rob Seaman wrote: I thought Julius renamed some high value summer month and wanna-be Augustus did likewise, stealing a day from February to make August the same length. If they put two extra months in, where were those 62 days originally? Yes of course, and a quick google as usual turns up a well-written account: http://www.infoplease.com/spot/99aughistory1.html
On Fri, 23 Jun 2006, Joe Fitzgerald wrote: Steve Allen wrote: Artist Felicity Hickson created a documentary of 23 people speaking for 23 seconds each. Did any of them start talking at 23:59:37 31 December 2005 UTC? If so, how long did they end up talking? The duration was timed in SI seconds, of course, rather than attempting the error-prone process of subtracting two calendar dates. -Joe Fitzgerald Pete.
On Wed, 13 Dec 2006, Ed Davies wrote: Rob Seaman wrote: I'm given to wonder how much of the friction on this mailing list is simply due to the shortcomings in the technology that implements it. I've appended a message I sent in August with four plots attached. Can someone tell me whether it is readable now or was successfully delivered back then? I rummaged around on the list archive and on archives accessible via google and found no copy of this message that survived the communications medium. In Thunderbird on Ubuntu Linux it looked fine in both your original post and the repeat you attached - so any problems are down to the reader and not the transmission, I think. Ed. Fine on Solaris 10. Pete.
On Tue, 2 Jan 2007, Rob Seaman wrote: Daniel R. Tobias replies to Poul-Henning Kamp: Has anybody calculated how much energy is required to change the Earth's rotation fast enough to make this rule relevant? Superman could do it. Or perhaps he could nudge the Earth's rotation just enough to make the length of a mean solar day exactly equal 86,400 SI seconds. Only briefly. Consider the LOD plots from http://www.ucolick.org/~sla/leapsecs/dutc.html. The Earth wobbles like a top, varying its speed even if tidal slowing is ignored. Actually, rather than being merely a troublemaker, the Moon serves to stabilize the Earth's orientation. The Rare Earth Hypothesis makes a strong case that a large Moon and other unlikely processes such as continental drift are required for multicellular life to evolve, in addition to the more familiar issues of a high system metal content and a stable planetary orbit at a distance permitting liquid water. Without the Moon, the Earth could nod through large angles, lying on its side or perhaps even rotating retrograde every few million years. Try making sense of timekeeping under such circumstances. Rob Seaman NOAO Hang on a minute, statistically planets in the Solar System do not have a large moon and yet are upright; for example Mars comes very close to the conditions required to generate a leapseconds email exploder. Pete.
On Wed, 3 Jan 2007, Poul-Henning Kamp wrote: Hang on a minute, statistically planets in the Solar System do not have a large moon and yet are upright; for example Mars comes very close to the conditions required to generate a leapseconds email exploder. As far as I know the atmosphere is far too cold for that :-) Similar to our polar regions, where whales scoff krill all summer long! A bit more mass, a bit more atmospheric pressure, and OK, maybe a bit closer to the Sun... Of course, life may have flourished on Mars 3 billion years ago and then the Martians introduced the leap hour and the rest is pre-history... Pete.
On Thu, 4 Jan 2007, Tony Finch wrote: On Thu, 4 Jan 2007, Zefram wrote: The solution is to just let the clock run, never adjust it, and treat it as an independent seconds count. You don't care about it showing the wrong time, because you don't treat its output as an absolute time. Instead, collect your data on how far out it is (or rather, what absolute time - output function it is computing) and add the epoch in software. Any number of users of the same clock can do this without treading on each other's toes. I think that's what I was suggesting :-) Tony. Indeed, isn't this Rob's ship's chronometer? Also, in the context of the mythical device which has to run many years into the future without referring to external leap-second tables, when interaction is eventually resumed you have more chance of recovering the true value of timestamps if it had a chronometer on board and not an incorrectly-set UTC clock. If contact with the device is never recovered, why did it matter what it thought the time was? Peter.
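Zefram's "calibrate, don't adjust" scheme above can be sketched in a few lines: never step the raw counter; instead record occasional (counter, absolute time) comparisons and apply the fitted correction in software when reading it. The linear offset-plus-drift model below is my assumption; the class and method names are mine.

```python
class CalibratedClock:
    """A free-running counter plus a software mapping to absolute time.

    The hardware clock is never adjusted; users each keep their own
    calibration data, so they cannot tread on each other's toes.
    """

    def __init__(self):
        self.samples = []   # (raw_counter, absolute_time) comparison pairs

    def compare(self, raw: float, absolute: float):
        """Record one comparison against a trusted time reference."""
        self.samples.append((raw, absolute))

    def absolute_time(self, raw: float) -> float:
        """Map a raw counter reading to absolute time via a linear fit
        through the first and last comparisons (offset + drift)."""
        (r0, a0), (r1, a1) = self.samples[0], self.samples[-1]
        rate = (a1 - a0) / (r1 - r0)    # absolute seconds per raw tick
        return a0 + (raw - r0) * rate

chron = CalibratedClock()
chron.compare(1000.0, 0.0)      # first comparison with the reference
chron.compare(2000.0, 999.9)    # counter is running slightly fast
print(chron.absolute_time(2500.0))   # approximately 1499.85
```

This is exactly the ship's-chronometer discipline: the instrument is left alone and its known rate is applied at read-out, so a timestamp's true value can be recovered later even if contact with the device is lost for years.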
On Mon, 15 Jan 2007, Tony Finch wrote: On Mon, 15 Jan 2007, Peter Bunclark wrote: http://www.eecis.udel.edu/~mills/ipin.html That page does not seem to mention UTC... Look at the slides. Whoops. In my defense, there has been traffic elsewhere pointing out that authoring in powerpoint is a good way to hide information... And having said that, how utterly disappointing that this project is squandering the opportunity to come up with a time-distribution protocol that is not based on UTC. Once this new regime is bolted into the space program, I guess we're stuck with it until the end of civilisation. Peter.