Re: Introduction of long term scheduling
On Jan 6, 2007, at 11:36, Poul-Henning Kamp wrote:

> B. i) Issue leapseconds with at least twenty times longer notice.

This plan might not be so good from a software engineering point of view. Inevitably software authors would hard-code the known table, and then the software would fail ten years later with the first unexpected leap second.

At least with the present system, programmers are (more) forced to face the reality of the unpredictability of the time-scale.

-- Ashley Yakeley
Re: Introduction of long term scheduling
On Jan 6, 2007, at 13:47, Poul-Henning Kamp wrote:

> In message [EMAIL PROTECTED], Ashley Yakeley writes:
>> On Jan 6, 2007, at 11:36, Poul-Henning Kamp wrote:
>>> B. i) Issue leapseconds with at least twenty times longer notice.
>>
>> This plan might not be so good from a software engineering point of view. Inevitably software authors would hard-code the known table, and then the software would fail ten years later with the first unexpected leap second.
>
> Ten years later is a heck of a lot more acceptable than 7 months later.

Not necessarily. After seven months, or even after two years, there's a better chance that the product is still in active maintenance. Better to find that particular bug early, if someone's been so foolish as to hard-code a leap-second table.

The bug here, by the way, is not that one particular leap second table is wrong. It's the assumption that any fixed table can ever be correct. If you were to make that assumption in your code, then your product would be defective if it's ever used ten years from now (under your plan B). Programs in general tend to be used for a while. Is any of your software from 1996 or before still in use? I should hope so.

Under the present system, however, it's a lot more obvious that a hard-coded leap second table is a bad idea.

-- Ashley Yakeley
Re: Introduction of long term scheduling
On Jan 6, 2007, at 14:43, Poul-Henning Kamp wrote:

> So you think it is appropriate to demand that every computer with a clock should suffer biannual software upgrades if it is not connected to a network where it can get NTP or similar service?

Since that's the consequence of hard-coding a leap-second table, that's exactly what I'm not proposing. Instead, they should suffer biannual updates to their leap-second table. Doing this is an engineering problem, but a known one.

Under your plan B, however, we'd have plenty of software that just wouldn't get upgraded at all, but would simply fail after ten years. That strikes me as worse.

> I know people who will disagree with you:

I don't think you're serious.

Poul-Henning Kamp   | UNIX since Zilog Zeus 3.20
[EMAIL PROTECTED]   | TCP/IP since RFC 956
FreeBSD committer   | BSD since 4.3-tahoe
Don't forget        | one second off since 2018. :-)

-- Ashley Yakeley
Re: Introduction of long term scheduling
On Jan 6, 2007, at 16:18, M. Warner Losh wrote:

> Unfortunately, the kernel has to have a notion of time stepping around a leap-second if it implements ntp. There's no way around that that isn't horribly expensive or difficult to code. The reasons for the kernel's need to know have been enumerated elsewhere...

Presumably it only needs to know the next leap-second to do this, not the whole known table?

-- Ashley Yakeley
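(A hypothetical illustration of what "knowing only the next leap second" might amount to as a data structure. None of this is actual kernel or ntpd code; the names are invented for the sketch.)

    -- The minimal state the question above suggests: just the next scheduled
    -- leap second, if any, rather than the whole historical table.
    type MJD = Integer

    data NextLeap
        = NoLeapScheduled   -- nothing announced within the known horizon
        | InsertAt MJD      -- insert one second at the end of this UTC day
        | DeleteAt MJD      -- delete one second at the end of this UTC day
        deriving (Eq, Show)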
Re: Introduction of long term scheduling
On Jan 5, 2007, at 20:14, Rob Seaman wrote:

> An ISO string is really overkill, MJD can fit into an unsigned short for the next few decades

This isn't really a good idea. Most data formats have been moving away from the compact towards more verbose, from binary to text to XML. There are good reliability and extensibility reasons for this, such as avoiding bit-significance order issues and the ability to sanity-check it just by looking at it textually.

As the author of a library that consumes leap-second tables, my ideal format would look something like this: a text file with the first line giving the MJD of the expiration date, and each subsequent line giving the MJD of the start of the offset period, a tab, and then the UTC-TAI seconds difference.

That said, my notion of UTC is restricted to the step-wise bit after 1972, and others might want more information.

-- Ashley Yakeley
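(A minimal Haskell sketch of a parser for the format just described: first line is the MJD of the expiration date, each subsequent line is the MJD of the start of an offset period, a tab, and the UTC-TAI difference in seconds. The module and names are illustrative, not part of the time package.)

    module LeapTable where

    import Text.Read (readMaybe)

    type MJD = Integer

    data LeapTable = LeapTable
        { ltExpires :: MJD            -- the table promises nothing past this day
        , ltEntries :: [(MJD, Int)]   -- (first day of period, UTC-TAI in seconds)
        }

    parseLeapTable :: String -> Maybe LeapTable
    parseLeapTable text = case lines text of
        []                        -> Nothing
        (expiryLine : entryLines) -> do
            expiry  <- readMaybe expiryLine
            entries <- mapM parseEntry entryLines
            return (LeapTable expiry entries)
      where
        parseEntry l = case break (== '\t') l of
            (mjdStr, '\t' : offsetStr) ->
                (,) <$> readMaybe mjdStr <*> readMaybe offsetStr
            _ -> Nothing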
Re: A lurker surfaces
On Jan 1, 2007, at 22:56, Steve Allen wrote:

> Then let's improve the infrastructure for communicating the best estimation of earth orientation parameters. Then in a world of ubiquitous computing anyone who wants to estimate the current rubber-second-time is free to evaluate the splines or polynomials (or whatever is used) and come up with output devices to display that.

This is fine, but leaves open the question of when 9:00am is here in Seattle. And why not transmit rubber-second-time as well, where technically feasible (such as over the internet)?

What is a good source of earth orientation parameters, btw?

-- Ashley Yakeley
Re: A lurker surfaces
On Jan 2, 2007, at 05:15, Zefram wrote:

> A technical issue: broadcast time signals are phase-locked with the carrier, which is at some exact number of hertz. If the time pulses are every civil second, and that is now 1.00015 s (as it was in 1961), it can't be synchronised with the (say) 60 kHz carrier that must still have exactly 60000 cycles per SI second.

The obvious solution is to transmit rubber time on a rubber frequency.

-- Ashley Yakeley
Re: A lurker surfaces
On Jan 2, 2007, at 11:40, Warner Losh wrote:

> The second technical problem is that the length of a second is implicitly encoded in the carrier for many of the longwave time distribution stations. 10MHz is at SI seconds. For rubber seconds, the broadcast would drift into adjacent bands reserved for other things.

At a stretch of 1000 ns per second, a 10 MHz carrier would drift by 10 Hz. Surely the bandwidth is big enough for that?

> Also, GPS would have to remain in SI seconds. The error in GPS time translates directly to an error in position. Approximately 1m/ns of error (give or take a factor of 3). Rubber seconds would require that the rubber timescale be off by as much as .5s. So GPS has to remain in GPS time (UTC w/o leap seconds, basically). That means that the rubberness of the seconds would need to be broadcast in the datastream.

GPS is TAI. I'm not proposing abandoning TAI for those applications that need it.

-- Ashley Yakeley
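(The arithmetic behind the 10 Hz figure, spelled out as a small Haskell calculation. The 1000 ns figure is the maximum stretch assumed above, not a value taken from any standard.)

    -- Stretching each second by 1000 ns is a fractional frequency offset of 1e-6.
    fractionalOffset :: Double
    fractionalOffset = 1000e-9 / 1      -- 1000 ns per 1 s = 1.0e-6

    -- Applied to a 10 MHz standard-frequency carrier, that is a 10 Hz shift.
    carrierShiftHz :: Double
    carrierShiftHz = 10e6 * fractionalOffset   -- = 10 Hz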
Re: A lurker surfaces
M. Warner Losh wrote:

> GPS is also used for UTC today. Many ntpd's are stratum 1 tied to a GPS receiver.

I imagine two parallel time infrastructures, one synchronised to TAI, the other to rubber mean universal time. Stratum 0 devices for the latter would probably have to use radio. So, sure, there's an infrastructure cost for a sensible time of day...

> ntpd is UTC, by definition.

I wonder how easily NTP could be generalised to transmit different timescales without too much confusion? Using different UDP port numbers might be one option.

-- Ashley Yakeley
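(A purely hypothetical illustration of the port-per-timescale idea; nothing here is part of NTP, and apart from 123, the existing NTP port, the numbers are invented.)

    data Timescale = UTC | TAI | RubberCivil
        deriving (Eq, Show)

    -- One UDP port per served timescale, so clients cannot confuse them.
    timescalePort :: Timescale -> Int
    timescalePort UTC         = 123      -- the existing NTP service
    timescalePort TAI         = 10123    -- invented for this example
    timescalePort RubberCivil = 10124    -- invented for this example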
Re: A lurker surfaces
Magnus Danielson wrote:

> The detailed introduction of the frequency corrections in various sources was different, and getting a coherent view of where UTC actually was was difficult. Since then we have grown to depend on UTC transmission to a higher degree than we did back then. In fact, for many purposes our UTC transmissions are also there to get us SI second traceability for a whole range of applications. If we break the SI second in UTC a whole lot of technology will break. Rubber seconds would be a plain nightmare to introduce and maintain compared to the strange and slightly uncomforting dreams we have with the current leap second scheduling.

If the list will forgive me for airily focussing on the ideal rather than the immediately practical... we should keep TAI and UTC as they are, but create a new timescale for civil time with a new name and its own separate infrastructure. Then we can persuade governments to adopt it. UTC can then fade into irrelevance.

> No, GPS is not TAI. GPS runs its own timescale and it is offset from TAI by 19 seconds, as given in BIPM's Circular T 227:

I meant up to a known conversion. If you have some GPS time, you know the TAI time, and vice versa. That's not the case for UTC, since you don't know what the leap second offset will be if it's too far in the future. Of course you can also extract UTC from a GPS signal.

-- Ashley Yakeley
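(A toy illustration of "known up to a conversion": GPS time differs from TAI by a fixed, published amount, 19 s per the Circular T figure quoted above, unlike UTC, whose future offset from TAI is unknowable. The types are invented for the example, and times are simply counts of seconds on each scale.)

    newtype GPSTime = GPSTime Rational deriving (Eq, Ord, Show)
    newtype TAITime = TAITime Rational deriving (Eq, Ord, Show)

    -- TAI - GPS, constant since the GPS epoch.
    gpsTAIOffset :: Rational
    gpsTAIOffset = 19

    gpsToTAI :: GPSTime -> TAITime
    gpsToTAI (GPSTime s) = TAITime (s + gpsTAIOffset)

    taiToGPS :: TAITime -> GPSTime
    taiToGPS (TAITime s) = GPSTime (s - gpsTAIOffset)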
Re: A lurker surfaces
Magnus Danielson wrote:

> The budget isn't there and the governments already pay good money for the systems in place and are looking to get as much out of it as possible.

Yes, you're probably right: they're likely to prefer to patch up something ultimately broken cheaply rather than fix it properly. I think the best that can be hoped for in the short term is a user-created infrastructure among those who care enough to bother. And just agreeing what the lengths of the seconds should be, or even the schedule for specifying them, is likely to be hard enough.

> Indeed. But it was not what you wrote.

Eh, GPS time is TAI. You just have to know about the odd encoding...

-- Ashley Yakeley
Re: A lurker surfaces
On Dec 30, 2006, at 17:41, Jim Palfreyman wrote:

> The earlier concept of rubber seconds gives me the creeps and I'm glad I wasn't old enough to know about it then!

I rather like the idea, though perhaps not quite the same kind of rubber as was used. I'd like to see an elastic civil second to which SI nanoseconds are added or removed. Perhaps this could be done annually: at the beginning of 2008, the length of the civil second for the year 2009 would be set, with the goal of approaching DUT=0 at the end of 2009. This would mean no nasty unusualities, and match the common intuition that a second is a fixed fraction of a day.

If NTP were to serve up this sort of time, I think one's computer timekeeping would be quite stable. And of course this will work forever, long after everyone else is fretting over how to insert a leap-hour every other week, or whatever. Software should serve human needs, not the other way around. Anyone needing fixed seconds should use TAI.

Actually I was going to suggest that everyone observe local apparent time, and include location instead of time-zone, but I think that would make communication annoying.

-- Ashley Yakeley
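(A back-of-the-envelope sketch of the scheduling arithmetic described above, under stated assumptions: the prediction is made at the start of 2008 for all of 2009, and predictedDUT is the predicted value of UT1 minus civil time at the end of 2009 if civil seconds stayed SI seconds. The prediction itself would come from earth orientation data; here it is simply a parameter, and the names are invented.)

    secondsIn2009 :: Rational
    secondsIn2009 = 365 * 86400   -- 2009 is not a leap year

    -- Length of the 2009 civil second, in SI seconds. If UT1 is predicted to
    -- fall 0.6 s behind civil time (predictedDUT = -0.6), each civil second is
    -- made about 19 ns longer, so civil time also falls 0.6 s behind over the
    -- year and DUT returns to roughly zero at the end of 2009.
    civilSecondLength :: Rational -> Rational
    civilSecondLength predictedDUT = 1 - predictedDUT / secondsIn2009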
Re: A lurker surfaces
On Jan 1, 2007, at 17:03, John Cowan wrote:

> Michael Sokolov scripsit:
>> The people who complain about leap seconds screwing up their interval time computations are usually told to use TAI. They retort that they need interval time *between civil timestamps*. To me that seems like what they are really measuring as interval time is not physical interval time, but how much time has elapsed *in civil society*.
>
> I think this point is quite sound, but I don't quite see what its implications are (or why it makes rubber seconds better than other kinds of adjustments).

One implication is that a leap second insertion is a second of real time, but zero seconds of intuitive civil time. Rubber seconds are appropriate because we have rubber days. People who need absolute time have their own timescale based on some absolute unit (the SI second), but to everyone else, the second is a fraction of the day.

-- Ashley Yakeley
Re: Mechanism to provide tai-utc.dat locally
On Dec 26, 2006, at 23:02, M. Warner Losh wrote:

> Of course, needing to know TAI-UTC offsets leads one to interesting situations. What does one do if one has TAI time, but not UTC and a conversion is asked for?

If it's a time in the future rather than just the current time, the thread may have to block for years... Probably you'd want a quick function that returns a special "don't know" value, throws an exception, or returns a special result code (as in COM). Then the caller can decide what to do.

-- Ashley Yakeley
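(A minimal Haskell sketch of the "don't know" option, assuming the same illustrative table shape as in the earlier parsing sketch: an expiry MJD plus a list of (first day of period, UTC-TAI in seconds) entries sorted by start day. Nothing means the table cannot answer and the caller decides how to react; none of this is the time package's actual API.)

    type MJD = Integer

    data LeapTable = LeapTable
        { ltExpires :: MJD
        , ltEntries :: [(MJD, Int)]
        }

    utcMinusTAIOnDay :: LeapTable -> MJD -> Maybe Int
    utcMinusTAIOnDay table day
        | day >= ltExpires table = Nothing    -- beyond what the table guarantees
        | otherwise =
            case [off | (start, off) <- ltEntries table, start <= day] of
                []   -> Nothing               -- before the first tabulated period
                offs -> Just (last offs)      -- latest period starting on or before the day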
Re: Mechanism to provide tai-utc.dat locally
On Dec 27, 2006, at 06:29, Poul-Henning Kamp wrote:

> That's a pretty bad format. Computers are binary and having pseudo-decimal fields like tv_usec in timeval, tv_nsec in timespec and picoseconds in Haskell is both inefficient and stupid. The fractional part should be a binary field, so that the width can be adjusted to whatever precision and wordsize is relevant.

It's impossible to accurately represent a millisecond using binary fractions. That would be unacceptable for most sub-second use.

A better idea might have been to use Haskell's Rational type for the seconds offset, which is stored as two integers (for numerator and denominator). Instead I used a fixed-point type (internally just an integer from 0 to 86400). It does not separate integer and decimal part.

-- Ashley Yakeley
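(A toy version of the two representations being compared: a Rational count of seconds, and a fixed-point count stored as an integer number of picoseconds. Both represent a millisecond exactly, which a binary fraction cannot; the types are invented for the example and are not the time package's actual ones.)

    newtype RatSeconds   = RatSeconds Rational  deriving (Eq, Ord, Show)
    newtype FixedSeconds = FixedSeconds Integer deriving (Eq, Ord, Show)  -- picoseconds

    oneMillisecondRat :: RatSeconds
    oneMillisecondRat = RatSeconds (1 / 1000)              -- exactly 1/1000 s

    oneMillisecondFixed :: FixedSeconds
    oneMillisecondFixed = FixedSeconds (10 ^ (9 :: Int))   -- exactly 10^9 ps

    -- Arithmetic on the fixed-point form is plain integer arithmetic.
    addFixed :: FixedSeconds -> FixedSeconds -> FixedSeconds
    addFixed (FixedSeconds a) (FixedSeconds b) = FixedSeconds (a + b)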
Re: Mechanism to provide tai-utc.dat locally
On Dec 27, 2006, at 14:32, Poul-Henning Kamp wrote:

>> It's impossible to accurately represent a millisecond using binary fractions. That would be unacceptable for most sub-second use.
>
> Reality check: with a 32bit fraction, the error would be 69 ps.

...which accumulates in arithmetic and causes equality comparisons to fail. This should hold:

  1000 * 1ms == 1s

It won't if you use a binary fraction.

>> A better idea might have been to use Haskell's Rational type for the seconds offset, which is stored as two integers (for numerator and denominator). Instead I used a fixed-point type (internally just an integer from 0 to 86400). It does not separate integer and decimal part.
>
> Yes, let us make it as expensive as possible to operate on timestamps, so that everybody will have to invent their own faster type. NOT!

I'm not seeing the problem here: you can represent an integer in the range 0 to 86400 with a 64-bit type. There's nothing faster than integer arithmetic.

-- Ashley Yakeley
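(A hedged demonstration of the equality argument, comparing a 32-bit binary fraction of a second, the kind of representation being defended above, with a decimal fixed point in picoseconds. The tick counts are worked out for the example, not taken from any particular implementation.)

    import Data.Word (Word64)

    -- One millisecond as 32-bit-fraction ticks: round (0.001 * 2^32) = 4294967,
    -- which is about 69 ps away from a true millisecond.
    msAsBinaryTicks :: Word64
    msAsBinaryTicks = round (0.001 * 2 ^ (32 :: Int) :: Double)

    secondAsBinaryTicks :: Word64
    secondAsBinaryTicks = 2 ^ (32 :: Int)

    -- One millisecond in picoseconds is exact.
    msAsPicos, secondAsPicos :: Integer
    msAsPicos     = 10 ^ (9 :: Int)
    secondAsPicos = 10 ^ (12 :: Int)

    main :: IO ()
    main = do
        -- Fails: 1000 * 1 ms comes out 296 ticks (about 69 ns) short of 1 s.
        print (1000 * msAsBinaryTicks == secondAsBinaryTicks)   -- False
        -- Holds exactly in the decimal fixed-point representation.
        print (1000 * msAsPicos == secondAsPicos)               -- True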
Mechanism to provide tai-utc.dat locally
Hello, I just joined the leap seconds list.

I wrote the time package for the Haskell programming language:

  http://semantic.org/TimeLib/

I include code for making conversions between TAI and UTC, given a leap-second table. I also include code for parsing a tai-utc.dat file into a leap-second table. I do not, however, simply include a leap-second table, as any program compiled with one would be out of date after six months.

This has led me to consider run-time methods of obtaining leap-second information, and how that might be standardised for use by software authors. For instance, I imagine a software package that established a well-known place in the directory hierarchy to find tai-utc.dat and perhaps other earth data files, and was responsible for keeping them up to date. Other software could then make use of the package without each having to implement their own mechanism.

Does this strike people as worthwhile? Does anyone know of an existing effort along these lines?

Thanks,

-- Ashley Yakeley
Seattle WA
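(A rough sketch, in Haskell, of the consuming side of such a mechanism. The path is a hypothetical well-known location that the imagined package would maintain, not an existing convention; parsing the file's contents is left to an existing tai-utc.dat parser and is not shown.)

    import Control.Exception (IOException, try)

    -- Hypothetical well-known location maintained by the proposed package.
    taiUTCPath :: FilePath
    taiUTCPath = "/usr/share/earthdata/tai-utc.dat"

    -- Nothing means the file is missing or unreadable, and the caller must
    -- treat the TAI-UTC offset as unknown or fall back to some other source.
    readTAIUTCFile :: IO (Maybe String)
    readTAIUTCFile = do
        result <- try (readFile taiUTCPath) :: IO (Either IOException String)
        return (either (const Nothing) Just result)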