Hi Donald,

> From: Griggs, Donald [mailto:[EMAIL PROTECTED]
> Sent: Monday, March 01, 2004 10:46 AM
>
> I suppose that for a system to really care about a leap second jump, it
> would have to be at least reasonably synched to the NIST clocks as a
> precondition -- otherwise the normal computer clock drift would mean
> that the clock is off by multiple seconds routinely.
Not necessarily -- you're assuming that the events stored in the database correspond to events that took place on that computer, or at least that use that computer's clock as a frame of reference. But the events can take place in any setting whatsoever. For instance, if the events represent some sort of timing for stock trades, or ..., then the database host's internal clock has no relevance.

Imagine a (contrived) situation where you start with a datetime like "July 1, 1972 at 12:00:00" represented as a count of elapsed seconds in a timescale that does track leap seconds (note that ordinary POSIX/Unix time does not), then repeatedly add 60 seconds to it. Once you pass the leap second at the end of 1972, you'll eventually get to "January 5, 1973 at 11:59:59". By contrast, if you start with the same datetime in SQLite and perform the same operation, you'll eventually get to "January 5, 1973 at 12:00:00", and the results won't agree. (Here I'm assuming that SQLite doesn't do leap seconds, but maybe Richard will reply that it does.) This could violate some application assumptions, and bugs could ensue. By contrast, if you repeatedly add one *minute* (which is not always the same as adding 60 seconds), both systems should behave the same.

I'm not suggesting this error is catastrophic, merely that it's likely present, and I'm not sure the members of this list necessarily have the expertise (or desire) to implement the date/time functions in a really correct way, though users may expect them to *work* in a correct way.

-Ken
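P.S. In case a concrete sketch helps, here's a rough Python illustration of the divergence I'm describing (purely illustrative, not how either system is actually implemented). Python's datetime ignores leap seconds, so I use it to model the calendar-style arithmetic that I'm assuming SQLite does; the leap-second-aware count is modeled by subtracting the end-of-1972 leap second by hand, and the step count is simply chosen to land on January 5, 1973.

    # Rough sketch of the two arithmetics diverging.  Python's datetime
    # ignores leap seconds, so it stands in for calendar-style arithmetic;
    # the leap-second-aware result is modeled by subtracting the one leap
    # second crossed (inserted at 1972-12-31 23:59:60 UTC) by hand.
    from datetime import datetime, timedelta

    start = datetime(1972, 7, 1, 12, 0, 0)         # "July 1, 1972 at 12:00:00" UTC
    leap_boundary = datetime(1973, 1, 1, 0, 0, 0)  # leap second inserted just before this instant
    steps = 270720                                 # 60-second steps to reach Jan 5, 1973 12:00

    # Calendar-style arithmetic (no leap seconds):
    calendar_result = start + timedelta(seconds=60 * steps)

    # Elapsed-seconds arithmetic: the same count of true seconds, but one of
    # them was the inserted leap second, so the wall-clock label lags by one.
    crossed = 1 if calendar_result >= leap_boundary else 0
    elapsed_result = calendar_result - timedelta(seconds=crossed)

    print(calendar_result)   # 1973-01-05 12:00:00
    print(elapsed_result)    # 1973-01-05 11:59:59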