On 8/8/19 2:51 AM, Tim Dunker wrote:
> Dear Ralph,
>
> I keep all our GNU/Linux machines on UTC (i.e., "Etc/UTC"). Our
> timezone is off by one or two hours, but the actual offset does not
> matter to me. What matters to me is to have all systems using the
> same timezone, and for our purposes, nobody cares about our local
> time.
>
>> Can the same thing be done in practice with TAI?
>
> Yes, I guess so, but you have to go to much greater lengths to get it
> to work. Red Hat has a nice article [1] on their website on leap
> second handling in the kernel and in clients like ntpd and chronyd.
> You have probably also read a thread on superuser.com [2], where ntpd
> and chronyd problems are discussed.
>
>> TAI would probably be the more logical way to store and do
>> calculations with time, only including leap seconds when formatting
>> time for human consumption. Or am I wrong in this?
>
> Maybe I am just dumb, but I personally see no advantage in doing this
> extra work. Is TAI a logical choice while UTC is not? Hm. We know
> when a leap second is inserted (or removed, even though that has yet
> to happen), so for all human-readable stuff (log files, data
> analysis, ...), I am happy to have everything on UTC.
>
> That being said, TAI is very nice if you acquire data continuously
> and leap second handling is not ideal on the instrument in question.
> But as long as you do not do that, it seems easier to me to go the
> other way around: keep everything on UTC, but convert to TAI when
> necessary.


If you're setting up events that have to occur at some time in the future, relative to a time now, TAI is your friend, because you don't have to worry about crossing a leap-second boundary.

If it's June 29th, and I want to schedule an event to occur on July 3rd, exactly 400,000 seconds from now, it's nice not to have to worry about whether anyone is counting leap seconds.
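A small Python sketch of where UTC label arithmetic goes wrong. The leap-second table below is a one-entry excerpt (the real leap second of 2015-06-30), for illustration only: naive datetime arithmetic names a wall-clock label that is one real second later than 400,000 elapsed seconds.

```python
from datetime import datetime, timedelta, timezone

# One-entry excerpt of the leap-second table: UTC instants at or before
# which one leap second was inserted (23:59:60 on 2015-06-30).
LEAP_INSERTIONS = [datetime(2015, 7, 1, tzinfo=timezone.utc)]

def leaps_between(t0, t1):
    """Count leap seconds inserted in the half-open interval (t0, t1]."""
    return sum(1 for leap in LEAP_INSERTIONS if t0 < leap <= t1)

start = datetime(2015, 6, 29, tzinfo=timezone.utc)

# Naive UTC arithmetic: datetime knows nothing about leap seconds, so
# this label is actually 400,001 real (SI) seconds after `start`.
naive_end = start + timedelta(seconds=400_000)

# TAI-style arithmetic: 400,000 elapsed SI seconds, one of which was the
# inserted 23:59:60, so the true UTC label is one second earlier.
true_end = naive_end - timedelta(seconds=leaps_between(start, naive_end))

print(naive_end)  # 2015-07-03 15:06:40+00:00
print(true_end)   # 2015-07-03 15:06:39+00:00
```

In a TAI-like scale the addition is just `start + 400_000` seconds, full stop; the leap-table lookup only appears when you convert back to UTC for display.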

Likewise, if you are processing a stream of data, it's nice to be able to just subtract time1 from time2, and not have to worry about whether the timedelta needs an adjustment.
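On Linux this is directly available as CLOCK_TAI, a sketch of the time1/time2 subtraction (CLOCK_TAI only reflects true TAI if the kernel's TAI-UTC offset has been set, e.g. by chronyd or a leap-aware ntpd; the fallback to CLOCK_MONOTONIC here is just to keep the snippet portable):

```python
import time

# CLOCK_TAI is Linux-only; fall back to CLOCK_MONOTONIC elsewhere.
# Both scales are free of leap-second repeats/stalls, so a plain
# subtraction is the true elapsed interval, no adjustment needed.
clock_id = getattr(time, "CLOCK_TAI", time.CLOCK_MONOTONIC)

time1 = time.clock_gettime(clock_id)
time.sleep(0.05)
time2 = time.clock_gettime(clock_id)

delta = time2 - time1
print(f"elapsed: {delta:.3f} s")
```

Do the same subtraction on CLOCK_REALTIME (UTC) across 23:59:60 and the result can be off by a second, or worse if the clock is being smeared.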

Historically, in the space business, spacecraft measured their own time in terms of clock ticks, and you'd uplink commands in terms of clock ticks for that spacecraft. Folks on the ground kept track of time correlation, adjusting for tick rates, Earth-received time, etc., so that data collections, trajectory-correction burns, and so on all happened at the right time.

Fine if your one spacecraft is Cassini and you have a team of dozens to manage it. Or even a Mars rover and a Mars orbiter that need to communicate with each other: there are folks on the ground who can figure it out, and you can manually avoid doing transfers or activities across a leap second. We actually did this with ops on SCaN Testbed on the ISS: we shut down before midnight UTC, waited until around 00:30 UTC, just to make sure the leap second had propagated around (including any "smearing"), and then started ops back up to finish the experiment.


Not so fine if you have 100 spacecraft, each with different clocks, communication schedules, etc.

Do the calculations and data storage/retrieval in a monotonically increasing, constant-rate (or at least continuous in the first couple of derivatives) time scale, and convert to whatever you want for human interpretation. This is especially true if there's more than one system involved, because then you don't have to worry about whether the two systems agree on their interpretations.
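A minimal sketch of that split, with illustrative names (`now_continuous`, `format_for_humans` are not any real API): everything stored and exchanged is a plain count of seconds on a continuous scale, and only the formatting function touches calendars. For brevity the conversion here ignores leap seconds; in a real system `format_for_humans` would consult a leap table, but crucially that's the *only* place the table would appear.

```python
from datetime import datetime, timedelta, timezone

# Arbitrary epoch for the continuous scale (a real system might use the
# TAI epoch or GPS epoch instead).
EPOCH = datetime(2000, 1, 1, tzinfo=timezone.utc)

def now_continuous(utc_now):
    """Seconds since EPOCH: the only representation stored or exchanged."""
    return (utc_now - EPOCH).total_seconds()

def format_for_humans(t):
    """Display boundary: the one place calendars (and, in a real
    system, the leap-second table) get involved."""
    return (EPOCH + timedelta(seconds=t)).isoformat()

# Arithmetic stays trivial, and every system agrees on it:
t1 = now_continuous(datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc))
t2 = t1 + 3600.0              # exactly one hour later, no edge cases
print(format_for_humans(t2))  # 2024-06-01T13:00:00+00:00
```

Two systems exchanging `t1` and `t2` as bare numbers cannot disagree about the interval between them; they can only disagree about how to print them, which is cosmetic.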




_______________________________________________
time-nuts mailing list -- [email protected]
To unsubscribe, go to 
http://lists.febo.com/mailman/listinfo/time-nuts_lists.febo.com
and follow the instructions there.
