> Modified proposal:
> 
> (time-and-date) - return an integer count of
> non-leap seconds since the epoch.  The epoch
> is <given in standard>.  This can be used in
> calendar procedures.
> 
> (seconds-on-timer) - return a real number, as
> accurate as reasonable, which is the time in
> seconds since the interval timer was reset.
> 
> (time-of-reset) - an integer giving the
> time-and-date when the interval timer was
> reset (to zero).
> 
> Maybe there should be a way to create
> timers, in which case those last two
> procedures need an argument (the timer).

FWIW, I like this proposal.  I think it captures the important 
semantics that we need in programming.  A "timer object" is a 
very simple closure, and as a whole this proposal admits 
several different implementation strategies, depending on how 
the clock/counter works on the target device. 
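To make that concrete, here is a minimal sketch of a timer object 
as a closure.  It assumes the proposal's (time-and-date) plus a 
hypothetical host procedure (current-seconds) standing in for 
whatever raw counter the device exposes -- that name is 
illustrative, not standard:

```scheme
;; Sketch only.  (current-seconds) is a stand-in for the host's raw
;; clock/counter; (time-and-date) is the proposal's calendar clock.
(define (make-timer)
  (let ((reset-date (time-and-date))      ; integer, per the proposal
        (reset-tick (current-seconds)))   ; real-valued host counter
    (lambda (msg)
      (case msg
        ((seconds-on-timer) (- (current-seconds) reset-tick))
        ((time-of-reset)    reset-date)
        ((reset!)           (set! reset-date (time-and-date))
                            (set! reset-tick (current-seconds)))))))

;; With timers as first-class objects, the proposal's last two
;; procedures just take the timer as their argument:
;;   (define t (make-timer))
;;   (t 'seconds-on-timer)   ; elapsed seconds since creation/reset
```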

On the naming of names, I think I'd go with a different name
for the procedure time-and-date mentioned above, because the
name "time-and-date" leads me to expect the actual values of 
years/months/days/hours/minutes/seconds to be broken out in the 
return value somehow.  I would suggest an alternate name such as 
"calendrical-seconds" or "calendrical-clock", as opposed to a 
different procedure "absolute-seconds" or "absolute-clock" whose 
semantics would be an absolute count of all seconds, including 
leap seconds, since the epoch. 

As to the behavior of calendrical-seconds during intervals that 
include leap seconds, I am very much in favor of: 

a) making any distinction in behavior optional, because some 
devices and operating systems don't update leap-seconds until 
reboot or repeated time-synch, or don't make them visible to 
programs during the programs' runtimes.

b) allowing the Scheme system to do whatever the underlying 
operating system does (including the weird Unix strategy of 
just slowing down the "calendrical" clock for some 
indeterminate period).  

c) if you have the information available and you want to go 
for canonically correct behavior, having the calendrical-seconds 
clock just plain stop (milliseconds and all) for exactly one 
second while the leap second passes.  
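A sketch of what (c) means in code, assuming a table of the 
absolute times at which leap seconds were inserted (the entries 
below are made up, and absolute->calendrical is a name I'm 
inventing for illustration):

```scheme
;; Derive calendrical seconds from absolute seconds.  During the leap
;; second itself the calendrical clock reports a frozen value -- it
;; "just plain stops" for exactly one second, then resumes seamlessly.
(define leap-second-times '(78796800 94694400))  ; illustrative entries,
                                                 ; sorted ascending

(define (absolute->calendrical abs)
  (let loop ((leaps leap-second-times) (shift 0))
    (cond ((or (null? leaps) (< abs (car leaps)))
           (- abs shift))                   ; ordinary time
          ((< abs (+ (car leaps) 1))        ; inside the leap second:
           (- (car leaps) shift))           ; clock holds still
          (else (loop (cdr leaps) (+ shift 1))))))
```

Note the value at the end of the leap second equals the frozen 
value, so the calendrical clock never runs backward.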

Calendrical seconds are for when you want something to happen 
at an exact time relative to the "consensus" time and date 
in the future -- say, one-tenth of a second after the stock 
market opens on July first of 2012, when your options are 
tradeable. We can assume (I hope) that a trader who's making
such demands of a system will make those demands of a system
whose clock/calendar is kept tightly in synch with that of 
the market he wants to trade on, whatever the general strategy
of the operating system underlying his system.  
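In code, that kind of calendar-pinned scheduling is just a 
subtraction against the proposal's (time-and-date); sleep-seconds 
here is a hypothetical blocking primitive, not part of the proposal:

```scheme
;; Sketch: block until a target calendrical instant, given as an
;; integer count of non-leap seconds since the epoch.
(define (wait-until-calendrical target)
  (let ((now (time-and-date)))
    (if (> target now)
        (sleep-seconds (- target now)))))
```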

Absolute seconds are for when you want something to happen 
exactly a particular number of seconds into the future -- say, 
at the exact moment when you expect Charon or Sedna to occult 
a particular star so you can get a good measurement.  And we 
can also assume (I hope) that an astronomer who's making such
demands of a system will make those demands of a system whose
absolute clock is kept in tight synch with the most accurate 
counter of seconds she can find (probably the atomic clock 
signal).   
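And the exact-interval case falls out of the interval timer 
directly -- here as a busy-wait, purely for illustration; a real 
system would block instead of spinning:

```scheme
;; Sketch: spin until exactly n more seconds have elapsed on the
;; interval timer, per the proposal's (seconds-on-timer).
(define (wait-absolute-seconds n)
  (let ((start (seconds-on-timer)))
    (let loop ()
      (if (< (- (seconds-on-timer) start) n)
          (loop)))))
```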

Bear



_______________________________________________
r6rs-discuss mailing list
r6rs-discuss@lists.r6rs.org
http://lists.r6rs.org/cgi-bin/mailman/listinfo/r6rs-discuss
