Bart Lateur <[EMAIL PROTECTED]> writes:

> Now, on those platforms without 64 bit support, a double float has a lot
> more mantissa bits than 32, typically 50-something (on a total of 64
> bits). This means that all integers of up to 50-odd significant
> bits can be represented exactly. That would be a lot better than the
> current situation of 32 bits.

Everyone I've heard from who has worked on time-handling libraries says
that you absolutely never want to use floating point for time.  Even if
you think the precision is enough to represent it exactly, you don't want
to go there; floating-point rounding *will* find a way to come back and
bite you.

Seconds since epoch is an integral value; using floating point to
represent an integral value is asking for trouble.
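To make the rounding concern concrete, here's a small illustrative sketch
(mine, not from the original posts; Python used purely for demonstration).
It shows the classic failure mode of accumulating sub-second ticks as
floats: 0.1 has no exact binary representation, so the errors compound.

```python
# Accumulate ten 0.1-second ticks as floating point.
# 0.1 cannot be represented exactly in binary, so each addition
# carries a tiny rounding error that never cancels out.
ticks = 0.0
for _ in range(10):
    ticks += 0.1

print(ticks == 1.0)   # False
print(repr(ticks))    # 0.9999999999999999

# The same bookkeeping with integers (tenths of a second) is exact.
int_ticks = sum(1 for _ in range(10))  # 10 tenths
print(int_ticks == 10)  # True
```

Harmless in isolation, but exactly the kind of drift that surfaces as
off-by-one seconds once a float timestamp is truncated or compared.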

As an aside, I also really don't understand why people would want to
increase the precision of the default return value of time beyond one
second.  Sub-second precision *isn't available* on quite a few
platforms, so right away you have portability problems.  It's not used by
the vast majority of applications that currently use time, and I'm quite
sure that looking at lots of real-world Perl code will back me up on this.
It may be significantly more difficult, complicated, or slower to get at
on a given platform than the time in seconds.  I just really don't see the
gain.

Sure, we need an interface to sub-second time for some applications, but
please let's not try to stuff it into a single number with seconds since
epoch.
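One way to keep such an interface separate, in the spirit of POSIX
gettimeofday()'s (tv_sec, tv_usec) pair, is to hand back two integers
rather than one float.  A hypothetical sketch (Python chosen for
illustration; `time.time_ns` is its integer-nanosecond clock):

```python
import time

# Integer nanoseconds since the epoch -- no floating point involved.
ns = time.time_ns()

# Split exactly into whole seconds and leftover nanoseconds.
# divmod on integers introduces no rounding at any step.
sec, nsec = divmod(ns, 10**9)

print(sec, nsec)
```

Applications that only want whole seconds use `sec` and ignore the rest;
nothing is stuffed into a single lossy number.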

-- 
Russ Allbery ([EMAIL PROTECTED])             <http://www.eyrie.org/~eagle/>
