Yes, that's the right way to define it (and PEPs should primarily concern
themselves with crisp definitions).

Isn't it so that you could get timeline arithmetic today by giving each
datetime object a different tzinfo object?
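A minimal sketch of that idea, using the stdlib's fixed-offset `timezone` objects as stand-ins for real (eventually PEP 495-aware) tzinfos — subtraction of aware datetimes carrying *different* tzinfo objects is already done in UTC terms, i.e. it is timeline arithmetic:

```python
from datetime import datetime, timedelta, timezone

# Fixed-offset stand-ins for real tzinfo objects (illustration only).
ny = timezone(timedelta(hours=-5))
la = timezone(timedelta(hours=-8))

x = datetime(2015, 8, 19, 12, 0, tzinfo=ny)  # 17:00 UTC
y = datetime(2015, 8, 19, 12, 0, tzinfo=la)  # 20:00 UTC

# Aware datetimes with different tzinfos are subtracted via their
# UTC offsets: the result is the difference between the instants.
delta = x - y
print(delta)  # -1 day, 21:00:00 (i.e. -3 hours)
```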

On Tue, Aug 18, 2015 at 10:52 PM, Tim Peters <[email protected]> wrote:

> [Guido]
> > ...
> > This discussion sounds overly abstract. ISTM that d(x, y) in timeline
> > arithmetic can be computed as x.timestamp() - y.timestamp(), (and
> converting
> > to a timedelta).
>
> As someone else might say, if you want timestamps, use timestamps ;-)
>
> I want to discourage people from thinking of it that way, because it
> only works in a theoretical framework abstracting away how arithmetic
> actually behaves.  Timestamps in Python suck in a world of
> floating-point pain that I tried hard to keep entirely out of datetime
> module semantics (although I see float operations have increasingly
> wormed their way in).
>
> Everyone who thinks about it soon realizes that a datetime simply has
> "too many bits" to represent faithfully as a Python float, and so also
> as a Python timestamp.  But I think few realize this isn't a problem
> confined to datetimes only our descendants will experience.  It can
> surprise people even today.  For example, here on my second try:
>
> >>> d = datetime.now()
> >>> d
> datetime.datetime(2015, 8, 18, 23, 8, 54, 615774)
> >>> datetime.fromtimestamp(d.timestamp())
> datetime.datetime(2015, 8, 18, 23, 8, 54, 615773)
>
> See?  We can't even expect to round-trip faithfully with current
> datetimes.  It's not really that there "aren't enough bits" to
> represent a current datetime value in a C double, it's that the
> closest binary float approximating the decimal 1439957334.615774 is
> strictly less than that decimal value.  That causes the microsecond
> portion to get chopped to 615773 on the way back.  It _could_ be
> rounded instead, which would make roundtripping work for some number
> of years to come (before it routinely failed again), but rounding
> would cause other surprises.
>
> Anyway, "the right way" to think about timeline arithmetic is the way
> the sample code in PEP 500 spells it: using classic datetime
> arithmetic on datetimes in (our POSIX approximation of) UTC,
> converting to/from other timezones in the obvious ways.  There are no
> surprises then (not after PEP 495-compliant tzinfo objects exist),
> neither in theory nor in how code actually behaves (leaving aside that
> the results won't always match real-life clocks).
>
> If you want to _think_ of that as being equivalent to arithmetic using
> theoretical infinitely-precise Python timestamps, that's fine.  But it
> also means you're over 50 years old and the kids will have a hard time
> understanding you ;-)
>
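For the record, the undershoot Tim describes is easy to verify with the stdlib decimal module — `Decimal(float)` exposes the exact binary value a float stores (a small sketch, not part of the original exchange):

```python
from decimal import Decimal

ts = 1439957334.615774  # d.timestamp() from the example above
exact = Decimal(ts)     # exact value of the nearest IEEE-754 double

# The nearest double is strictly below the decimal literal, so
# truncation chops the microseconds from 615774 to 615773, while
# round-half-even would recover the original value -- for now.
assert exact < Decimal("1439957334.615774")
frac = ts - 1439957334            # exact: both operands representable
print(int(frac * 1_000_000))      # truncation loses a microsecond
print(round(frac * 1_000_000))    # rounding recovers it
```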



-- 
--Guido van Rossum (python.org/~guido)
_______________________________________________
Datetime-SIG mailing list
[email protected]
https://mail.python.org/mailman/listinfo/datetime-sig
The PSF Code of Conduct applies to this mailing list: 
https://www.python.org/psf/codeofconduct/