On Tue, Aug 18, 2015 at 3:22 PM, Alexander Belopolsky <
[email protected]> wrote:
>
> [Alexander Belopolsky]
>
>>> However, I don't see this additional freedom as a big complication.
>>> Even in the common case, it may be easier to implement d(x, y) than to
>>> figure out f(x). The problem with f(x) is that it is the UTC offset
>>> as a function of local time while most TZ database interfaces only
>>> provide UTC offset as a function of UTC time. As a result, it is
>>> often easier to implement d(x, y) (for example, as d(x, y) = g(x) -
>>> g(y)) than to implement f(x).
>
> [Guido van Rossum]
>
>> This discussion sounds overly abstract. ISTM that d(x, y) in timeline
>> arithmetic can be computed as x.timestamp() - y.timestamp() (and
>> converting to a timedelta).
>
> It can be, but currently, x.timestamp() is implemented as (x -
> datetime(1970, 1, 1, tzinfo=timezone.utc)).total_seconds(), so you end up
> defining datetime subtraction in terms of datetime subtraction.
>
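Guido's timestamp-based route can be sketched directly; `timeline_diff` below is a hypothetical helper name, not anything in the stdlib, and it illustrates the suggestion rather than the library internals:

```python
from datetime import datetime, timedelta, timezone

def timeline_diff(x, y):
    # Timeline (absolute-time) difference between two aware datetimes,
    # computed via POSIX timestamps rather than datetime subtraction.
    return timedelta(seconds=x.timestamp() - y.timestamp())

# 15:22 at UTC-4 is 19:22 UTC, so the gap to 12:00 UTC is 7h22m.
x = datetime(2015, 8, 18, 15, 22, tzinfo=timezone(timedelta(hours=-4)))
y = datetime(2015, 8, 18, 12, 0, tzinfo=timezone.utc)
print(timeline_diff(x, y))  # 7:22:00
```

Alexander's objection applies here too: for aware datetimes, timestamp() is itself defined via datetime subtraction, so this only relocates the problem.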
> Let's consider a specific example. Suppose I want to implement a very
> simple timezone like US/Eastern, where I have some simple rules (with a few
> historical variations) that, given year, month, day, hour and the "first"
> flag, will tell me whether DST is in effect. For such a timezone, I can
> easily write a function in C like this:
>
> long long hours_between(int year1, int month1, int day1, int hour1, int first1,
>                         int year2, int month2, int day2, int hour2, int first2)
> {
>     return 24 * (jd(year2, month2, day2) - jd(year1, month1, day1)) +
>            hour2 - dst(year2, month2, day2, hour2, first2) -
>            hour1 + dst(year1, month1, day1, hour1, first1);
> }
>
> where jd and dst are the Julian day and DST functions, each taking under
> 30 machine cycles to execute. With the PEP 500 approach, you have a couple
> of attribute accesses and the unpacking of two datetime buffers between
> t1 - t2 in Python and the hours_between function, and then you are a few
> operations with seconds and microseconds and one new_delta call away from
> the result.
>
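The same computation can be sketched in Python for readers who want to run it; jd here is the standard Julian day number formula, while dst is a toy stand-in rule (an assumption for illustration, not Alexander's code or the real US/Eastern rules):

```python
def jd(year, month, day):
    # Julian day number via the standard Fliegel-Van Flandern formula.
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

def dst(year, month, day, hour, first):
    # Toy DST rule (illustration only): one extra hour April through October.
    return 1 if 4 <= month <= 10 else 0

def hours_between(y1, mo1, d1, h1, f1, y2, mo2, d2, h2, f2):
    # Whole days converted to hours, then corrected for the local-time
    # hours and the DST offset at each endpoint, as in the C version.
    return (24 * (jd(y2, mo2, d2) - jd(y1, mo1, d1))
            + h2 - dst(y2, mo2, d2, h2, f2)
            - h1 + dst(y1, mo1, d1, h1, f1))

# Jan 1 noon to Jul 1 noon 2015 is 181 days, minus one hour of DST.
print(hours_between(2015, 1, 1, 12, 0, 2015, 7, 1, 12, 0))  # 4343
```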
> [Guido van Rossum]
>
>> Similar for adding a datetime and a timedelta. Optimizing this should be
>> possible. IMO the only question is how a datetime object should choose
>> between classic arithmetic[1] and timeline arithmetic. My proposal here is
>> to make that a boolean property of the tzinfo object -- we could either
>> use a marker subclass or an attribute whose absence implies classic
>> arithmetic.
>>
>
> With this proposal, we will need something like this:
>
> def __sub__(self, other):
>     if self.tzinfo is not None and self.tzinfo.strict:
>         self_offset = self.utcoffset()
>         other_offset = other.utcoffset()
>         naive_self = self.replace(tzinfo=None)
>         naive_other = other.replace(tzinfo=None)
>         return naive_self - self_offset - naive_other + other_offset
>     # old logic
>
> So we need to create six intermediate Python objects just to do the math.
> On top of that, we need the utcoffset() method, which is a pain to write in
> C, so we will wrap our optimized dst() function and compute utcoffset() as
> dst(t) + timedelta(hours=-5), creating four more intermediate Python
> objects. At the end of the day, I will not be surprised if aware datetime
> subtraction is 10x slower than naive and every Python textbook recommends
> avoiding arithmetic with aware datetime objects.
>
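To make the shape of that proposal concrete, here is a runnable toy version; the `strict` attribute is the hypothetical flag from Guido's proposal (not an existing API), the DST rule is a crude illustrative assumption rather than the real US/Eastern rules, and `strict_sub` writes out the timeline branch of the sketched `__sub__`:

```python
from datetime import datetime, timedelta, tzinfo

class Eastern(tzinfo):
    # Toy US/Eastern-like zone: fixed -5h base plus a crude DST rule.
    strict = True  # hypothetical marker attribute from the proposal

    def dst(self, dt):
        # Illustration only: DST in effect April through October.
        return timedelta(hours=1) if 4 <= dt.month <= 10 else timedelta(0)

    def utcoffset(self, dt):
        # Composed exactly as described: dst(t) + timedelta(hours=-5).
        return self.dst(dt) + timedelta(hours=-5)

    def tzname(self, dt):
        return "EDT" if self.dst(dt) else "EST"

def strict_sub(a, b):
    # The timeline branch of the proposed __sub__, written out directly.
    return ((a.replace(tzinfo=None) - a.utcoffset())
            - (b.replace(tzinfo=None) - b.utcoffset()))

tz = Eastern()
a = datetime(2015, 7, 1, 12, 0, tzinfo=tz)   # EDT, UTC-4
b = datetime(2015, 1, 1, 12, 0, tzinfo=tz)   # EST, UTC-5
print(a - b)             # classic arithmetic: 181 days, 0:00:00
print(strict_sub(a, b))  # timeline arithmetic: 180 days, 23:00:00
```

The two results differ by exactly the hour of DST crossed, which is the whole point of the classic-vs-timeline distinction being argued here.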
>
I doubt it. Most textbooks aren't that concerned with saving a few cycles.
(Do most Python textbooks even discuss the cost of object creation or
function calls?) Anyways, wouldn't PEP 500 be even slower?
--
--Guido van Rossum (python.org/~guido)
_______________________________________________
Datetime-SIG mailing list
[email protected]
https://mail.python.org/mailman/listinfo/datetime-sig
The PSF Code of Conduct applies to this mailing list:
https://www.python.org/psf/codeofconduct/