On 2016-02-11 6:26 PM, Antoine Pitrou wrote:
On Thu, 11 Feb 2016 18:16:23 -0500
Yury Selivanov <yselivanov...@gmail.com>
wrote:
Yes, spectral_norm is a micro-benchmark, but still, there is a lot of
Python code out there that does some calculation in pure Python, not
involving numpy or pypy.
Can you clarify "a lot"?
Any code that occasionally does "int [op] int" operations.  That code
becomes faster (especially with small ints).  In tight loops it becomes
significantly faster (that's what spectral_norm is doing).
I agree for int addition, subtraction, perhaps multiplication. General
math on small integers is not worth really improving, though, IMO.
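
To make the "tight loops" point concrete, here is a purely illustrative
pure-Python function (made up for this mail, not the actual
spectral_norm code) whose inner loop is dominated by "int [op] int"
on small ints -- exactly the pattern a long fast path would help:

    def pairwise_index_sum(n):
        # Toy workload: nothing but small-int arithmetic in a
        # tight loop.
        total = 0
        for i in range(n):
            for j in range(n):
                # Every operation below is "int [op] int" on small
                # ints, so each one goes through the generic
                # binary-op machinery unless there's a fast path.
                total += (i + j) * (i + j + 1) // 2 + i + 1
        return total

    print(pairwise_index_sum(300))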

Look, 21955 optimizes the following ops (fastint6.patch):

1. +, +=, -, -=, *, *= -- the ones that py2 has a fast path for

2. //, //=, %, %=, >>, >>=, <<, <<= -- these ones are usually
used only on ints, so nothing should be affected negatively

3. /, /= -- these ones are used on floats, ints, decimals, etc.
(see the sketch right below)
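
A rough, hypothetical illustration of where each group typically shows
up (made-up code, not from the patch or from any benchmark):

    from decimal import Decimal

    # Group 1: +, -, * on small ints -- loop index and accumulator
    # arithmetic.
    def triangle(n):
        acc = 0
        for i in range(n):
            acc = acc + i * (i - 1)
        return acc

    # Group 2: //, %, >>, << -- in practice almost always applied
    # to ints (bit twiddling, divmod-style arithmetic).
    def split_byte(x):
        return x >> 4, x % 16, x // 256, x << 1

    # Group 3: / (true division) -- used on floats and Decimals as
    # well as ints, so a long-only fast path must not slow these down.
    def ratios():
        return 1 / 3, 1.0 / 3.0, Decimal(1) / Decimal(3)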


If we decide to optimize group (1), I don't see why we can't
apply the same macro to group (2).  And then it's just
group (3, true division) that we might or might not optimize.

So to me, the real question is: should we optimize
"long [op] long" at all?

+ and - are very common operations.  If fastint6 manages to
make numpy code (not microbenchmarks, but some real
algorithms) even 3-5% slower, then let's just close 21955
as "won't fix".

The problem is that we don't have any good decimal or numpy
benchmarks.  telco is so unstable that I take it less seriously
than spectral_norm.
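
For what it's worth, here is one crude way to get a quick number for
the pure-Python int case with the stdlib timeit module (the loop body
is a made-up stand-in, not telco or spectral_norm):

    import timeit

    def small_int_loop():
        # Small-int arithmetic only; compare the timing across
        # interpreter builds with and without the patch.
        total = 0
        for i in range(1000):
            for j in range(1000):
                total += (i + j) * (i - j)
        return total

    # Best-of-5 wall-clock time for a single pass of the loop.
    best = min(timeit.repeat(small_int_loop, number=1, repeat=5))
    print("best of 5: %.3fs" % best)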


Thanks,
Yury
_______________________________________________
Speed mailing list
Speed@python.org
https://mail.python.org/mailman/listinfo/speed
