On Sun, May 20, 2018 at 11:20 PM, Daniel Foerster <pydsig...@gmail.com>
wrote:

> I would guess that a significant amount of the gain is that he doesn't
> have to len() the list every iteration, plus the item unpacking occurs in C.
>
`len(...)` should be constant-time (the length is stored with the list
object), but indeed caching it in a local variable seems to help
significantly. I think the fact that more of the algorithm occurs in C
is the main driver, though, which also explains why the list
comprehension is fastest.
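
For reference, here's a minimal sketch of the kind of comparison I ran.
The exact algorithm from the original thread isn't reproduced here; the
`filter_even_*` functions and the sample data are just illustrative:

    import timeit

    data = list(range(10_000))

    def filter_even_len_each_pass(items):
        # calls len() on every pass through the loop
        out = []
        i = 0
        while i < len(items):
            if items[i] % 2 == 0:
                out.append(items[i])
            i += 1
        return out

    def filter_even_cached_len(items):
        # hoists len() out of the loop into a local variable
        out = []
        i = 0
        n = len(items)
        while i < n:
            if items[i] % 2 == 0:
                out.append(items[i])
            i += 1
        return out

    def filter_even_comprehension(items):
        # the loop machinery and the appends both happen in C
        return [x for x in items if x % 2 == 0]

    for fn in (filter_even_len_each_pass, filter_even_cached_len,
               filter_even_comprehension):
        t = timeit.timeit(lambda: fn(data), number=100)
        print(f"{fn.__name__}: {t * 1000:.2f} ms per 100 calls")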

It's also worth noting that the runtime of all these algorithms is
milliseconds or tens of milliseconds, even for thousands of elements.
That means the choice between them probably doesn't matter in practice,
but it also means the profiling is necessarily more approximate. I'd
hesitate to read too much into results on smaller element counts, even
if they're fairly repeatable.
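
If anyone wants to squeeze out some of that noise, `timeit.repeat` with
the minimum of several runs is the usual approach (reusing the names
from the sketch above):

    import timeit

    # Repeat the measurement several times and take the minimum; the
    # fastest run is usually the least contaminated by background noise.
    times = timeit.repeat("filter_even_comprehension(data)",
                          globals=globals(), number=100, repeat=7)
    print(f"best of 7: {min(times) * 1000:.2f} ms per 100 calls")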


> I don't know how much JIT would affect anything unless you ran the tests
> in PyPy.
>
That was incorrect terminology on my part. I meant the interpreter; my
tests, at least, were run under CPython. I was thinking of the
optimizations done in the translation to bytecode, more than the fact
that the bytecode is interpreted rather than compiled to machine code.
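
For anyone curious about what the bytecode compiler actually emits, the
`dis` module makes it easy to look; a quick sketch (the exact output
varies between CPython versions):

    import dis

    def loop_version(items):
        out = []
        for x in items:
            if x % 2 == 0:
                out.append(x)
        return out

    def comprehension_version(items):
        return [x for x in items if x % 2 == 0]

    # Compare the instructions generated for each form.
    dis.dis(loop_version)
    dis.dis(comprehension_version)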

Ian
