I would guess that a significant amount of the gain is that MrGumm's version
doesn't have to call len() on the list every iteration, plus the item
unpacking occurs in C. I don't know how much a JIT would affect anything
unless you ran the tests under PyPy; CPython doesn't have one. (A rough
timing harness for reproducing the breakeven figures is sketched after the
quoted thread below.)
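
For reference, this is roughly the shape of the two patterns being compared.
It is not the attached test, just a minimal sketch; `keep` is a placeholder
predicate standing in for whatever condition decides which items survive.

    def remove_in_place(items, keep):
        # Index-based while loop: len() is called again on every iteration,
        # and each del shifts the remaining tail of the list down by one.
        i = 0
        while i < len(items):
            if keep(items[i]):
                i += 1
            else:
                del items[i]
        return items

    def remove_by_rebuild(items, keep):
        # List comprehension: the loop, the item fetch, and the appends all
        # happen in C, and the original list is never mutated.
        return [x for x in items if keep(x)]

Note that the in-place version also pays for element shifting on every del,
on top of the per-iteration interpreter overhead, so the sketch is not a
pure measure of loop overhead.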

On Sun, May 20, 2018, 23:51 Ian Mallett <i...@geometrian.com> wrote:

> On Sun, May 20, 2018 at 9:25 PM, Daniel Foerster <pydsig...@gmail.com>
> wrote:
>
>> How much the exponent on N matters versus the discarded constant factors
>> depends on how big N can get. With the likely sizes of N in a Pygame class,
>> the difference between the algorithms is probably negligible. A quick test
>> with 20% deletion shows that your algorithm becomes more efficient around
>> N=7000, but either can handle well over N=20000 in 10ms. Of course, a list
>> comprehension can handle almost 100,000 in the same 10ms, so the point is
>> rather moot in real-world code ;)
>>
>
> These results were surprising to me, so I coded up a test (see attached!).
>
> I measure the breakeven point at a similar level, 5500 elements. MrGumm's
> version is functionally equivalent to mine, but seems to run much faster
> than either of ours (mine and Daniel's), and almost as fast as the list
> comprehension, which I confirm to be fastest. My guess is that my `while`
> loop (instead of a `for` loop over the elements directly) is confusing the
> Python JIT into not optimizing somehow.
>
> Also, this discussion on StackOverflow is relevant:
> <https://stackoverflow.com/questions/5745881/fast-way-to-remove-a-few-items-from-a-list-queue>.
>
> Ian
>
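
For anyone without the attachment, a rough harness for reproducing the
breakeven measurement could look something like the following. It reuses the
two placeholder functions from the sketch near the top of this message, and
the 20% deletion rate and the N values are just taken from the numbers
quoted above.

    import random
    import timeit

    def bench(n, deletion_rate=0.2, repeats=5):
        # remove_in_place / remove_by_rebuild are the placeholder sketches
        # defined earlier in this message.
        data = [random.random() for _ in range(n)]
        keep = lambda x: x >= deletion_rate  # drops ~20% of items on average

        t_in_place = min(timeit.repeat(
            lambda: remove_in_place(list(data), keep), number=1, repeat=repeats))
        t_rebuild = min(timeit.repeat(
            lambda: remove_by_rebuild(list(data), keep), number=1, repeat=repeats))
        return t_in_place, t_rebuild

    for n in (1000, 5500, 7000, 20000, 100000):
        print(n, bench(n))

The list(data) copy inside each timed call keeps the in-place deletion from
shrinking the shared input between repeats.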
