Josh Rosenberg added the comment:

You're correct about what is going on; aside from bypassing a bounds check 
(when not compiled with asserts enabled), the function it uses to get each 
index is the same one used to implement indexing at the Python layer. For 
every index it looks up the getitem function for the type code, calls it to 
box the element into a PyLongObject, and then performs a generic rich compare.
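
For illustration, a minimal sketch of what that per-element shape amounts to, 
written against the public abstract API (PySequence_GetItem / 
PyObject_RichCompareBool) rather than the module-internal descriptor slot; the 
function name is made up and this is not the actual arraymodule.c code:

    #include <Python.h>

    /* Rough shape of the generic per-element path: each index is boxed
     * into a fresh PyObject (a PyLongObject for the integer type codes)
     * and then compared generically. */
    static int
    array_eq_slow(PyObject *a, PyObject *b, Py_ssize_t n)
    {
        for (Py_ssize_t i = 0; i < n; i++) {
            PyObject *ai = PySequence_GetItem(a, i);  /* allocates a boxed element */
            if (ai == NULL)
                return -1;
            PyObject *bi = PySequence_GetItem(b, i);
            if (bi == NULL) {
                Py_DECREF(ai);
                return -1;
            }
            int eq = PyObject_RichCompareBool(ai, bi, Py_EQ);
            Py_DECREF(ai);
            Py_DECREF(bi);
            if (eq <= 0)
                return eq;                            /* 0: mismatch, -1: error */
        }
        return 1;                                     /* all n elements equal */
    }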

The existing behavior is probably necessary to work with array subclasses, but 
it's also incredibly slow, as you noticed. The main question is whether to keep 
the slow path for subclasses, or (effectively) require that array subclasses 
overriding __getitem__ also override the rich comparison operators to make them 
work as expected.

For cases where the signedness and element size are identical, it's trivial to 
acquire read-only buffers for both arrays and compare the memory directly 
(memcmp for EQ/NE or single-byte elements, wmemcmp for elements whose size 
matches wchar_t, and simple loops for anything else).
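
A rough sketch of the EQ/NE case under those assumptions (the function name is 
invented, and it presumes the caller has already verified that the two arrays 
share a type code, hence the same signedness and item size): acquire read-only 
buffers through the buffer protocol and let memcmp do the work. Ordered 
comparisons would still need a typed loop (or wmemcmp) to find the first 
differing element.

    #include <Python.h>
    #include <string.h>

    /* Sketch of a byte-wise fast path for EQ; NE is just the negation. */
    static int
    array_eq_fast(PyObject *a, PyObject *b)
    {
        Py_buffer va, vb;
        if (PyObject_GetBuffer(a, &va, PyBUF_SIMPLE) < 0)
            return -1;
        if (PyObject_GetBuffer(b, &vb, PyBUF_SIMPLE) < 0) {
            PyBuffer_Release(&va);
            return -1;
        }
        /* Equal length plus identical raw bytes implies element-wise
         * equality when the element representations match exactly. */
        int eq = (va.len == vb.len &&
                  memcmp(va.buf, vb.buf, (size_t)va.len) == 0);
        PyBuffer_Release(&va);
        PyBuffer_Release(&vb);
        return eq;
    }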

----------
nosy: +josh.r

_______________________________________
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue24700>
_______________________________________