Raymond Hettinger <raymond.hettin...@gmail.com> added the comment:

I would rather not do this.  It optimizes for the uncommon case where all the 
objects are identical.  The common case is made slightly worse because the 
identity test is performed twice: once before the call to Py_RichCompareBool() 
and again inside it.  Also, the PR adds clutter that obscures the business 
logic.
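
In Python terms, the loop in question is roughly equivalent to the sketch 
below (an illustration, not the actual C code).  PyObject_RichCompareBool() 
is documented to treat identical objects as equal for Py_EQ, which is why an 
extra pre-check is redundant:

```python
def count(lst, value):
    """Rough Python model of list.count()'s C loop."""
    n = 0
    for item in lst:
        # The `item is value` shortcut models the identity fast path that
        # Py_RichCompareBool() already performs internally; adding another
        # identity test before the call duplicates this work.
        if item is value or item == value:
            n += 1
    return n
```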

Another thought: micro-benchmarks of identity tests require extra care 
because they are highly sensitive to branch prediction failures (see 
https://stackoverflow.com/questions/11227809 ).  A more realistic dataset 
would be:

  import random

  x = 12345
  data = [x] * 100 + list(range(500))
  random.shuffle(data)
  data.count(x)
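
A timing harness over that dataset might look like the following sketch (a 
hypothetical setup, not taken from the PR).  Shuffling interleaves the 100 
identical references with 500 merely-equal-or-unequal objects, so the 
identity branch cannot be predicted reliably:

```python
import random
import timeit

x = 12345
data = [x] * 100 + list(range(500))
random.shuffle(data)  # randomize branch outcomes for the identity test

# Time repeated count() calls; on shuffled data the identity branch is
# taken unpredictably, which is the case a benchmark should exercise.
elapsed = timeit.timeit(lambda: data.count(x), number=10_000)
print(f"{elapsed:.3f}s for 10,000 count() calls")
```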

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue41347>
_______________________________________