Tim Peters <t...@python.org> added the comment:

Dennis, would it be possible to isolate some of the cases with more extreme 
results and run them repeatedly under the same timing framework, as a test of 
how trustworthy the _framework_ is? Decades of bitter experience suggest that 
most benchmarking efforts end up chasing ghosts ;-)

For example, this result:

length=3442, value=ASXABCDHAB...  | 289 us  | 2.36 ms: 8.19x slower (+719%) 

Is that real, or an illusion?
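
One way to check, sketched roughly below: regenerate that one case and time it 
over and over using nothing but the stdlib's timeit. The needle and haystack 
here are made-up random stand-ins (the real benchmark inputs aren't reproduced 
here), so the absolute numbers won't match; the point is only to see how much 
the per-search time wanders from trial to trial.

    import random
    import timeit

    random.seed(12345)
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    # Made-up stand-ins: a random 3442-character needle over a 26-letter
    # alphabet, planted at the end of a larger random haystack.
    needle = "".join(random.choices(ALPHABET, k=3442))
    haystack = "".join(random.choices(ALPHABET, k=1_000_000)) + needle

    for trial in range(10):
        # Best of 5 repeats of 100 searches each; large swings from one
        # trial to the next would point at the timing setup (or the machine)
        # rather than at either search algorithm.
        best = min(timeit.repeat(lambda: haystack.find(needle),
                                 number=100, repeat=5))
        print(f"trial {trial}: {best / 100 * 1e6:.1f} us per search")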

Since the alphabet has only 26 letters, it's all but certain that a needle that 
long has more than one instance of every letter. So the status quo's "Bloom 
filter" will have every relevant bit set, rendering its _most_ effective 
speedup trick useless. That makes it hard (but not impossible) to imagine how 
the status quo ends up being so much faster than a method with more powerful 
analysis to exploit.
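
For what it's worth, "all but certain" is easy to quantify, at least under the 
simplifying assumption that the needle's letters are uniformly random (the 
real benchmark needles may well not be). A union bound in two lines of Python:

    # Chance that at least one of the 26 letters never appears in a random
    # needle of length 3442 drawn uniformly from a 26-letter alphabet.
    p_missing = 26 * (25 / 26) ** 3442
    print(f"P(some letter absent) <= {p_missing:.2e}")   # about 6e-58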

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue41972>
_______________________________________