Serhiy Storchaka <storchaka+cpyt...@gmail.com> added the comment:

A difference of around 10% looks very strange. Tokenizing is just one part 
of the compiler, and it is unlikely to be the bottleneck: there is 
input/output, many memory allocations, encoding and decoding, many 
iterations and recursions. Are you sure that you ran the benchmarks in the 
same environment, multiple times, and got stable results? Could you provide 
a script that reproduces this?
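
For example, a minimal sketch along these lines would be enough (the file 
path and the repeat counts here are arbitrary placeholders, not taken from 
your report):

import timeit

SOURCE_PATH = "Lib/decimal.py"  # placeholder: any reasonably large module

with open(SOURCE_PATH, encoding="utf-8") as f:
    source = f.read()

# compile() exercises the whole tokenizer/parser/compiler pipeline.
# Repeat the measurement several times; on a quiet machine the runs
# should agree to well under the 10% difference in question.
for run in range(5):
    best = min(timeit.repeat(
        lambda: compile(source, SOURCE_PATH, "exec"),
        number=20, repeat=3))
    print("run %d: %.3f s for 20 compiles" % (run, best))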

----------
nosy: +serhiy.storchaka

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue39150>
_______________________________________