2017-04-04 12:06 GMT+02:00 Serhiy Storchaka <storch...@gmail.com>:
> I consider it as a benchmark of the Python interpreter itself.

Don't we have enough benchmarks to test the Python interpreter? I would
prefer to have more realistic use cases than "reimplement pickle in pure
Python". The "unpickle_pure_python" name can be misleading as well to
users exploring speed.python.org data, no?

By the way, I'm open to adding new benchmarks ;-)

> Unfortunately the Python code is different in different versions. Maybe
> write a single cross-version pickle implementation and use it with
> these benchmarks?

That's another good reason to remove the benchmark :-) Other benchmarks
use pinned versions of dependencies (see performance/requirements.txt),
so the code should be more or less the same on all Python versions.
(Except for code differences between Python 2 and Python 3.)

> And indeed, they show progress from 3.5 to 3.7. A slow regression in
> unpickling in 3.5 may be related to adding stronger validation (but
> may not be related).

3.7 is faster than 3.5. Are you running benchmarks with LTO + PGO +
perf system tune?

https://speed.python.org/comparison/?exe=12%2BL%2Bmaster%2C12%2BL%2B3.5&ben=647%2C675&env=1&hor=true&bas=12%2BL%2B3.5&chart=normal+bars
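For reference, a tuned run looks roughly like this (a sketch, assuming
the perf and performance modules are installed; the configure flags are
those of a recent CPython checkout, 3.5 spells PGO as "make profile-opt"
instead, and the benchmark selection is only an example):

    # Build CPython with LTO and PGO
    ./configure --with-lto --enable-optimizations
    make

    # Reduce system jitter (CPU frequency scaling, Turbo Boost, etc.)
    sudo python3 -m perf system tune

    # Run only the pure-Python pickle benchmarks and save the results
    python3 -m performance run \
        --benchmarks=pickle_pure_python,unpickle_pure_python \
        -o pickle.json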
Victor