2016-07-04 16:17 GMT+02:00 Victor Stinner <victor.stin...@gmail.com>:
> I modified the CPython benchmark suite to use my perf module:
> https://hg.python.org/sandbox/benchmarks_perf
Updates with the release of perf 0.6: runner.py now has 3 commands, run, compare and run_compare.

* "run" runs the benchmarks on a single Python; the result can be written into a file
* "compare" takes two JSON files as input and compares them
* "run_compare" is the previous default behaviour: run the benchmarks on two Python versions and then compare the results. The results can also be saved into two JSON files.

The main advantage is that it is now possible to run the benchmark suite only once on the baseline Python, rather than having to run it again for every comparison. Each comparison against a modified Python (run + compare) should therefore be roughly twice as fast (see the example at the end of this mail).

It also becomes possible to exchange full benchmark results (all samples of all processes) as files, rather than just summaries (median +- std dev lines) as text.

TODO:

* update the remaining benchmarks (3 special benchmarks are currently broken)
* rework the code which compares benchmarks
* repair the memory tracking feature?
* continue the implementation using virtual environments and external dependencies

Victor
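
P.S. To make the new workflow concrete, it should look roughly like this (command lines simplified: the option names and paths shown here are only illustrative, not the exact runner.py interface):

    # Run the suite once on the baseline Python and save the full results
    python runner.py run ../baseline/python -o baseline.json

    # For each modified Python: run once, then compare against the saved baseline
    python runner.py run ../patched/python -o patched.json
    python runner.py compare baseline.json patched.json

    # The previous one-shot behaviour is still available
    python runner.py run_compare ../baseline/python ../patched/python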