On 11 October 2016 at 03:15, Elliot Gorokhovsky
<elliot.gorokhov...@gmail.com> wrote:
> There's an option to provide setup code, of course, but I need to set up
> before each trial, not just before the loop.

Typically, I would just run the benchmark separately for each case,
so you'd do something like

# Case 1
python -m perf timeit -s 'setup; code; here' 'code; to; be; timed; here'
[Results 1]
# Case 2
python -m perf timeit -s 'setup; code; here' 'code; to; be; timed; here'
[Results 2]
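
For instance (a purely hypothetical pair of runs, just to illustrate the
pattern; the list size and the choice of sorted() are mine, not taken from
your benchmark):

# Case 1: random data
python -m perf timeit -s 'import random; lst = [random.random() for _ in range(1000)]' 'sorted(lst)'
# Case 2: already-sorted data
python -m perf timeit -s 'lst = list(range(1000))' 'sorted(lst)'

Because sorted() doesn't mutate lst, it's fine for the setup to run only
once rather than before every timed call, and perf handles the
repetitions and statistics for you.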

The other advantage of doing it this way is that you can post your
benchmark command lines, which lets people see exactly what you're
timing, and if there *are* any problems (such as a method lookup that
skews the results) they can point them out.
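
As a made-up example of the kind of thing that's easy to spot from a
posted command line: something like

python -m perf timeit -s 'lst = list(range(1000))' 'lst.sort'

only times the attribute lookup, because lst.sort is never actually
called, and a reader can see that at a glance.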

Paul