Right, that sounds good, but there's just one thing I don't understand that's keeping me from using it. Namely, I would define a benchmark list L in my setup, and then I would have code="F=FastList(L); F.fastsort()". The problem here is that I'm measuring the constructor time along with the sort time, right? Wouldn't that mess up the benchmark? Or does timeit separate the times?
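To make the concern concrete, here's a sketch with the stdlib timeit module, using list / list.sort as stand-ins for FastList / fastsort (which aren't available here). The setup string runs once, before the timing loop, so the timed statement pays for construction on every iteration; timeit does not separate the two times for you. One common workaround is to time the constructor alone under identical conditions and subtract:

```python
import timeit

# 'list' and 'list.sort' stand in for the hypothetical FastList/fastsort.
# setup runs ONCE; everything in the timed statement runs per iteration.
setup = "import random; random.seed(0); L = [random.random() for _ in range(10000)]"

# Constructor + sort timed together (what the code= above would measure).
# Note list(L) copies, so each iteration sorts a fresh unsorted copy.
both = timeit.timeit("F = list(L); F.sort()", setup=setup, number=100)

# Constructor alone, same setup, same number of iterations.
ctor = timeit.timeit("F = list(L)", setup=setup, number=100)

# Subtracting gives a rough estimate of the sort time by itself.
sort_only = both - ctor
print(both, ctor, sort_only)
```

This is only an estimate (the two runs see different cache and noise conditions), which is part of why running each case as a separate, self-contained command line, as suggested below, is attractive.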
On Tue, Oct 11, 2016, 2:22 AM Paul Moore <p.f.mo...@gmail.com> wrote:

> On 11 October 2016 at 03:15, Elliot Gorokhovsky
> <elliot.gorokhov...@gmail.com> wrote:
> > There's an option to provide setup code, of course, but I need to set up
> > before each trial, not just before the loop.
>
> Typically, I would just run the benchmark separately for each case,
> and then you'd do
>
> # Case 1
> python -m perf timeit -s 'setup; code; here' 'code; to; be; timed; here'
> [Results 1]
> # Case 2
> python -m perf timeit -s 'setup; code; here' 'code; to; be; timed; here'
> [Results 2]
>
> The other advantage of doing it this way is that you can post your
> benchmark command lines, which will allow people to see what you're
> timing, and if there *are* any problems (such as a method lookup that
> skews the results) people can point them out.
>
> Paul
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com