On Fri, Apr 3, 2009 at 10:50 AM, Antoine Pitrou <solip...@pitrou.net> wrote:
> Collin Winter <collinw <at> gmail.com> writes:
>>
>> - I wish PyBench actually did more isolation.
>> Call.py:ComplexPythonFunctionCalls is on my mind right now; I wish it
>> didn't put keyword arguments and **kwargs in the same microbenchmark.
>
> Well, there is a balance to be found between having more subtests and keeping a
> reasonable total running time :-)
> (I have to plead guilty for ComplexPythonFunctionCalls, btw)

Sure, there's definitely a balance to maintain. With perf.py, we're
going down the road of having different tiers of benchmarks: the
default set is the one we pay the most attention to, with other
benchmarks available for exercising specific subsystems or workloads
(like pickling list-heavy input data). Something similar could be
done for PyBench, giving the user the option of increasing the level
of detail (and run time) as appropriate.
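To make the idea concrete, here's a minimal sketch of what tiered benchmark
selection could look like. The tier and benchmark names below are invented for
illustration; they are not perf.py's actual groups or options:

```python
# Hypothetical tiers mapping a tier name to benchmark names. These names
# are made up for illustration, not taken from perf.py.
BENCHMARK_TIERS = {
    "default": ["call_simple", "call_method", "pickle"],
    "pickling": ["pickle_list", "pickle_dict", "unpickle"],
}
# The "all" tier is the union of every other tier.
BENCHMARK_TIERS["all"] = sorted(
    {name for names in BENCHMARK_TIERS.values() for name in names}
)

def select_benchmarks(tier="default"):
    """Return the benchmark names to run for the given tier."""
    return BENCHMARK_TIERS[tier]

print(select_benchmarks("pickling"))
# → ['pickle_list', 'pickle_dict', 'unpickle']
```

Running the full "all" tier costs more wall-clock time, which is exactly the
trade-off being discussed: more isolation per subtest, longer total run.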

>> - I would like to see PyBench incorporate better statistics for
>> indicating the significance of the observed performance difference.
>
> I see you already have this kind of measurement in your perf.py script, would it
> be easy to port it?

Yes, it should be straightforward to incorporate these statistics into
PyBench. In the same directory as perf.py, you'll find test_perf.py
which includes tests for the stats functions we're using.
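For a sense of the kind of test involved, here is a self-contained sketch of a
two-sample significance check (Welch's t-statistic) applied to two sets of
timings. This is illustrative only; perf.py's actual stats functions (the ones
exercised by test_perf.py) may compute things differently:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent timing samples.

    Illustrative sketch only; not perf.py's actual implementation.
    """
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    # Sample variances (Bessel-corrected).
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)

# Made-up timings from a baseline and a patched interpreter, in seconds:
base = [10.1, 10.3, 9.9, 10.2, 10.0]
patched = [9.1, 9.0, 9.2, 8.9, 9.3]
t = welch_t(base, patched)
# Crude rule of thumb: |t| well above ~2 suggests the difference is
# unlikely to be noise; compare against a t-distribution for a real p-value.
print(abs(t) > 2.0)
```

The point is that reporting "X% faster" alongside such a statistic tells the
reader whether the speedup is distinguishable from run-to-run noise.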

Collin
_______________________________________________
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev