On Sun, 2009-12-06 at 16:22 -0800, chromatic wrote:
> I don't trust time-related benchmarkings for this reason.  Yes, stochastic 
> analysis can give you some degree of probability that comparisons are all 
> like-to-like, but the output of a single Callgrind run is much easier to 
> compare (and gives a better explanation of *why* and *how* and *where* 
> performance has changed).

I wasn't suggesting we should try to determine root causes via black-box
testing, merely that regularly running a decent suite of microbenchmarks
will let you know when you actually *have* a problem worth applying
something like Callgrind to.

A small, tight benchmark can also make powerful profiling tools more
effective, by: 1) getting rid of noise that may be hiding the signal,
and 2) usually being easier to tune in workload size and run time, which
makes it simpler to experiment with asymptotic behavior.
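To illustrate the point, here's a minimal harness along those lines (a Python sketch; the `workload` function and the specific repeat/number counts are hypothetical stand-ins, not anything from Parrot's actual suite). Taking the minimum over several trials discards noise, and scaling the workload parameter gives a cheap probe of asymptotic behavior:

```python
import timeit

def workload(n=1000):
    # Hypothetical workload: a stand-in for the code under test.
    return sum(i * i for i in range(n))

def microbenchmark(func, repeats=5, number=100):
    # Run `func` `number` times per trial, over `repeats` trials;
    # the minimum trial is the least noise-contaminated estimate.
    times = timeit.repeat(func, repeat=repeats, number=number)
    return min(times) / number

base = microbenchmark(lambda: workload(1000))
# Doubling the workload lets you probe scaling behavior cheaply:
double = microbenchmark(lambda: workload(2000))
print(f"per-call: {base:.2e}s; 2x-workload ratio: {double / base:.2f}")
```

Tracking numbers like these across commits is enough to flag a regression; Callgrind then explains it.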


-'f


_______________________________________________
http://lists.parrot.org/mailman/listinfo/parrot-dev
