On 12 Sep 2004, at 15:46, Phil Steitz wrote:
I've been thinking about the problem of proving performance improvements by using unit tests for a while now. I'd really like to be able to create reports about the current performance of library code. Maybe it'd be possible to use some kind of normalization to eliminate (or at least reduce) platform-specific differences. I'd be interested to hear comments from other folks about this (or, ideally, to hear about a tool out there which already does this ;)
I have not personally used it, but JUnitPerf <http://www.clarkware.com/software/JUnitPerf.html> looks like it is designed to measure performance changes in unit tests. It is BSD-licensed.
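From a quick look at its docs, the basic idea seems to be decorating an existing JUnit test with a maximum elapsed time. A rough sketch of what that might look like for us (the DigesterTestCase class and test method name below are illustrative, not actual commons code):

    import junit.framework.Test;
    import junit.framework.TestSuite;
    import com.clarkware.junitperf.TimedTest;

    public class DigesterPerfTest {

        public static Test suite() {
            // Wrap an ordinary JUnit test; the suite fails if the
            // wrapped test takes longer than 500 ms to run.
            Test testCase = new DigesterTestCase("testParseSimpleRules");
            Test timedTest = new TimedTest(testCase, 500);

            TestSuite suite = new TestSuite();
            suite.addTest(timedTest);
            return suite;
        }
    }

The obvious catch is that the 500 ms threshold is only meaningful on the machine it was chosen for, which is exactly the platform problem below.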
The approach used in o.a.c.beanutils.BeanUtilsBenchCase -- creating a separate "microbenchmarks" test case with timing included -- could probably also be applied to [digester] and other commons components.
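For [digester] that might look something like the following (a sketch only; the class, method, and property names are made up for illustration rather than copied from the BeanUtils code):

    import junit.framework.TestCase;

    public class DigesterBenchCase extends TestCase {

        // Iteration count, overridable via a system property so that
        // slower or faster machines can be exercised appropriately.
        private long counter = Long.getLong("counter", 100000).longValue();

        public DigesterBenchCase(String name) {
            super(name);
        }

        public void testParseBench() throws Exception {
            long start = System.currentTimeMillis();
            for (long i = 0; i < counter; i++) {
                doOneParse(); // the operation being benchmarked
            }
            long stop = System.currentTimeMillis();
            System.out.println("parse: " + counter + " iterations in "
                    + (stop - start) + " ms");
        }

        private void doOneParse() {
            // placeholder: parse a small, fixed XML document here
        }
    }

Reporting elapsed time rather than asserting a limit means the test never fails on a slow box; it just produces numbers that people can compare.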
I have no clue how one would go about eliminating platform-specific differences. It could be that the best we can do is to make microbenchmark test suites available and set up a place where users can report results for different components on different platforms. The Wiki is a natural place to report things, but does it support forms well enough to organize the results?
I did take a quick look at JUnitPerf a while ago. I haven't been through it in detail, but although it looks like it would work well in a commercial situation, running on a central continuous integration box, open source development needs something that can run across many different platforms.
I wonder whether it would be possible to calibrate JVM performance using a series of tests and then use that rating to work out what the timings should be on different platforms.
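Something like the following could be a crude starting point for that idea (purely a sketch of the concept; a real calibration would need a battery of workloads covering allocation, I/O, and so on, not a single CPU loop):

    public class JvmCalibration {

        // Time a fixed reference workload; the result is a rough
        // rating of this JVM/platform's speed.
        public static long referenceMillis() {
            long start = System.currentTimeMillis();
            long sum = 0;
            for (int i = 0; i < 10000000; i++) {
                sum += i % 7; // arbitrary fixed work
            }
            long elapsed = System.currentTimeMillis() - start;
            // use sum so the loop cannot be optimized away
            if (sum < 0) {
                throw new IllegalStateException();
            }
            return elapsed;
        }

        // Normalize a measured benchmark time by the reference time,
        // so results from different machines can be compared (very
        // approximately).
        public static double normalize(long benchmarkMillis) {
            return (double) benchmarkMillis / referenceMillis();
        }
    }

Then reported results would be dimensionless ratios rather than raw milliseconds, which might at least reduce the platform-to-platform noise.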
- robert
