Hi there,

After more than two years of on-and-off development, and about as long in everyday use, I'd like to announce CalipeL, a tool for benchmarking and for monitoring performance regressions.

The basic ideas that drove the development:

* Benchmarking, and especially interpreting benchmark results, is always a bit of monkey business. The tool should therefore produce raw numbers, letting the user apply whichever statistics she needs to make up the (desired) results.

* Benchmark results should be kept and managed in a single place, so one can view and retrieve all past benchmark results much the same way one views and retrieves past versions of the software from a source code management tool.

Features:

- simple - creating a benchmark is as simple as writing a method in a class (see the sketch below)
- flexible - special set-up and/or warm-up routines can be specified per benchmark, as well as a set of parameters, to allow fine-grained measurements under different conditions
- batch runner - comes with a batch runner that allows one to run benchmarks from the command line or on CI servers such as Jenkins
- web - comes with a simple web interface to gather and process benchmark results (though the web application would deserve some more work)
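To give a flavour of the first point, here is a minimal sketch of what a benchmark class might look like. The class name, the setUp hook and the <benchmark:> pragma are illustrative assumptions rather than the actual CalipeL API; please see the wiki linked below for the real conventions.

    Object subclass: #DictionaryBenchmark
        instanceVariableNames: 'dict'
        classVariableNames: ''
        poolDictionaries: ''
        category: 'MyBenchmarks'.

    DictionaryBenchmark >> setUp
        "Prepare the data outside of the measured code
         (hypothetical set-up hook)."
        dict := Dictionary new.
        1 to: 1000 do: [:i | dict at: i put: i ].

    DictionaryBenchmark >> benchmarkLookup
        "The method body is what gets measured; the pragma
         marks it as a benchmark (hypothetical annotation)."
        <benchmark: 'Dictionary lookup'>
        1 to: 1000 do: [:i | dict at: i ].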
Repository:

  https://bitbucket.org/janvrany/jv-calipel
  http://smalltalkhub.com/#!/~JanVrany/CalipeL-S
  (read-only export of the above, plus Pharo-specific code)

More information:

  https://bitbucket.org/janvrany/jv-calipel/wiki/Home

I have been using CalipeL to benchmark and keep track of the performance of the Smalltalk/X VM, STX:LIBJAVA, a PetitParser compiler and other code I have worked on over time.

Finally, I'd like to thank Marcel Hlopko for his work on the web application and Jan Kurs for his comments.

I hope some of you may find it useful. If you have any comments or questions, do not hesitate to let me know!

Regards,
Jan
