Here's my initial attempt at automating benchmarks and graph generation in VIFF. Some parts of it are borrowed from the benchmark.py script that's already in the repository.
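To give a rough idea of the direction, here is a minimal, hypothetical sketch of what an automated benchmark run that reports to a central database could look like. This is not the actual suite.py API (all names here are made up), and sqlite3 stands in for the MySQL backend only to keep the example self-contained:

```python
# Hypothetical sketch: time a benchmark function repeatedly and store
# each measurement in a database table instead of a text file.
# (The real setup reports to MySQL; sqlite3 is used here only so the
# example runs standalone.)
import sqlite3
import time


def report(db, name, elapsed):
    """Insert one benchmark measurement into the results table."""
    db.execute(
        "INSERT INTO results (benchmark, seconds) VALUES (?, ?)",
        (name, elapsed),
    )
    db.commit()


def run_benchmark(db, name, func, iterations=10):
    """Run func repeatedly, reporting each timing to the database."""
    for _ in range(iterations):
        start = time.time()
        func()
        report(db, name, time.time() - start)


db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE results (benchmark TEXT, seconds REAL)")
run_benchmark(db, "noop", lambda: None, iterations=3)
print(db.execute("SELECT COUNT(*) FROM results").fetchone()[0])  # 3 rows stored
```

With the measurements in one table, producing graphs or aggregate statistics becomes a matter of querying the database rather than parsing a pile of log files.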
The main goal is to make it easy to write, run, and generate graphs for distributed VIFF benchmarks without limiting the kinds of benchmarks that can be made. Another design goal is that the code should scale as the number of benchmarks increases. I have chosen an approach where benchmark data is collected in a central database rather than in a bunch of text files. My feeling is that this eases data maintenance and the production of more complex statistics.

The code is by no means finished. Lots of TODOs are still there, the documentation can be improved, and I haven't done much to remove trailing whitespace, etc. However, I've reached a state where I'm actually able to use it to run benchmarks on the DAIMI hosts while the data is reported to a MySQL database on my computer at home.

I'm posting the patch now in the hope that an early review will save time, so please feel free to comment on the code. I'd be happy to hear about bugs and ideas for improving it. I'm quite a newbie in Python, so if I've done something very non-Pythonic, please let me know too :-)

A good place to start is example.py and the benchmark examples in examples/. The list of features and issues at the top of suite.py is also a good starting point.

_______________________________________________
viff-patches mailing list
[email protected]
http://lists.viff.dk/listinfo.cgi/viff-patches-viff.dk
