Hi everybody,
I have just talked with Thomas about how to benchmark VIFF and what we
should try to measure. We found it difficult to come up with good
answers, so now we are asking here.
We talked about the goals of benchmarking:
* Determine bottle-necks. It won't do us any good if the careful
optimizations done by the CACE project cannot be seen because the
time is spent in other parts of the code. So this suggests that we
need to determine:
- Amount of time spent on local computations: how much of the time
is spent on book-keeping and how much on actual computations?
- Amount of time spent waiting on the network. Since we have
everything centralized in the Twisted event loop, we might be able
to simply hook into that and make it measure its idle time.
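To make the idle-time idea concrete, here is a minimal sketch (plain Python, not Twisted's actual reactor API) of an event loop that records how long it spends waiting versus how long it spends running callbacks. Hooking the same two timers into Twisted's reactor would give us the network-wait vs. local-computation split described above; the class and names are invented for illustration:

```python
import time

class TimedLoop:
    """Toy event loop that separates idle (waiting) time from busy
    (callback) time -- a stand-in for instrumenting Twisted's reactor."""

    def __init__(self):
        self.idle = 0.0   # seconds spent waiting (models network waits)
        self.busy = 0.0   # seconds spent in callbacks (local computation)
        self.queue = []   # list of (fire_at, callback) pairs

    def call_later(self, delay, callback):
        self.queue.append((time.monotonic() + delay, callback))

    def run(self):
        while self.queue:
            self.queue.sort(key=lambda item: item[0])
            fire_at, callback = self.queue.pop(0)
            wait = fire_at - time.monotonic()
            if wait > 0:
                time.sleep(wait)              # waiting on "the network"
                self.idle += wait
            start = time.monotonic()
            callback()                        # local computation
            self.busy += time.monotonic() - start

loop = TimedLoop()
loop.call_later(0.05, lambda: sum(range(100000)))
loop.run()
print("idle %.3fs, busy %.3fs" % (loop.idle, loop.busy))
```

The ratio between the two counters is exactly the number we are after: if idle dominates, optimizing local arithmetic will not show up in wall-clock time.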
* Track performance over time in order to find performance
regressions.
For this we talked about making a system which does a nightly
benchmark run (if there has been a new commit) and then stores the
result in a database. From that we can then generate nice HTML pages
with graphs and numbers.
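As a sketch of what the result database could look like, here is a small SQLite example (schema, column names, and the 20% threshold are all assumptions for illustration, not an existing VIFF tool). It stores one timing per commit and benchmark and flags runs that are noticeably slower than the best recorded time:

```python
import sqlite3

# Hypothetical schema for nightly benchmark results.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE results (
    commit_id TEXT, benchmark TEXT, ms_per_op REAL,
    run_date TEXT DEFAULT CURRENT_TIMESTAMP)""")

def record(commit_id, benchmark, ms):
    conn.execute(
        "INSERT INTO results (commit_id, benchmark, ms_per_op) "
        "VALUES (?, ?, ?)", (commit_id, benchmark, ms))

record("abc123", "mul-32bit-passive", 15.0)
record("def456", "mul-32bit-passive", 19.5)

# Flag runs more than 20% over the benchmark's historical minimum --
# a simple regression detector to feed the HTML report.
rows = conn.execute("""
    SELECT r.commit_id, r.ms_per_op
    FROM results r
    JOIN (SELECT benchmark, MIN(ms_per_op) AS best
          FROM results GROUP BY benchmark) b
      ON r.benchmark = b.benchmark
    WHERE r.ms_per_op > 1.2 * b.best""").fetchall()
print(rows)  # -> [('def456', 19.5)]
```

From a table like this, generating the graphs is just a matter of grouping by benchmark and plotting ms_per_op against run_date.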
I have already made a script which uses SSH to start any number of
players here on DAIMI, and I've used it to test up to 25 players (it
took 15 ms on average for a 32-bit passively secure multiplication,
and 19 ms for an actively secure one). It should be fairly easy to
extend this to run nightly and make graphs from the results.
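For reference, the launcher part of such a script can be as small as the sketch below. This is not the actual script; the hostnames, command line, and the `runner` parameter (which lets a dry run substitute another command for `ssh`) are assumptions for illustration:

```python
import subprocess

def start_players(hosts, player_cmd, runner=("ssh",)):
    """Start one player process per host via `runner` (normally ssh).

    Each player gets its 1-based player number appended to the command,
    mirroring how VIFF players are typically numbered.  All names here
    are hypothetical.
    """
    procs = []
    for i, host in enumerate(hosts, start=1):
        cmd = list(runner) + [host] + list(player_cmd) + [str(i)]
        procs.append(subprocess.Popen(cmd, stdout=subprocess.PIPE))
    return procs

# Dry run: use `echo` instead of `ssh` so nothing actually connects.
procs = start_players(["h1", "h2"], ["python", "player.py"],
                      runner=("echo",))
outputs = [p.communicate()[0] for p in procs]
```

A nightly cron job would call this, wait for the players to finish, and append the timings to the result database.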
Please chime in with your own good ideas -- or let us know why the
above are bad ones! :-)
--
Martin Geisler
VIFF (Virtual Ideal Functionality Framework) brings easy and efficient
SMPC (Secure Multi-Party Computation) to Python. See: http://viff.dk/.
_______________________________________________
viff-devel mailing list (http://viff.dk/)
[email protected]
http://lists.viff.dk/listinfo.cgi/viff-devel-viff.dk