On Fri, Jul 01, 2016 at 17:16:08 +0100, Alex Bennée wrote:
(snip)
> run 1: ret=0 (PASS), time=4.755824 (1/1)
> run 2: ret=0 (PASS), time=4.756076 (2/2)
> run 3: ret=0 (PASS), time=4.755916 (3/3)
> run 4: ret=0 (PASS), time=4.755853 (4/4)
> run 5: ret=0 (PASS), time=4.755929 (5/5)
> Results summary:
> 0: 5 times (100.00%), avg time 4.755920 (0.000000 deviation)
(snip)
> run 1: ret=0 (PASS), time=9.761559 (1/1)
> run 2: ret=0 (PASS), time=9.511616 (2/2)
> run 3: ret=0 (PASS), time=9.761713 (3/3)
> run 4: ret=0 (PASS), time=10.262504 (4/4)
> run 5: ret=0 (PASS), time=9.762059 (5/5)
> Results summary:
> 0: 5 times (100.00%), avg time 9.811890 (0.060150 deviation)

This is a needless diversion, but I was explaining this stuff to a student
today, so I couldn't help but notice.

The computed deviations seem overly small. For instance, the corrected
sample standard deviation (https://en.wikipedia.org/wiki/Standard_deviation),
which is what is usually meant by "standard deviation" or "error", should be
0.2742 for the last test, not 0.06. Incidentally, the reported 0.060150
matches the mean of the squared deviations over n=5, i.e. a biased variance
with no square root taken, which might be where the discrepancy comes from.

How are they being computed? I tried to find the source of your script (in
the kvm-unit-tests repo) but couldn't find it.

Thanks,

		Emilio
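
P.S. In case it's useful, here is a quick Python check using the five run
times quoted above. Nothing below comes from your script (since I couldn't
find it); it just reproduces both numbers from the raw data:

import math

# The five run times from the second test above.
times = [9.761559, 9.511616, 9.761713, 10.262504, 9.762059]
n = len(times)
mean = sum(times) / n  # 9.811890, matches the reported avg time

# Corrected sample standard deviation: divide by (n - 1), then take sqrt.
stddev = math.sqrt(sum((t - mean) ** 2 for t in times) / (n - 1))
print(stddev)  # ~0.274203

# What the summary seems to report: squared deviations averaged over n,
# with no square root taken.
print(sum((t - mean) ** 2 for t in times) / n)  # ~0.060150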