On 2005-03-13, Michael G Schwern <[EMAIL PROTECTED]> wrote:
>
> We can just check.
>
> $ perl -MBenchmark -wle 'timethis(10, sub { `perl -wle "rand for 1..1000000"` })'
> timethis 10: 11 wallclock secs ( 0.01 usr 0.00 sys + 8.64 cusr 0.14 csys = 8.79 CPU) @ 1000.00/s (n=10)
>
> So the time spent in a fork counts as cumulative user time and is benchmarked.
> The advantage of using Benchmark over just time is you can run the command
> multiple times and take advantage of Benchmark's built-in comparison
> function, cmpthese().
I'm sorry; I could have made this more productive by posting my own Benchmark
code in the first place. Look what happens when cmpthese() is used. The
results look nonsensical to me:
perl -MBenchmark -wle 'Benchmark::cmpthese(10, {
    A => sub { `perl -wle "rand for 1..1000000"` },
    B => sub { `perl -wle "rand for 1..500000"`  },
})'
Benchmark: timing 10 iterations of A, B ...
         A:  9 wallclock secs ( 0.00 usr  0.00 sys +  7.80 cusr  0.04 csys =  7.84 CPU)
         B:  4 wallclock secs ( 0.00 usr  0.01 sys +  3.51 cusr  0.05 csys =  3.57 CPU) @ 1280.00/s (n=10)
                     Rate                 B      A
B                  1280/s                --  -100%
A  10000000000000000/s  781250000000000%     --
The ratio I care about is simple: 9 wallclock seconds versus 4.
The summary report, though, looks broken.
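If I'm reading Benchmark right, that may be expected: the per-second rates are
computed from the parent's own CPU time, and the parent does almost nothing
here because all the work happens in the backticked children. cmpthese() takes
an optional STYLE argument, and passing 'nop' ("no parent") should make it
rate against the children's cumulative times instead. A sketch, assuming 'nop'
behaves with cmpthese() the way the docs describe:

perl -MBenchmark -wle 'Benchmark::cmpthese(10, {
    A => sub { `perl -wle "rand for 1..1000000"` },  # work is in the child
    B => sub { `perl -wle "rand for 1..500000"`  },
}, "nop")'  # "nop" style: rate against child (cusr + csys) times

With that, the Rate column should roughly track the 7.84 versus 3.57 CPU
ratio shown above.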
For darcs, often running the test /once/ will be enough to find out whether a
function is within the ballpark of reasonable.
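For that kind of one-shot check, a single timeit() run printed with timestr()
should be plenty; a sketch, with `darcs whatsnew` standing in as a placeholder
for whatever is being timed:

perl -MBenchmark -wle 'print timestr(timeit(1, sub { `darcs whatsnew` }))'

Both timeit() and timestr() are exported by Benchmark by default.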
Mark
--
http://mark.stosberg.com/