At 3:01 PM -0400 24/07/01, Scott R. Godin wrote:
>it took a minute and sixteen seconds to run a test of 2 x 5 seconds? god
>forbid I set it to 30 seconds.. Is there something wrong with Benchmark
>under 5.6.1a4 ?

Running the same code on OS X (Perl 5.6), modified with a couple of
Benchmark timers wrapping the test section, I got the following:

[localhost:~] robinmcf% /usr/bin/perl '-I/Users/robinmcf/Desktop' 
'/private/tmp/501/Cleanup At Startup/33936'
Benchmark: running addition_to_$#test, scalar_derived, each for at 
least 5 CPU seconds...
addition_to_$#test:  5 wallclock secs ( 5.49 usr +  0.00 sys =  5.49 
CPU) @ 282026.23/s (n=1548324)
scalar_derived:  2 wallclock secs ( 5.16 usr +  0.00 sys =  5.16 CPU) 
@ 395736.43/s (n=2042000)
the code took:70 wallclock secs (67.93 usr +  0.00 sys = 67.93 CPU)
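For reference, the wrapping looked roughly like this. This is a reconstruction, not Scott's original script: the sub bodies and the @test array are guessed from the output labels (both count the elements of an array, one via $#test + 1, one via scalar), and only the Benchmark calls are what I actually added.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(timethese timediff timestr);

# Hypothetical test data; the real array's contents aren't shown above.
my @test = (1 .. 1000);

# Timer wrapping the whole test section.
my $t0 = Benchmark->new;

# Negative count = run each snippet for at least 5 CPU seconds.
timethese(-5, {
    'addition_to_$#test' => sub { my $n = $#test + 1   },
    'scalar_derived'     => sub { my $n = scalar @test },
});

my $t1 = Benchmark->new;
print "the code took:", timestr( timediff($t1, $t0) ), "\n";
```

The overall "the code took" line is just the difference between the two Benchmark->new snapshots, so it includes all of timethese's own calibration overhead, not just the 2 x 5 seconds of measured snippet time.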

It isn't surprising that benchmarking makes the run take longer
overall; there's all kinds of stuff going on under the hood. What is
suspicious is that the values reported for the tested snippets change
so dramatically between runs. That makes me wonder about the
accuracy/validity of the results: by running the same test more than
once in the same script, isn't that effectively the same as
benchmarking more than one piece of code? So just how trustworthy is
the Benchmark module?
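One way to put a number on that worry is to time the exact same snippet several times in one script and look at the spread. A minimal sketch using Benchmark's countit (the snippet and array are again made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(countit);

my @test = (1 .. 1000);

# Run the identical snippet three times; a large spread in the
# iterations/sec figures would suggest the numbers are noisy.
for my $run (1 .. 3) {
    my $t    = countit( 2, sub { my $n = scalar @test } );
    my $cpu  = $t->cpu_a || 1;               # avoid divide-by-zero
    printf "run %d: %.0f/s\n", $run, $t->iters / $cpu;
}
```

If consecutive runs of the same code differ by as much as the two snippets above differ from each other, the comparison between snippets is mostly measuring noise.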
