On Tuesday 26 March 2002 14:07, Jukka Santala wrote:
> On Mon, 25 Mar 2002, Dalibor Topic wrote:
> > Sticking to a standard environment would just limit the number of
> > people able to contribute results.
>
> That's one of the things I'm afraid of. The last thing we want is people
> upgrading their compiler/libraries along the way, and forgetting to mention
> it in the benchmarks, leading everybody to think they've broken something
> terribly, or found a new optimization.

O.K. Writing a script or a Java program that collects --version information 
for the tools and libraries used should be possible. I'm in favor of automating 
the process as much as possible: run 'make benchmark' and you get a bench.txt 
file at the end with all the relevant configuration information and the results. 
Put a benchmark toolchain definition for each release somewhere where the 
benchmark script can parse it, and let the script flag "non-standard" entries.
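Here's a rough sketch of what the version-collecting part could look like; 
the tool list and the bench.txt layout are just assumptions on my part, not 
a proposal for the actual toolchain definition format:

import java.io.*;

/**
 * Collects --version output from the build tools and writes it to
 * bench.txt, so every benchmark result carries its configuration.
 * The tool list below is only a placeholder; the real one would come
 * from the parsed toolchain definition.
 */
public class BenchInfo {
    public static void main(String[] args) throws IOException {
        String[] tools = { "gcc", "make", "kaffe" }; // hypothetical list
        PrintWriter out = new PrintWriter(new FileWriter("bench.txt"));
        for (int i = 0; i < tools.length; i++) {
            out.println(tools[i] + ": " + version(tools[i]));
        }
        out.close();
    }

    /** Runs "tool --version" and returns the first line of its output. */
    static String version(String tool) {
        try {
            Process p = Runtime.getRuntime().exec(
                new String[] { tool, "--version" });
            BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
            String line = r.readLine();
            p.waitFor();
            return (line == null) ? "(no output)" : line;
        } catch (Exception e) {
            return "(not found)";
        }
    }
}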

> > What kind of contribution process would be suitable for such an
> > effort? Emails to a specific mailing list? Web forms?
>
> Well, I was initially thinking of having both a gnuplot graph of the
> development of the benchmark performance over time, as well as a textual
> log of the specific results. In the simplest case, this would only
> require an e-mail notification of the location of the graphs to this list,
> and the URL could then be added to the official web-page if deemed
> useful/reliable enough. If enough data is provided, it might be worthwhile
> to write a script on the web-site machine that would gather the
> benchmark logs and collate combined graphs from them.

Sounds right.
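Something like the following could do the collation on the web-site machine; 
the log format (one "date score" pair per line, ISO dates) is purely an 
assumption for the sake of the sketch:

import java.io.*;
import java.util.*;

/**
 * Merges benchmark logs (assumed format: one "YYYY-MM-DD score" pair
 * per line) into a single sorted data file that gnuplot can plot with:
 *   plot "combined.dat" using 1:2 with lines
 */
public class CollateLogs {
    public static void main(String[] args) throws IOException {
        List lines = new ArrayList();
        for (int i = 0; i < args.length; i++) {
            BufferedReader r = new BufferedReader(new FileReader(args[i]));
            String line;
            while ((line = r.readLine()) != null) {
                lines.add(line);
            }
            r.close();
        }
        Collections.sort(lines); // ISO dates sort chronologically as strings
        PrintWriter out = new PrintWriter(new FileWriter("combined.dat"));
        for (Iterator it = lines.iterator(); it.hasNext(); ) {
            out.println(it.next());
        }
        out.close();
    }
}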

> But, as implied, if we're aiming for just "any benchmark", for posterity
> and some pretend-comparisons between system performances, then all bets
> are off, and we should probably have some sort of web-form for users to
> type in: "Herez the rezultz I gotz from running my own
> number-calculation benchmark, calculating how many numbers there are from
> 1 to 1000 while playing Doom in another Window. This is OBVIOUSLY what
> everybody else will be doing with the VM's, so I think this counts. I'm
> not sure I even have a compiler." ;)

Uh, no, thanks :)

But that raises an interesting question: which benchmarks would matter? For 
example, I assume that benchmarking kaffe as a platform for apache.org's 
java projects might be interesting, since a couple of responses to the "most 
popular applications" thread mentioned them. What could Ashes cover?

dalibor topic
