On Thu, 21 Mar 2002, Dalibor Topic wrote:
> Optimizations in core & library code should be beneficial to all platforms. 
> But I guess you are not talking about those, since they are not platform 
> specific.

Well, both, of course. Even non-platform-specific optimizations can affect
different platforms differently, and sometimes negatively, which is why
it's good to see the effect on platforms you don't have access to
interactively.

> I don't think I understood the term "breaking some optimization" properly. Do 
> you mean "breaking some benchmark" ? If so, yes, meaningful benchmarks will 

Now you're confusing me, as well. As noted above, modifying code on one
platform can affect the performance of other platforms (or other code
paths) negatively, and I doubt most developers bother to benchmark even on
their own system after every little change that could've slowed things
down.

Hence it would be a good idea to provide performance history graphs on
different platforms, so that people can see at a glance: "Aha, so something
committed on April 1st made the Gibbering test 50% slower on both
platforms X and Y; must have been the new gibber-collector I submitted..."
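
To make that concrete, here's a rough sketch of the kind of harness I'm
imagining; the class name, log file name and the workload are all made up
for illustration, nothing like this exists in the tree yet:

    import java.io.FileWriter;
    import java.io.IOException;

    // Hypothetical harness: time one benchmark run and append the result
    // to a per-platform log that a graphing script can pick up later.
    public class PerfLog {
        public static void main(String[] args) throws IOException {
            long start = System.currentTimeMillis();
            runBenchmark();                    // stand-in for the real test
            long elapsed = System.currentTimeMillis() - start;

            String arch = System.getProperty("os.arch");
            FileWriter out = new FileWriter("perf-" + arch + ".log", true);
            out.write(start + " " + elapsed + "\n");
            out.close();
        }

        // Placeholder workload; allocation-heavy on purpose so the
        // garbage collector gets exercised too.
        static void runBenchmark() {
            for (int i = 0; i < 1000000; i++) {
                Integer.toString(i);
            }
        }
    }

A cron job running something like that nightly on each platform, plus
gnuplot over the logs, would already give the "what slowed down on
April 1st" view.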

Not to mention that performance tests also give the VM a good overall
stress test, and can turn up critical conformance/stability problems as
well. Although I agree a real conformance test is better for that purpose,
and would be easy to add. However, such a test is likely to be much less
platform-specific, so most of the time it should be enough to require
people to run the regression-test suite before every CVS commit.
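
As a sketch of what one entry in such a commit gate could look like
(assuming the suite is just a set of classes whose exit status the commit
script checks; the expected value is the standard IEEE double for sqrt(2)):

    // Tiny conformance check: run a known computation and fail loudly
    // if the VM gets it wrong.
    public class RegressionCheck {
        public static void main(String[] args) {
            String got = String.valueOf(Math.sqrt(2.0));
            String want = "1.4142135623730951";
            if (!got.equals(want)) {
                System.err.println("FAIL: sqrt(2) = " + got);
                System.exit(1);   // non-zero exit status blocks the commit
            }
            System.out.println("PASS");
        }
    }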

> In general, if people run kaffe on benchmarks & put the results on the web, I 
> assume that Jim will link to them. If people volunteer to do so regularly, 

That is roughly what I had in mind. I'm a bit reluctant to volunteer, as I
don't know how long I can contribute, but I should be able to put up
performance-tracking for AMD Duron and StrongARM at least. Intel platforms
should be relatively easy to come by.

 -Jukka Santala
