*Re-sending to the new thread:*

Hello Rana,

When I think of an optimization that gives a 1% improvement on some simple
workload, or a 3% improvement on EM64T platforms only, I doubt it can be
reliably detected by a general-purpose test suite. IMO, performance
regression testing needs a specialized framework and a stable environment
that guarantees no user application can spoil the results.

The right solution might also be a JIT testing framework that understands
the JIT IRs and checks whether particular code patterns have been optimized
as expected. That way we can verify that the necessary optimizations are
performed independently of the user environment.
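
A minimal sketch of what such a check could look like, assuming the JIT can
dump its final IR for a method to a text file. The dump format and the
"chkbounds" opcode name below are illustrative, not real Jitrino output:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Structural JIT test: no timing involved, so it is immune to
    // machine load. The file format and opcode name are assumptions.
    public class BoundsCheckEliminationIrTest {
        public static void main(String[] args) throws IOException {
            // args[0]: path to an IR dump written by a prior JIT run
            // of a loop whose index is provably in bounds.
            BufferedReader in = new BufferedReader(new FileReader(args[0]));
            boolean hasBoundsCheck = false;
            String line;
            while ((line = in.readLine()) != null) {
                if (line.indexOf("chkbounds") >= 0) { // assumed opcode name
                    hasBoundsCheck = true;
                    break;
                }
            }
            in.close();
            if (hasBoundsCheck) {
                throw new AssertionError("bounds check was not eliminated");
            }
            System.out.println("PASS: no bounds check in optimized IR");
        }
    }

The point is that the test asserts a property of the compiled code itself,
so it passes or fails deterministically.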

Thanks,
Pavel



On 9/14/06, Mikhail Loenko <[EMAIL PROTECTED]> wrote:

Hi Rana

2006/9/14, Rana Dasgupta <[EMAIL PROTECTED]>:
<SNIP>
>  One way to write the test would be to loop N times on a scenario that
> kicks in the optimization, say array bounds check elimination, and then
> loop N times on a very similar scenario in which the bounds check does
> not get eliminated. Then the test should pass only if the difference in
> timing is at least X on any platform.
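
For concreteness, a minimal sketch of the kind of test Rana describes; the
loop shapes, N, REPS, and the 10% threshold below are all placeholders:

    // Timing-based regression test as described above. Whether the
    // second loop really keeps its bounds checks depends on the JIT.
    public class BoundsCheckTimingTest {
        static final int N = 1000000;  // placeholder array size
        static final int REPS = 200;   // placeholder repetition count

        // Loop where the bounds check is provably removable.
        static long withElimination(int[] a) {
            long s = 0;
            for (int i = 0; i < a.length; i++) s += a[i];
            return s;
        }

        // Similar loop, but bounded by an opaque parameter, so the
        // bounds check is unlikely to be eliminated.
        static long withoutElimination(int[] a, int bound) {
            long s = 0;
            for (int i = 0; i < bound; i++) s += a[i];
            return s;
        }

        static long timeReps(boolean eliminated, int[] a) {
            long start = System.currentTimeMillis();
            long sink = 0;
            for (int r = 0; r < REPS; r++) {
                sink += eliminated ? withElimination(a)
                                   : withoutElimination(a, a.length);
            }
            // Use the result so the loops are not dead code.
            if (sink == 42) System.out.println(sink);
            return System.currentTimeMillis() - start;
        }

        public static void main(String[] args) {
            int[] a = new int[N];
            timeReps(true, a);   // warm-up: get both methods compiled
            timeReps(false, a);
            long fast = timeReps(true, a);
            long slow = timeReps(false, a);
            // Pass only if the unoptimized loop is at least 10% slower
            // (the "X" from the description above).
            if (slow < fast * 1.10) {
                throw new AssertionError("expected timing gap not observed:"
                        + " fast=" + fast + "ms slow=" + slow + "ms");
            }
            System.out.println("PASS: fast=" + fast + "ms slow=" + slow + "ms");
        }
    }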

I tried to create a similar test when I was testing that resolved IP
addresses are cached. I eventually figured out that such a test is not the
best pre-commit test, as it may accidentally fail if other apps are running
on the same machine where I run the tests.

And as you know, an unstable failure is not the most pleasant thing to deal
with :)

Thanks,
Mikhail


>  I have been forced to do this several times :-) So I couldn't resist
> spreading the pain.
>
> Thanks,
> Rana
>
>
>
> > On 14 Sep 2006 12:10:19 +0700, Egor Pasko <[EMAIL PROTECTED]> wrote:
> > >
> > >
> > > Weldon, I am afraid this is a performance issue, and the test would
> > > show nothing more than a serious performance boost after the fix.
> > > I'll find someone with a test like this :) and ask them to attach it
> > > to JIRA. But... do we need performance tests in the regression suite?
> > >
> > > Apart from this issue, I see that the JIT infrastructure is not as
> > > test-oriented as one would expect. JIT tests sometimes need to be
> > > more sophisticated than those in vm/tests, and, I guess, we need a
> > > separate place for them in the JIT tree.
> > >
> > > Many JIT tests are sensitive to various JIT options and cannot be
> > > reproduced in the default mode. For example, to catch a bug in OPT
> > > with a small test you will have to provide the "-Xem opt" option.
> > > Thus, in a regression test we will need:
> > > (a) extra options to the VM,
> > > (b) sources (often in jasmin or C++, for hand-crafted IRs),
> > > (c) and even *.emconfig files to set custom sequences of
> > >     optimizations.
> > >
> > > (anything else?)
> > > I am afraid we will have to hack a lot above JUnit to get all this.
> > >
> > > Let's decide whether we need a framework like this at this time. We
> > > can make a first version quite quickly and improve it further on an
> > > as-needed basis. The design is not quite clear now, though I expect
> > > it to be a fast-converging discussion.
> > >
> > >
> > > --
> > > Egor Pasko, Intel Managed Runtime Division
> >
> >
>
>
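
As a rough illustration of the "hack above JUnit" Egor mentions, a harness
could re-launch each test class in a child VM with the extra JIT options.
A minimal sketch; the launcher name, option strings, and class names here
are assumptions, not an existing API:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical helper on top of JUnit: runs a test's main class in
    // a child VM so each test can carry its own VM/JIT options and
    // custom *.emconfig files.
    public class ChildVmRunner {
        public static int run(String mainClass, List<String> vmOptions)
                throws Exception {
            List<String> cmd = new ArrayList<String>();
            cmd.add("java");        // assumed launcher on the PATH
            cmd.addAll(vmOptions);  // e.g. the "-Xem opt" options above
            cmd.add("-cp");
            cmd.add(System.getProperty("java.class.path"));
            cmd.add(mainClass);

            ProcessBuilder pb = new ProcessBuilder(cmd);
            pb.redirectErrorStream(true);
            Process p = pb.start();
            BufferedReader out = new BufferedReader(
                    new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = out.readLine()) != null) {
                System.out.println(line);  // echo the child's output
            }
            return p.waitFor();  // non-zero means the child test failed
        }
    }

A JUnit test method would then just assert that run(...) returned zero.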
