*From:* Joshua Marinacci jos...@marinacci.org
*To:* javaposse@googlegroups.com
*Sent:* Tuesday, September 15, 2009 10:43:06 AM
*Subject:* [The Java Posse] Re: Problems with continuous performance testing, solutions anyone?

When performance testing the client JRE we do two things which seem to
help:
1) check out both the latest and your older / baseline releases of
your code. Test them *both*. This lets you plot how you have improved,
regardless of what computer your tests are running on. It's also the
only [...]
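Joshua's baseline-vs-latest idea can be sketched in plain Java (a minimal sketch, not anyone's actual harness — `averageNanos`, `ratio`, and the dummy workloads are placeholder names): run the same micro-benchmark against both builds on the same machine and report only the ratio, so the absolute speed of the test box drops out.

```java
public class RelativeBenchmark {
    // Average nanoseconds per run, after a warmup phase to let the JIT settle.
    static long averageNanos(Runnable workload, int warmup, int runs) {
        for (int i = 0; i < warmup; i++) workload.run();
        long start = System.nanoTime();
        for (int i = 0; i < runs; i++) workload.run();
        return (System.nanoTime() - start) / runs;
    }

    // Latest relative to baseline: below 1.0 means the latest build is faster.
    static double ratio(long baselineNanos, long latestNanos) {
        return (double) latestNanos / baselineNanos;
    }

    public static void main(String[] args) {
        // Stand-ins for the baseline and latest builds' workloads.
        Runnable work = () -> {
            double x = 0;
            for (int i = 0; i < 1000; i++) x += Math.sqrt(i);
            if (x < 0) System.out.print(x); // keep the loop from being optimized away
        };
        long baseline = averageNanos(work, 1000, 10000);
        long latest   = averageNanos(work, 1000, 10000);
        System.out.printf("latest/baseline = %.2f%n", ratio(baseline, latest));
    }
}
```

Plotting that ratio per build gives the improvement trend regardless of which machine ran the tests.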
Robert Casto wrote:
That depends of course on what you are trying to do.
Joshua wants to measure average system performance while things are
humming along.
If you want to know how long it takes to start up, then you keep the
data. I tend to separate the two in the reports I give to companies.
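The startup vs. steady-state split Robert describes could look like this (a sketch under the assumption that the first timed run absorbs class-loading and JIT cost — `timeOnce` and the sample task are hypothetical, not from any of the posters' code):

```java
public class StartupVsSteady {
    // Wall-clock nanoseconds for a single execution of the task.
    static long timeOnce(Runnable task) {
        long t0 = System.nanoTime();
        task.run();
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        Runnable task = () -> {
            double x = 0;
            for (int i = 0; i < 1000; i++) x += Math.log(i + 1);
            if (x < 0) System.out.print(x);
        };
        // First run: reported separately, since it includes one-time costs.
        long firstRun = timeOnce(task);
        // Subsequent runs: averaged as the steady-state "humming along" number.
        long warm = 0;
        int runs = 100;
        for (int i = 0; i < runs; i++) warm += timeOnce(task);
        System.out.printf("startup: %d ns, steady-state avg: %d ns%n",
                          firstRun, warm / runs);
    }
}
```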
You might take a look at Japex, which was developed at Sun for
benchmarking some of the XML libraries. It offers a harness in which
you can run tests, gives you a framework for handling initialization
and warmup issues, and it can compare between runs and against a
baseline.
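The compare-against-a-baseline part of a Japex-style harness boils down to a tolerance check (this is a generic sketch, not Japex's actual API — `regressed` and the 10% tolerance are assumptions): store the baseline run's number and flag the build only when the current run exceeds it by more than normal run-to-run noise.

```java
public class BaselineCheck {
    // Flag a regression when the current time exceeds the stored baseline by
    // more than `tolerance` (e.g. 0.10 = 10%), absorbing run-to-run noise.
    static boolean regressed(double baselineMillis, double currentMillis,
                             double tolerance) {
        return currentMillis > baselineMillis * (1.0 + tolerance);
    }

    public static void main(String[] args) {
        System.out.println(regressed(100.0, 108.0, 0.10)); // within 10%: pass
        System.out.println(regressed(100.0, 125.0, 0.10)); // beyond 10%: fail
    }
}
```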
Where I work, we use an isolated (non-virtual) Hudson instance for
performance tests. It's an old machine, but we're only interested in
relative times (each test is run around 6 times). You might want to
virtualize the OSes and use Hudson locks.
On 9/16/09, Patrick pdoubl...@gmail.com wrote:
> You might take [...]