Kevin, Bela,

First thanks for the offer of help, Bela. Being downunder, sometimes you
feel a bit isolated - where is the Australian JBoss user community? ;)
Anyway, I'm not sure where things are headed with testing but help is
always appreciated.

The testing question is a big one. It depends on what you are trying to
achieve.

If it is steady-state operation, I would say that a cluster of JMeter
clients can generate a reasonable load that you can tune. Based on the
threading issues discovered here, I wouldn't run the client on the same
machine as the system under test if you are using Linux. There would be
too many questions about client load versus server load. One of the nice things about
hindsight is that the free-running client load testing told us something
about what to expect when looking at the JBoss-client results. So for
completeness, you would probably want a calibration sample of your test
rig with the minimal JBoss interaction as we had done in the JBoss-client
test. Then you can look at responsiveness under a given load
characteristic - for example, at 50 requests per second, what is the
average response time, and what are the maxima and minima? Dial it up and check again.
As long as the test rig can generate the requested load level (calibration
will tell you when things might start to deform in the test generator),
you will be reasonably certain about the performance of the system under
test.
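To make the idea concrete, here is a minimal sketch of such a fixed-rate measurement loop. The serviceRequest() method is a hypothetical stand-in for the real remote EJB invocation (here simulated with a short sleep), and the pacing is deliberately crude - a real rig like JMeter schedules send times rather than sleeping between calls:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a fixed-rate load driver that records per-request response times.
public class LoadDriver {

    // Hypothetical stand-in for one client-server interaction
    // (e.g. a remote EJB call); simulated here with a short sleep.
    static void serviceRequest() throws InterruptedException {
        Thread.sleep(5);
    }

    // Issue `total` requests at a nominal `ratePerSec`, returning
    // {min, average, max} response time in milliseconds.
    static long[] run(int total, int ratePerSec) throws InterruptedException {
        long intervalMs = 1000L / ratePerSec;
        List<Long> samples = new ArrayList<>();
        for (int i = 0; i < total; i++) {
            long start = System.nanoTime();
            serviceRequest();
            samples.add((System.nanoTime() - start) / 1_000_000);
            Thread.sleep(intervalMs); // crude pacing between requests
        }
        long min = Long.MAX_VALUE, max = 0, sum = 0;
        for (long s : samples) {
            min = Math.min(min, s);
            max = Math.max(max, s);
            sum += s;
        }
        return new long[] { min, sum / samples.size(), max };
    }

    public static void main(String[] args) throws InterruptedException {
        long[] stats = run(20, 50); // 20 requests at a nominal 50 req/s
        System.out.println("min=" + stats[0] + "ms avg=" + stats[1]
                + "ms max=" + stats[2] + "ms");
    }
}
```

Running the same loop against the calibration rig first (minimal JBoss interaction) tells you how much of the min/avg/max is the test generator itself.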

If you are just stress-testing the system to check raw throughput, then
you can adopt the load-and-transmit operation we employed for this test -
though we were hoping for fewer parallel-thread coupling effects. You were
able to see from these results, for example, that the name server and the
EJB invocation were not unduly stressed by an eight-client load continually
making requests for an extended period (for the IBM SDK). The predominant
response characteristic was that of the client system.

We haven't looked at your particular idea but any measurement is better
than no measurement at all. Besides, I think there needs to be something
beyond the ECperf-type tests. They tell you how the big picture is
supposed to work but don't help you work out where things fall apart.

I would suggest that, if possible, measurement data not be transmitted
from the server during testing (unless you can guarantee its effect on
the test results would be minimal).
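One way to honour that constraint - sketched here as an illustration, not as anything from the actual test rig - is to capture samples into a pre-allocated in-memory buffer during the run and only write them out afterwards, so no network or disk I/O competes with the workload being measured:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLongArray;

// Sketch: measurement samples are held in a fixed-size buffer during the
// test and dumped only after the run completes.
public class MeasurementBuffer {
    private final AtomicLongArray samples;
    private final AtomicInteger next = new AtomicInteger();

    public MeasurementBuffer(int capacity) {
        samples = new AtomicLongArray(capacity);
    }

    // Hot path: one array store, no allocation, no I/O.
    public void record(long micros) {
        int i = next.getAndIncrement();
        if (i < samples.length()) {
            samples.set(i, micros); // silently drops samples past capacity
        }
    }

    // Called once, after the test run, to flush the collected data.
    public void dump() {
        int n = count();
        for (int i = 0; i < n; i++) {
            System.out.println(samples.get(i));
        }
    }

    public int count() {
        return Math.min(next.get(), samples.length());
    }
}
```

The capacity has to be sized up front for the expected sample count; that is the price of keeping the recording path free of allocation and I/O.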

I think the best thing is to flesh out what you want to come away with
from the testing. I'm sure there are others here in the forum with good
test and measurement experience who can contribute to the cause. An
example of things to think about - do you want to test solely EJB
performance? Do you want to couple it with a DB? How do you separate DB
from Server issues?

A nice thing to see in the cluster testing would be the impact of the
load-distribution algorithm. Is there a step-over point? How would you test it?
I think it is a matter of asking what you want to observe and then
determining a test scenario that would allow you to see it.

Now if some Australian company can donate a Sun multi-CPU Blade server for
testing, I could check the performance of the Java thread implementation
on a Sun system. :)

Anyway, that's a starter for discussions. I know I haven't directly
answered your question about how to proceed but I am just trying to get a
feeling about the exact things you want to measure.

Best regards,

JonB
