I investigated this further and found that there definitely is a problem in 1.0 with large messages using RPC encoding. With my particular test data it started showing up at the 320KB message size and got exponentially worse with larger sizes. I think I've tracked this down, and have entered a bug report and fix against the offending code. In my tests the fix keeps performance stable at least into the 1.3MB range.

That done, I figured I should correct my earlier, overly (or at least prematurely) optimistic statement about Axis performance with large messages. :-)

- Dennis

Dennis M. Sosnoski
Enterprise Java, XML, and Web Services Support
http://www.sosnoski.com

Dennis Sosnoski wrote:

In my own tests (running client and server on a single system) I found Axis performance went up at first as I increased the message size, then basically leveled off. Here's what my raw results look like:

Message size   Roundtrip time (ms)
10KB           107
20KB           162
40KB           289
80KB           491
160KB          981
320KB          2000
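A quick way to see the leveling-off in these numbers is to divide each roundtrip time by the message size: the per-KB cost starts around 10.7 ms/KB at 10KB and settles near 6 ms/KB from 80KB on, rather than growing. A minimal sketch using the figures from the table above:

```java
// Per-KB roundtrip cost for the measurements quoted above.
// Data points are copied from the table; nothing here is new measurement.
public class PerKbCost {
    public static void main(String[] args) {
        int[] sizesKb = {10, 20, 40, 80, 160, 320};
        double[] timesMs = {107, 162, 289, 491, 981, 2000};
        for (int i = 0; i < sizesKb.length; i++) {
            System.out.printf("%3dKB: %.2f ms/KB%n",
                    sizesKb[i], timesMs[i] / sizesKb[i]);
        }
    }
}
```

If the per-KB cost had been climbing instead of flattening, that would have been the signature of the superlinear behavior Martin reported below.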

Martin Jericho wrote:

I was doing some benchmarking to test how much it would impact performance
to break up a single, large request into several smaller ones. I was
expecting, of course, that for a fixed volume of data, dividing it into
more separate messages would increase the overhead and make things slower.

What I found was quite surprising. It seems that once a message gets bigger
than 20kb, the response time increases at a rate much greater than the
linear relationship one would expect. I did some tests with a bean
containing an array of other beans. The size of the message with no array
elements is 1571 bytes, and each array element is 772 bytes.
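Those two figures pin down the message size for any array length: a 25-element message works out to 1571 + 25 × 772 = 20,871 bytes, i.e. roughly the 20kb threshold mentioned. A quick check:

```java
// Message sizes implied by the figures above:
// 1571-byte base message plus 772 bytes per array element.
public class MessageSize {
    static int sizeBytes(int items) {
        return 1571 + 772 * items;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 10, 25, 50, 100, 1000}) {
            System.out.printf("%4d items: %7d bytes (%.1f KB)%n",
                    n, sizeBytes(n), sizeBytes(n) / 1024.0);
        }
    }
}
```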

The times recorded are from calling the service method on the client until
receiving the response back from the server (the response is just a string).

The results were as follows:

Number of calls, Number of Array Items, Total Response Time in Seconds
0001, 1000, 20.7
0002, 0500, 13.6
0004, 0250, 9.9
0005, 0200, 9.7
0010, 0100, 7.6
0020, 0050, 7.2
0040, 0025, 6.8
0050, 0020, 6.9
0100, 0010, 7.3
0200, 0005, 9.4
0250, 0004, 10.5
0500, 0002, 15.4
1000, 0001, 25.6

So the most efficient way to send my 1000 beans was in 40 separate messages
of 25 beans each, with each message about 20kb in size.
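One model consistent with this U-shaped curve is a fixed per-call overhead plus a per-message processing cost that grows quadratically with the number of items (for example, from repeated buffer copying during serialization). With k calls of n = 1000/k items each, total time is k·(C + A·n + B·n²) = k·C + 1000·A + (10⁶·B)/k, which is large at both extremes and minimized in between. A sketch with purely illustrative constants C, A, and B, chosen only to show the shape, not measured from Axis:

```java
// Hypothetical cost model for splitting 1000 items across k calls.
// C = per-call fixed overhead (ms), A = linear per-item cost (ms),
// B = quadratic per-item cost (ms). All three are illustrative guesses.
public class ChunkModel {
    static final double C = 25.0, A = 5.0, B = 0.008;

    // Modeled time for one call carrying n items: C + A*n + B*n^2.
    static double callTime(int n) {
        return C + A * n + B * (double) n * n;
    }

    public static void main(String[] args) {
        int total = 1000;
        int[] calls = {1, 2, 4, 5, 10, 20, 40, 50, 100, 200, 250, 500, 1000};
        for (int k : calls) {
            int perCall = total / k;
            System.out.printf("%4d calls x %4d items: %6.1f s%n",
                    k, perCall, k * callTime(perCall) / 1000.0);
        }
    }
}
```

Under this model the minimum falls at an intermediate chunk size, matching the measured sweet spot around 40 calls of 25 items; the quadratic term dominates for few large messages, the per-call overhead for many small ones.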

Does anyone know of an explanation for this? It seems to me that there must
be something in Axis which has been very poorly implemented to cause this
blowout in performance.
