Martin,

I noticed this email a while ago and wanted to look into it. I see from your recent email that you're now moving away from Axis, but this may still be of interest to other people on the list.

Assuming you were using separate client and server systems for this test, I suspect you'd see a similar curve with any SOAP implementation. Consider how this works: when you send all the data as a single message, the processing is completely sequential - the request is generated as text on the client, sent to the server, converted back into objects on the server, and finally processed by your server code. The response then goes through the same series of steps on its way back to the client.

When you break your data up into several requests, you allow several of these steps to execute in parallel. In particular, your client can be working on one request while an earlier request is being transmitted to the server, the server is working on a still earlier request or response, and an even earlier response is being transmitted back to the client.
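The pipelining effect is easy to see with a toy cost model. The stage times below are made-up numbers for illustration only, not measurements from either test:

```java
// Toy model of why splitting one large request into several smaller ones
// can help: the serialize/transmit/deserialize/process stages of different
// chunks can overlap, pipeline-style.
public class PipelineModel {
    public static long sequentialMs(int chunks, long[] stageMs) {
        // One big message: every stage runs once, strictly in order,
        // and each stage's cost scales with the whole payload
        // (chunks x per-chunk stage cost).
        long total = 0;
        for (long s : stageMs) total += s * chunks;
        return total;
    }

    public static long pipelinedMs(int chunks, long[] stageMs) {
        // Ideal pipeline: the first chunk pays the full end-to-end latency,
        // then each remaining chunk is limited only by the slowest stage.
        long latency = 0, bottleneck = 0;
        for (long s : stageMs) {
            latency += s;
            bottleneck = Math.max(bottleneck, s);
        }
        return latency + (long) (chunks - 1) * bottleneck;
    }

    public static void main(String[] args) {
        // Four stages per chunk: client serialize, transmit,
        // server deserialize, server process (hypothetical costs in ms).
        long[] stageMs = {5, 10, 5, 5};
        System.out.println(sequentialMs(40, stageMs)); // 25 ms/chunk * 40 = 1000
        System.out.println(pipelinedMs(40, stageMs));  // 25 + 39 * 10 = 415
    }
}
```

Real systems won't overlap this cleanly, of course, but the model shows why the curve bottoms out somewhere in the middle: very few chunks means no overlap, while very many chunks means the fixed per-message overhead dominates.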

If you ran your tests with client and server on the same system, I wouldn't expect to see the kind of results you found. Let me know if that's the case; perhaps there are some unusual aspects to your data that account for the difference.

Seeing this did make me curious about Axis performance, though. In my own tests (running client and server on a single system) I found Axis performance went up at first as I increased the message size, then basically leveled off. Here's what my raw results look like:

Message Size   Roundtrip Time (ms)
    10KB              107
    20KB              162
    40KB              289
    80KB              491
   160KB              981
   320KB             2000

Message sizes are the actual character counts of the request and response; times are the average over 11 request/response pairs, excluding the first pair (to avoid counting class loading overhead and such - this is basically a constant added to all the times). These figures are from the Sun JRE 1.3.1 on Linux, running on a PIIIm with 256MB RAM. I used the "-Xmx64M -Xms64M" options on the Java command line to avoid a lot of thrashing as the heap grew; running with the default settings will add more overhead to the handling time of larger messages initially, until the JVM gets enough memory to run efficiently.
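One way to read the table above is as milliseconds per KB of message data. A quick calculation over those same numbers shows the per-KB cost actually drops and then levels off at roughly 6 ms/KB, rather than climbing the way it would if scaling were superlinear:

```java
// Sanity check on the benchmark table above: compute the per-KB cost for
// each message size. Data is copied directly from the measured results.
public class Throughput {
    public static double msPerKb(int sizeKb, int timeMs) {
        return (double) timeMs / sizeKb;
    }

    public static void main(String[] args) {
        int[] sizesKb = {10, 20, 40, 80, 160, 320};
        int[] timesMs = {107, 162, 289, 491, 981, 2000};
        for (int i = 0; i < sizesKb.length; i++) {
            // Prints e.g. "10KB: 10.70 ms/KB" down to "320KB: 6.25 ms/KB"
            System.out.printf("%dKB: %.2f ms/KB%n",
                    sizesKb[i], msPerKb(sizesKb[i], timesMs[i]));
        }
    }
}
```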

My data consists of an object graph with variable numbers of objects. There are a lot of links between objects, so this might not be typical of what you'd see working with flatter data structures. My actual service processing just reverses the order of elements in arrays, so it doesn't contribute anything significant to the overall time.
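The service logic was nothing more than an in-place array reversal, along these lines (a hypothetical reconstruction for illustration, not the actual test code):

```java
// Sketch of the kind of trivial service method used in the test: it just
// reverses the element order in the supplied array, so the measured time
// is dominated by SOAP serialization/deserialization, not processing.
public class ReverseService {
    public static Object[] reverse(Object[] items) {
        for (int i = 0, j = items.length - 1; i < j; i++, j--) {
            Object tmp = items[i];
            items[i] = items[j];
            items[j] = tmp;
        }
        return items;
    }
}
```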

- Dennis

Dennis M. Sosnoski
Enterprise Java, XML, and Web Services Support
http://www.sosnoski.com

Martin Jericho wrote:

I was doing some benchmarking to test how much it would impact
performance to break up a single, large request into several smaller ones.
I was expecting, of course, that for a fixed volume of data, dividing it into
more separate messages would increase the overhead and make things slower.

What I found was quite surprising. It seems that once a message gets bigger
than 20KB, the response time increases much faster than the linear
relationship one would expect. I did some tests with a bean containing an
array of other beans. The size of the message with no array elements is
1571 bytes, and each array element is 772 bytes.

The times recorded are from calling the service method on the client until
receiving the response back from the server (the response is just a string).

The results were as follows:

Number of Calls   Array Items per Call   Total Response Time (s)
       1                 1000                   20.7
       2                  500                   13.6
       4                  250                    9.9
       5                  200                    9.7
      10                  100                    7.6
      20                   50                    7.2
      40                   25                    6.8
      50                   20                    6.9
     100                   10                    7.3
     200                    5                    9.4
     250                    4                   10.5
     500                    2                   15.4
    1000                    1                   25.6

So the most efficient way to send my 1000 beans was in 40 separate messages
each containing 25 beans, each message being about 20KB in size.

Does anyone know an explanation for this? It seems to me that there must be
something in Axis which has been very poorly implemented to cause this
blowout in performance.
