I was doing some benchmarking to test how much breaking up a single, large request into several smaller ones would impact performance. I was expecting, of course, that for a fixed volume of data, dividing it into more separate messages would increase the overhead and make things slower.
What I found was quite surprising. It seems that once a message gets bigger than about 20 KB, the response time increases at a rate much greater than the linear relationship one would expect.

I ran some tests with a bean containing an array of other beans. The message with no array elements is 1571 bytes, and each array element adds 772 bytes. The times recorded run from calling the service method on the client until receiving the response back from the server (the response is just a string). The results were as follows:

Number of calls   Array items per call   Total response time (s)
      1                  1000                    20.7
      2                   500                    13.6
      4                   250                     9.9
      5                   200                     9.7
     10                   100                     7.6
     20                    50                     7.2
     40                    25                     6.8
     50                    20                     6.9
    100                    10                     7.3
    200                     5                     9.4
    250                     4                    10.5
    500                     2                    15.4
   1000                     1                    25.6

So the most efficient way to send my 1000 beans was in 40 separate messages of 25 beans each, with each message about 20 KB in size. Does anyone know an explanation for this? It seems to me that something in Axis must be very poorly implemented to cause this blowout in performance.
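For what it's worth, the batching itself is straightforward on the client side. This is just a generic sketch of splitting the bean list into fixed-size chunks before making one service call per chunk; the names (`BatchSplit`, `split`) are hypothetical and it contains no Axis-specific code:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchSplit {

    // Split a list into consecutive batches of at most batchSize elements.
    // Each batch is copied so it is independent of the source list.
    static <T> List<List<T>> split(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            int end = Math.min(i + batchSize, items.size());
            batches.add(new ArrayList<>(items.subList(i, end)));
        }
        return batches;
    }

    public static void main(String[] args) {
        // Stand-in for the 1000 beans; in the real test each element
        // would be a bean of ~772 bytes on the wire.
        List<Integer> beans = new ArrayList<>();
        for (int i = 0; i < 1000; i++) beans.add(i);

        List<List<Integer>> batches = split(beans, 25);
        System.out.println(batches.size());        // prints 40
        System.out.println(batches.get(0).size()); // prints 25
        // One service call would then be made per batch.
    }
}
```

In the optimal case from the table above, `split(beans, 25)` yields the 40 messages of 25 beans each.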
