Again, thanks for your attention, Venkat. It is greatly appreciated. I'll patch my version of Axis and try to reproduce and measure your improvements.
On Jan 21, 2005, at 7:34 AM, Venkat Reddy (JIRA) wrote:
deser.parse() consumes only about 15 times the size of the SOAP message it is dealing with, whether multi-ref is true or not. However, the SOAP message itself gets about 3 times larger with multi-ref on. So I think what we are asking for is optimizing the SOAP message size when multi-ref is on.
15 is the ratio when we don't run GC before calling deser.parse(). It falls to about 10 if we run GC there. Is a ratio of 15 not acceptable?
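(As an aside, the sort of before/after measurement I have in mind looks roughly like the sketch below. The Runnable is just a stand-in for the actual deser.parse() invocation, since I don't have your test harness in front of me; names here are illustrative, not real Axis test code.)

    public class ParseMemoryRatio {

        static long usedMemory() {
            Runtime rt = Runtime.getRuntime();
            return rt.totalMemory() - rt.freeMemory();
        }

        /** Returns bytes of heap consumed per byte of SOAP message. */
        static double measure(byte[] soapMessage, Runnable parseCall) {
            System.gc();        // settle the heap so the "before" number means something
            long before = usedMemory();

            parseCall.run();    // the body would call deser.parse() on a context built from soapMessage

            long after = usedMemory();
            return (double) (after - before) / soapMessage.length;
        }
    }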
I'd think of it this way:
ANY memory multiple of the message size larger than 2, one that scales linearly with message size, indicates a performance problem; the linear scaling is what's really at issue here. I understand that's a basic architectural issue with Axis which you're probably not going to be able to fix, so the discussion is now focused more on what can be done (relatively) easily than on what will actually fix the underlying problem.
Fixing the underlying problem, I think, would result in a de/ser mechanism with a fixed overhead for a given WSDL, one that could process messages of arbitrary size; streaming XML parsers can do this. I understand you won't be able to make that happen in the current branch of Axis, so I'm happy with any improvements you can offer.
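To make concrete what I mean by fixed overhead, a streaming handler in this spirit touches each element as it goes by and never materializes the whole message. This is only an illustration of the style, not a suggestion for how the Axis internals should look:

    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.helpers.DefaultHandler;
    import java.io.FileInputStream;
    import java.io.InputStream;

    // Counts elements while streaming; memory use stays flat no matter how big the message is.
    public class StreamingCount extends DefaultHandler {
        private int elements;

        public void startElement(String uri, String localName,
                                 String qName, Attributes attrs) {
            elements++;    // handle each element as it arrives, then forget it
        }

        public static void main(String[] args) throws Exception {
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            StreamingCount handler = new StreamingCount();
            InputStream in = new FileInputStream(args[0]);   // an arbitrarily large XML message
            parser.parse(in, handler);
            System.out.println(handler.elements + " elements, roughly constant memory");
        }
    }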
It would be nice, though, if the Axis dev team could look at this issue while planning the next major version of Axis and give some thought to how the architecture might be changed to better support this type of use.
In the end, if you manage to provide a total memory use ratio of 15 to 1, that's probably about one half to one quarter of the memory use I was seeing, provided that we're talking about 1) memory used compared to object graph size (not XML message size), and 2) total memory use, not just some portion of it. That is, if deser.parse() consumes 15 to 1 and some other piece of the deser mechanism consumes another 10 to 1, we're at 25 to 1 overall. Which is still an improvement, to be sure.
--
Peter Molettiere
Senior Engineer
Truereq, Inc.
http://www.truereq.com/
