Yes, this is very inefficient for large responses. The reason for doing 
it this way is that the XML-RPC spec says: "The Content-Length must be 
specified and must be correct." So the whole response has to be 
assembled before we can start sending headers.
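To illustrate what that requirement forces on the server, here is a minimal sketch (class and method names are mine, not the actual XmlRpcServer API): the whole body is serialized to a byte[] first, because the Content-Length header has to be written before any of the body.

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.io.UnsupportedEncodingException;

// Hypothetical sketch of the buffer-everything-first approach:
// the full response must exist in memory before the first header
// byte can go out, since Content-Length is written up front.
public class BufferedResponse {

    static byte[] serialize(String xml) throws UnsupportedEncodingException {
        // In the real server this is where the Hashtable -> StringBuffer
        // -> String -> byte[] chain happens.
        return xml.getBytes("UTF-8");
    }

    static void send(OutputStream out, String xml) throws Exception {
        byte[] body = serialize(xml);   // entire response held in memory
        String headers = "HTTP/1.0 200 OK\r\n"
                + "Content-Type: text/xml\r\n"
                + "Content-Length: " + body.length + "\r\n\r\n";
        out.write(headers.getBytes("UTF-8"));
        out.write(body);
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        send(sink, "<methodResponse/>");
        System.out.println(sink.toString("UTF-8"));
    }
}
```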

Now I don't think this is really needed by the XML-RPC clients out 
there. Most that I'm aware of use a full HTTP client and should do fine 
without it (as long as HTTP keep-alive is disabled, because with 
keep-alive the client needs Content-Length to know when it has read the 
whole response). If that's the case, I think it would be a good idea to 
offer methods for an optional stream-based interface to XmlRpcServer.
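Such an optional streaming interface might look something like this (the writeResponse method is a hypothetical sketch, not an existing XmlRpcServer method): headers go out first without Content-Length, the XML is written straight to the socket's stream as it is generated, and closing the connection marks the end of the response -- which is exactly why keep-alive has to be off.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

// Hypothetical streaming variant: no Content-Length, so nothing needs
// to be buffered; "Connection: close" plus closing the socket tells the
// client where the response ends.
public class StreamingResponse {

    static void writeResponse(OutputStream out, String result) throws IOException {
        Writer w = new OutputStreamWriter(out, "UTF-8");
        w.write("HTTP/1.0 200 OK\r\n"
                + "Content-Type: text/xml\r\n"
                + "Connection: close\r\n\r\n");   // keep-alive disabled
        // Response body is written piecewise, never held whole in memory.
        w.write("<?xml version=\"1.0\"?>\n<methodResponse><params><param>");
        w.write("<value><string>" + result + "</string></value>");
        w.write("</param></params></methodResponse>");
        w.flush();   // caller then closes the socket; EOF ends the response
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        writeResponse(sink, "hello");
        System.out.println(sink.toString("UTF-8"));
    }
}
```

A real implementation would also need to XML-escape the result and handle faults, but the point is that each piece can go to the wire as soon as it is produced.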

Hannes


Daniel Rall wrote:

>When the XmlRpcServer class responds to requests, all the data in the
>response is first buffered in memory in whatever form is reasonable
>for output (such as a Hashtable), then written to a StringBuffer, then
>converted to a String, then converted to byte[].  That means the
>same data is buffered in memory (in various forms) a minimum of 4
>times (!!).  If the response contains large quantities of data,
>imagine the repercussions...
>
>The current performance for large data sets is just not sufficient.  I
>would like to make the API more "streamy".  Velocity's template
>processing API
><http://jakarta.apache.org/velocity/api/org/apache/velocity/app/Velocity.html#mergeTemplate(java.lang.String,%20java.lang.String,%20org.apache.velocity.context.Context,%20java.io.Writer)>
>is a good example of how I would like the XML-RPC server to act.
>
>Thoughts, comments?
>
>                             Thanks, Dan
>



