We are currently using XML-RPC 1.2 as an interface to a service that returns a
file (as base64) and metadata about that file. For the most part the files are
small (around 50 KB), but at times they balloon to 60 to 70 MB. Our little test
client was getting 'out of memory' exceptions even with 1.5 GB (!) allocated to
the heap. Our goal is to reduce the memory spikes as much as possible without
impacting performance (much).
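
For reference, our call looks roughly like the sketch below (the endpoint,
method name, and struct keys are placeholders, not our real ones). The point is
that execute() hands back the whole decoded payload as one byte[] in memory,
which is consistent with the heap spikes we're seeing:

    import java.util.Hashtable;
    import java.util.Vector;

    import org.apache.xmlrpc.XmlRpcClient;

    public class FileFetchClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint, just to show the shape of the call.
            XmlRpcClient client = new XmlRpcClient("http://example.com/RPC2");

            Vector params = new Vector();
            params.addElement("document-1234");   // hypothetical file id

            // The entire response -- metadata plus the decoded file bytes --
            // is materialized on the heap before execute() returns.
            Hashtable result = (Hashtable) client.execute("files.fetch", params);

            byte[] fileBytes = (byte[]) result.get("content");
            String fileName  = (String) result.get("name");

            System.out.println(fileName + ": " + fileBytes.length + " bytes");
        }
    }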

My question: are we doing something really wacky, such that we should subclass
our "fixes" and keep them to ourselves, or are there others out there using
xml-rpc to move big (really big) base64 chunks who would benefit from our work?
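
To make the idea concrete, the general shape of an approach that avoids the
spike looks roughly like the sketch below. This is purely illustrative, not our
actual patch: the URL is hypothetical, and the JDK's java.util.Base64 streaming
decoder stands in for whatever hook ends up doing the decoding. The payload is
decoded straight to disk as it arrives instead of being held on the heap:

    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Base64;

    public class StreamingFetchSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint; in practice this would be the raw
            // response stream, positioned at the <base64> element's content.
            URL url = new URL("http://example.com/raw-base64-content");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();

            try (InputStream decoded =
                     Base64.getMimeDecoder().wrap(conn.getInputStream());
                 OutputStream out = new FileOutputStream("payload.bin")) {
                byte[] buf = new byte[8192];
                int n;
                // Copy decoded bytes straight to disk; only a small buffer
                // lives on the heap, regardless of payload size.
                while ((n = decoded.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
        }
    }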



-- 
Chris Jackson <[EMAIL PROTECTED]>
Project Leader
GHS Data Management
http://www.ghsinc.com
ph. 207-622-7153

