Darrel Schneider created GEODE-6705:
---------------------------------------

             Summary: client/server messaging can allocate a large amount of memory when serializing small parts
                 Key: GEODE-6705
                 URL: https://issues.apache.org/jira/browse/GEODE-6705
             Project: Geode
          Issue Type: Improvement
          Components: client/server
            Reporter: Darrel Schneider


As each part of a client/server message is added to the part array on Message,
if it is an object that needs to be serialized, a HeapDataOutputStream is
created and told to allocate an initial heap ByteBuffer of size "chunkSize",
which defaults to 1024. This buffer ends up hanging around in the part list
until the Message is actually sent. The reason for this was to avoid extra
copying of the serialized data: the HeapDataOutputStream could have been asked
to convert its data to a byte array just big enough for the serialized data,
but that would have copied the data. However, leaving the data in the original
buffer causes that buffer to live longer, risking its promotion by the
garbage collector.
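
For illustration only, here is a minimal sketch of the allocation pattern
being described. It uses plain JDK streams rather than Geode's actual Message
or HeapDataOutputStream classes, and the names are hypothetical; the point is
that a small object part still claims a full chunkSize backing array until the
message goes out.

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

class PartBufferSketch {
    // Mirrors the 1024-byte "chunkSize" default mentioned above.
    static final int CHUNK_SIZE = 1024;

    // Hypothetical stand-in for adding an object part to a message: even a tiny
    // value pins a CHUNK_SIZE backing array until the message is actually sent.
    static byte[] serializePart(String value) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream(CHUNK_SIZE);
        try (DataOutputStream out = new DataOutputStream(buffer)) {
            out.writeUTF(value); // placeholder for the real object serialization
        }
        // Producing a right-sized byte[] here is exactly the copy the original
        // design was trying to avoid.
        return buffer.toByteArray();
    }
}
{code}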
I see servers trying to send back to the client a PR put reply that should be
pretty small since it does not include the old value. But it does have a
version tag, which causes a 1k ByteBuffer to be allocated in the server for
every put reply. We may be able to make better estimates of the initial buffer
size. For example, the put reply code knows it is serializing a version tag;
if it has not done that before, it could remember the size of the first
version tag and use that as the initial size in the future.
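
A rough sketch of that estimation idea, assuming the put reply code can record
how large the first serialized version tag turned out to be; the class and
method names below are illustrative and do not exist in Geode.

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

class VersionTagSizeEstimator {
    private static final int DEFAULT_CHUNK_SIZE = 1024;
    // -1 means no version tag has been serialized yet.
    private static final AtomicInteger firstObservedSize = new AtomicInteger(-1);

    // Initial buffer size to use for the next version-tag part.
    static int initialBufferSize() {
        int observed = firstObservedSize.get();
        return observed > 0 ? observed : DEFAULT_CHUNK_SIZE;
    }

    // Remember the size of the first version tag ever serialized.
    static void recordActualSize(int bytesWritten) {
        firstObservedSize.compareAndSet(-1, bytesWritten);
    }
}
{code}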
We should also consider reusing the heap ByteBuffers. For example, in a cache
server each ServerConnection thread could have a thread-local cache of
ByteBuffers that it can reuse.
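
A minimal sketch of the thread-local reuse idea, under the assumption that
each ServerConnection thread serializes one reply at a time; the class name
and sizing policy are made up for illustration.

{code:java}
import java.nio.ByteBuffer;

final class ThreadLocalBufferCache {
    private static final int DEFAULT_CAPACITY = 1024;

    private static final ThreadLocal<ByteBuffer> CACHED =
        ThreadLocal.withInitial(() -> ByteBuffer.allocate(DEFAULT_CAPACITY));

    // Borrow the calling thread's buffer, replacing it only when more room is needed.
    static ByteBuffer acquire(int minCapacity) {
        ByteBuffer buffer = CACHED.get();
        if (buffer.capacity() < minCapacity) {
            buffer = ByteBuffer.allocate(minCapacity);
            CACHED.set(buffer);
        }
        buffer.clear(); // reset position/limit so previous contents get overwritten
        return buffer;
    }
}
{code}

With reuse the buffers are long-lived by design, so promotion by the garbage
collector stops being a concern; the trade-off is that each thread holds on to
its largest buffer seen so far.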


