[ https://issues.apache.org/jira/browse/HADOOP-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12654990#action_12654990 ]

stack commented on HADOOP-4802:
-------------------------------

> I'm still not convinced we should do more than replace the buf.reset() with 
> buf = new ByteArrayOutputStream() and remove the initialization of buf 
> altogether.

The above is predicated on our running a 'benchmark'.  What would you suggest I run?

Why create new objects when creation can be avoided (i.e., when the response is < 10k)?
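
For discussion's sake, here is a minimal sketch of the reuse-with-a-cap approach I have in mind.  The threshold constants, class name, and method names are hypothetical, not taken from the attached patches: keep the buffer across calls for the common small response, and only let it go once a fat response has inflated it.

{code:java}
import java.io.ByteArrayOutputStream;

// Hypothetical sketch only, not the attached patch: reuse the response
// buffer for the common small case; reallocate only after a fat response
// has grown it past a cap.
class ResponseBuffer {
  static final int INITIAL_SIZE = 10 * 1024;   // plenty for a typical response
  static final int MAX_RETAINED = 1024 * 1024; // discard anything grown past this

  private ByteArrayOutputStream buf = new ByteArrayOutputStream(INITIAL_SIZE);

  ByteArrayOutputStream get() {
    return buf;
  }

  // Call after each response is sent.  size() is the byte count of the
  // response just written; if it exceeded the cap, the backing array grew
  // at least that large, so drop the whole stream rather than pin the heap.
  void recycle() {
    if (buf.size() > MAX_RETAINED) {
      buf = new ByteArrayOutputStream(INITIAL_SIZE);
    } else {
      buf.reset(); // keep the already-allocated array, just rewind the count
    }
  }
}
{code}

This keeps the no-allocation fast path for small responses while bounding how much heap any one Handler can retain.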



> RPC Server send buffer retains size of largest response ever sent 
> ------------------------------------------------------------------
>
>                 Key: HADOOP-4802
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4802
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 0.18.2, 0.19.0
>            Reporter: stack
>         Attachments: 4802-v2.patch, 4802.patch
>
>
> The stack-based ByteArrayOutputStream in Server.Handler is reset each time 
> through the run loop.  This sets the BAOS 'size' back to zero, but the 
> allocated backing buffer is unaltered.  If, during a Handler's lifecycle, 
> any particular RPC response was fat (megabytes, even), the buffer expands 
> during the write to accommodate that response but never subsequently 
> shrinks.  If a hosting Server has had more than one 'fat payload' 
> occurrence, the retained heap can provoke memory woes (see 
> https://issues.apache.org/jira/browse/HBASE-900?focusedCommentId=12654009#action_12654009
> for an extreme example; occasional payloads of 20-50MB with 30 handlers 
> robbed the heap of 700MB).
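
A quick standalone demonstration of the retention described above (the payload size mirrors the HBASE-900 report; the class name is just for illustration):

{code:java}
import java.io.ByteArrayOutputStream;

// Shows that reset() rewinds the count but never shrinks the backing array.
public class BaosRetention {
  public static void main(String[] args) {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    byte[] fat = new byte[50 * 1024 * 1024];  // one fat 50MB response
    buf.write(fat, 0, fat.length);            // backing array grows to >= 50MB
    buf.reset();                              // size() reports 0 again...
    // ...but the internal byte[] is still ~50MB and stays reachable for as
    // long as the owner holds 'buf'.  With 30 handlers each pinning such a
    // buffer, hundreds of MB of heap sit retained doing nothing.
  }
}
{code}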

