[ https://issues.apache.org/jira/browse/HADOOP-4802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12655106#action_12655106 ]
Doug Cutting commented on HADOOP-4802:
--------------------------------------

> Do the above add up to a -1 on v4 of the patch?

No, more like a +0. Without benchmarking it's safest to not change things much. Do we have a good pure RPC benchmark?

> RPC Server send buffer retains size of largest response ever sent
> -----------------------------------------------------------------
>
>                 Key: HADOOP-4802
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4802
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 0.18.2, 0.19.0
>            Reporter: stack
>         Attachments: 4802-v2.patch, 4802-v3.patch, 4802-v4-TRUNK.patch, 4802.patch
>
>
> The stack-based ByteArrayOutputStream in Server.Handler is reset each time through the run loop. This sets the BAOS 'size' back to zero, but the allocated backing buffer is unaltered. If, during a Handler's lifecycle, any particular RPC response was fat -- megabytes, even -- the buffer expands during the write to accommodate that response but never shrinks afterwards. If a hosting Server has had more than one 'fat payload' occurrence, the resultant occupied heap can provoke memory woes (see https://issues.apache.org/jira/browse/HBASE-900?focusedCommentId=12654009#action_12654009 for an extreme example; occasional payloads of 20-50MB with 30 handlers robbed the heap of 700MB).

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
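A minimal sketch of the behavior the description reports. The subclass, cap value, and class names below are hypothetical illustrations, not code from the patch: ByteArrayOutputStream.reset() zeroes size() but keeps the grown backing array, so one fat response inflates a Handler's buffer for the rest of its life. One possible remedy (not necessarily what any attached patch does) is to discard buffers that have grown past a threshold instead of reusing them.

```java
import java.io.ByteArrayOutputStream;

public class BufferRetentionDemo {
    // Hypothetical retention cap for illustration only.
    static final int MAX_RETAINED = 64 * 1024;

    // Subclass solely to expose the protected backing buffer's length.
    static class InspectableBaos extends ByteArrayOutputStream {
        int capacity() { return buf.length; }
    }

    public static void main(String[] args) {
        InspectableBaos out = new InspectableBaos();

        // Simulate one "fat" 20 MB response being written.
        final int FAT = 20 * 1024 * 1024;
        out.write(new byte[FAT], 0, FAT);

        out.reset(); // size() drops to 0 ...
        System.out.println("size after reset:     " + out.size());
        // ... but the backing buffer stays at least 20 MB.
        System.out.println("capacity after reset: " + out.capacity());

        // Possible remedy: replace oversized buffers rather than reuse them.
        InspectableBaos next =
            out.capacity() > MAX_RETAINED ? new InspectableBaos() : out;
        System.out.println("capacity for next response: " + next.capacity());
    }
}
```

With 30 handlers each pinning a 20-50 MB buffer this way, the ~700 MB figure in the linked HBASE-900 comment follows directly.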