[
https://issues.apache.org/jira/browse/HADOOP-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12656115#action_12656115
]
Konstantin Shvachko commented on HADOOP-4797:
---------------------------------------------
The patch looks good, although I did not go into the details of the direct
buffers implementation.
My only concern is how we test that
# it prevents memory leaks;
# it does not degrade performance.
Performance-wise we can just run a bunch of {{ls}}-s on one large directory,
measure the average RPC time before and after the patch, and post the numbers here.
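The measurement suggested above could be scripted with a small timing harness like the one below. This is only a sketch of mine: the real operation would be a {{listStatus}}/{{ls}} RPC against a large directory, which is stubbed out here because the cluster setup is beyond this snippet.

```java
// Hypothetical micro-benchmark harness: run an operation many times and
// report the average latency in milliseconds.
public class RpcTimer {
    /** Runs op 'iterations' times and returns the average latency in ms. */
    public static double averageMillis(Runnable op, int iterations) {
        long totalNanos = 0;
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            op.run();
            totalNanos += System.nanoTime() - start;
        }
        return totalNanos / 1e6 / iterations;
    }

    public static void main(String[] args) {
        // Stand-in for the real RPC, e.g. fs.listStatus(largeDir).
        Runnable fakeLs = () -> { };
        System.out.printf("average RPC time: %.3f ms%n",
                averageMillis(fakeLs, 1000));
    }
}
```

Running it once before and once after applying the patch on the same directory would give comparable averages to post here.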
For leaks I don't have any idea other than simply monitoring memory
consumption with top. Any ideas?
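One idea (an assumption on my part, and it needs Java 7+, which postdates this issue's branches): the JDK reports direct-buffer usage through {{BufferPoolMXBean}}, which is more precise than watching the process size in top.

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

// Sketch: query the "direct" buffer pool exposed by the JVM to see how many
// bytes are currently held in direct buffers.
public class DirectBufferMonitor {
    /** Returns bytes currently held by the "direct" buffer pool, or -1. */
    public static long directBytesUsed() {
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(pool.getName())) {
                return pool.getMemoryUsed();
            }
        }
        return -1; // pool not found (pre-Java-7 JVM)
    }

    public static void main(String[] args) {
        long before = directBytesUsed();
        ByteBuffer buf = ByteBuffer.allocateDirect(1 << 20); // 1 MB
        System.out.println("direct bytes before=" + before
                + " after=" + directBytesUsed());
    }
}
```

Sampling this value periodically while hammering the server with RPCs would show whether direct-buffer usage keeps growing or stays bounded.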
> RPC Server can leave a lot of direct buffers
> ---------------------------------------------
>
> Key: HADOOP-4797
> URL: https://issues.apache.org/jira/browse/HADOOP-4797
> Project: Hadoop Core
> Issue Type: Bug
> Components: ipc
> Affects Versions: 0.17.0
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Priority: Blocker
> Fix For: 0.18.3, 0.19.1, 0.20.0
>
> Attachments: HADOOP-4797-branch-18.patch,
> HADOOP-4797-branch-18.patch, HADOOP-4797-branch-18.patch, HADOOP-4797.patch
>
>
> The RPC server can unwittingly soft-leak direct buffers. In one observed case,
> a namenode at Yahoo took 40GB of virtual memory even though it was configured
> for 24GB. Most of the memory outside the Java heap is expected to be direct
> buffers. This was shown to be caused by how the RPC server reads and writes
> serialized data. The cause and proposed fix are in the following comment.
>
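For what it's worth, the soft-leak pattern described above can be reproduced in miniature with plain NIO: when a large *heap* ByteBuffer is written to a channel, the JDK copies it through a per-thread cached direct buffer sized to the whole message. This sketch is my illustration of that JDK behavior, not code from the patch; the channel and buffer sizes are arbitrary, and it needs Java 7+ for the buffer-pool MXBean.

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

public class DirectBufferGrowthDemo {
    /** Bytes currently held by the JVM's "direct" buffer pool. */
    static long directBytesUsed() {
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(pool.getName())) {
                return pool.getMemoryUsed();
            }
        }
        return 0;
    }

    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        pipe.sink().configureBlocking(false); // don't block on the small pipe
        long before = directBytesUsed();
        // A large serialized response held in a heap buffer: the JDK
        // internally allocates (and caches per thread) a direct buffer of
        // the same size to perform the actual channel write.
        ByteBuffer heap = ByteBuffer.allocate(4 << 20); // 4 MB
        pipe.sink().write(heap);
        System.out.println("direct-pool growth: "
                + (directBytesUsed() - before) + " bytes");
        pipe.sink().close();
        pipe.source().close();
    }
}
```

With many handler threads each caching a buffer sized to the largest request or response it ever served, the per-thread copies add up to the kind of off-heap growth reported here.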
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.