[
https://issues.apache.org/jira/browse/HADOOP-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12656211#action_12656211
]
Raghu Angadi commented on HADOOP-4797:
--------------------------------------
In one of Koji's experiments on branch 0.18:
{quote}
[...] I should run longer to see if there's any trend.
The most critical difference is, after the list attack, the namenode's VM memory was
1) WITHOUT Raghu's patch: total kB 20895184 (20G)
2) WITH Raghu's patch: total kB 15211256 (15G)
with a heap limit of 14G. :) [...]
{quote}
> RPC Server can leave a lot of direct buffers
> ---------------------------------------------
>
> Key: HADOOP-4797
> URL: https://issues.apache.org/jira/browse/HADOOP-4797
> Project: Hadoop Core
> Issue Type: Bug
> Components: ipc
> Affects Versions: 0.17.0
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Priority: Blocker
> Fix For: 0.18.3, 0.19.1, 0.20.0
>
> Attachments: HADOOP-4797-branch-18.patch,
> HADOOP-4797-branch-18.patch, HADOOP-4797-branch-18.patch, HADOOP-4797.patch,
> HADOOP-4797.patch
>
>
> The RPC server can unwittingly soft-leak direct buffers. In one observed case,
> one of the namenodes at Yahoo took 40GB of virtual memory though it was
> configured for 24GB. Most of the memory outside the Java heap is expected to
> be direct buffers. This was shown to be caused by how the RPC server reads and
> writes serialized data. The cause and proposed fix are in the following comment.
>
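The leak mechanism here is a known NIO behavior: when a large heap `ByteBuffer` is written to a channel, the JDK allocates a temporary direct buffer at least as large as the heap buffer and caches it per thread, so many RPC handler threads each pin a large direct buffer. A common mitigation is to bound each channel I/O call so the cached per-thread direct buffer stays small. The sketch below illustrates that idea only; the `channelWrite` helper name and the 8 KB limit are illustrative assumptions, not copied from the actual patch:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.WritableByteChannel;

public class ChunkedWrite {
    // Cap on bytes per channel call, so NIO's per-thread direct-buffer
    // cache never grows past this size (value is illustrative).
    static final int NIO_BUFFER_LIMIT = 8 * 1024;

    /** Writes buf to ch in chunks of at most NIO_BUFFER_LIMIT bytes. */
    static int channelWrite(WritableByteChannel ch, ByteBuffer buf)
            throws IOException {
        int originalLimit = buf.limit();
        int written = 0;
        while (buf.remaining() > 0) {
            // Temporarily shrink the limit to expose at most one chunk.
            int chunk = Math.min(NIO_BUFFER_LIMIT, buf.remaining());
            buf.limit(buf.position() + chunk);
            written += ch.write(buf);
            buf.limit(originalLimit);
        }
        return written;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        WritableByteChannel ch = Channels.newChannel(out);
        // A 100 KB heap buffer: written whole, it would make NIO cache
        // a ~100 KB direct buffer for this thread; chunked, it does not.
        ByteBuffer big = ByteBuffer.allocate(100 * 1024);
        int n = channelWrite(ch, big);
        System.out.println(n + " bytes written in bounded chunks");
    }
}
```

With this pattern the per-thread direct-buffer cache is bounded by the chunk size rather than by the largest RPC response ever sent from that thread.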