[
https://issues.apache.org/jira/browse/HDFS-4940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13695846#comment-13695846
]
Colin Patrick McCabe commented on HDFS-4940:
--------------------------------------------
Have you checked out the heap dump? I looked at it in Eclipse Memory Analyzer
and that's how I found the giant buffer inside
{{org.apache.hadoop.ipc.Server$Connection}}. Unless the heap dump is wrong, it
really does look like this is where the memory is going.
I'm still not sure how the test is causing this problem -- hopefully the patch
from HADOOP-9676 will make it easier to reproduce (apparently it formerly
didn't reproduce every time).
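For anyone else wanting to inspect the dump the same way, one way to capture a heap dump from a running NameNode is the standard JDK {{jmap}} tool (a sketch; {{<namenode-pid>}} is a placeholder you'd find with {{jps}} on the NameNode host):

```shell
# Capture a binary heap dump of live objects from the NameNode JVM.
# <namenode-pid> is a placeholder -- locate the actual pid with `jps`.
jmap -dump:live,format=b,file=namenode-heap.hprof <namenode-pid>

# Then open namenode-heap.hprof in Eclipse Memory Analyzer (MAT) and use
# the Dominator Tree view to find the largest retained objects -- in this
# case the buffer held by org.apache.hadoop.ipc.Server$Connection.
```

Note that {{-dump:live}} forces a full GC first, so only reachable objects end up in the dump.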
> namenode OOMs under Bigtop's TestCLI
> ------------------------------------
>
> Key: HDFS-4940
> URL: https://issues.apache.org/jira/browse/HDFS-4940
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.1.0-beta
> Reporter: Roman Shaposhnik
> Priority: Blocker
> Fix For: 2.1.0-beta
>
>
> Bigtop's TestCLI, when executed against Hadoop 2.1.0, seems to make it OOM
> quite reliably regardless of the heap size settings. I'm attaching a heap
> dump URL. Alternatively, anybody can just take Bigtop's tests, compile them
> against the Hadoop 2.1.0 bits, and try to reproduce it.