[
https://issues.apache.org/jira/browse/HADOOP-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12573496#action_12573496
]
Raghu Angadi commented on HADOOP-2907:
--------------------------------------
Christian,
What is the read and write pattern like? How fast do clients read? For example, on
one of the normal datanodes there are around 500 connections to the DataNode and
it is pretty much idle. Those connections need around 200MB of active memory, so a
DataNode only needs a few times that many connections to run out of memory.
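
To make the arithmetic concrete, a minimal back-of-the-envelope sketch in Java (not
actual DataNode code): the per-connection cost is derived from the ~500 connections /
~200MB figure above, while the 1GB heap is only an assumed value and in practice
depends on how HADOOP_HEAPSIZE is configured.

    // Illustrative estimate only; figures other than 500/200MB are assumptions.
    public class DataXceiverMemoryEstimate {
        public static void main(String[] args) {
            // From the comment above: ~500 connections hold ~200MB of active memory.
            final long activeConnections = 500;
            final long observedHeapBytes = 200L * 1024 * 1024;

            // Implied per-connection cost (thread stack plus I/O buffers), ~400KB.
            long bytesPerConnection = observedHeapBytes / activeConnections;

            // Assumed DataNode heap; the real limit is set via HADOOP_HEAPSIZE.
            final long assumedHeapBytes = 1024L * 1024 * 1024; // 1GB

            // Connection count at which that heap would be exhausted by
            // per-connection memory alone.
            long connectionsAtOom = assumedHeapBytes / bytesPerConnection;

            System.out.printf("~%d KB per connection, OOM near %d connections%n",
                    bytesPerConnection / 1024, connectionsAtOom);
        }
    }

Under those assumptions the heap runs out somewhere around 2,500 concurrent
connections, i.e. only about five times the load of the "idle" datanode described above.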
> dead datanodes because of OutOfMemoryError
> ------------------------------------------
>
> Key: HADOOP-2907
> URL: https://issues.apache.org/jira/browse/HADOOP-2907
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Christian Kunz
>
> We see more dead datanodes than in previous releases. The common exception is
> found in the out file:
> Exception in thread "[EMAIL PROTECTED]" java.lang.OutOfMemoryError: Java heap space
> Exception in thread "DataNode: [dfs.data.dir-value]" java.lang.OutOfMemoryError: Java heap space