[ https://issues.apache.org/jira/browse/HADOOP-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12573556#action_12573556 ]

Raghu Angadi commented on HADOOP-2907:
--------------------------------------


What % of datanodes do you think logged an OutOfMemory exception even once? If 
the average load at any given time were enough to cause this problem, we would 
see a large portion of datanodes with this exception in their logs. I grepped 
on a few random datanodes and could not find any in the last few days. Simon 
shows the number of active reads and writes; we could check the datanodes with 
high numbers there.
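A minimal sketch of the kind of grep described above. The log directory and file name here are placeholders, not from the issue; actual datanode log locations depend on the cluster's install and log4j configuration. For a self-contained demo, this writes a sample .out line to a temp directory and counts matches:

```shell
#!/bin/sh
# Hypothetical setup: create a fake datanode .out file in a temp dir.
# On a real cluster you would point grep at the actual datanode log dir.
logdir=$(mktemp -d)
printf 'Exception in thread "DataNode" java.lang.OutOfMemoryError: Java heap space\n' \
  > "$logdir/hadoop-datanode.out"

# Count OutOfMemoryError occurrences across .out files.
grep -c "OutOfMemoryError" "$logdir"/*.out

rm -rf "$logdir"
```

Running the count on each datanode (e.g. via ssh in a loop over the host list) would show what fraction of the fleet has ever hit the exception.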

> dead datanodes because of OutOfMemoryError
> ------------------------------------------
>
>                 Key: HADOOP-2907
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2907
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.16.0
>            Reporter: Christian Kunz
>
> We see more dead datanodes than in previous releases. The common exception is 
> found in the out file:
> Exception in thread "[EMAIL PROTECTED]" java.lang.OutOfMemoryError: Java heap space
> Exception in thread "DataNode: [dfs.data.dir-value]" java.lang.OutOfMemoryError: Java heap space

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
