[ https://issues.apache.org/jira/browse/HADOOP-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12573548#action_12573548 ]
ckunz edited comment on HADOOP-2907 at 2/28/08 5:29 PM:
-----------------------------------------------------------------
I am not completely convinced that the change in the write path explains the problem,
because:
1) since the reduce phase of the job started, we have not seen any more
dead datanodes (but they occurred more frequently during the map phase)
2) we saw dead datanodes with nightly build #810 (my impression is that that
release still wrote to local disk)
But I would be happy to restart a couple of datanodes with a new patch.
> dead datanodes because of OutOfMemoryError
> ------------------------------------------
>
> Key: HADOOP-2907
> URL: https://issues.apache.org/jira/browse/HADOOP-2907
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Christian Kunz
>
> We see more dead datanodes than in previous releases. The common exception is found in the out file:
> Exception in thread "[EMAIL PROTECTED]" java.lang.OutOfMemoryError: Java heap space
> Exception in thread "DataNode: [dfs.data.dir-value]" java.lang.OutOfMemoryError: Java heap space