[
https://issues.apache.org/jira/browse/HADOOP-2907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12573551#action_12573551
]
rangadi edited comment on HADOOP-2907 at 2/28/08 5:38 PM:
---------------------------------------------------------------
> (my impression is that that release still wrote to local disk)
0.16 does not write to local disk; the local disk write was removed quite some
time back. But that change had also removed some buffers on the DataNode.
HADOOP-2768 went into svn revision 618349 and you are running 618351.
Edit: HADOOP-2768 brought buffering back to how it was before the local disk
write was removed. But this buffer carries a bigger penalty for slow writes.
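To make that failure mode concrete, here is a minimal sketch, assuming one
in-memory buffer per active client write on the DataNode; the class name and
buffer size below are illustrative only, not the actual DataNode code:

    // Hypothetical sketch: each active client write pins one in-memory
    // buffer until that client finishes, so many slow writers hold many
    // buffers alive at the same time.
    import java.io.BufferedOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    class BlockReceiverSketch {
        // assumed per-connection buffer size; the real value depends on
        // the DataNode configuration
        static final int BUF_SIZE = 128 * 1024;

        // one of these is opened per client connection writing a block
        OutputStream openBlockFile(String path) throws IOException {
            // with a heap of H bytes, roughly H / BUF_SIZE concurrent
            // slow writers are enough to push the daemon into
            // java.lang.OutOfMemoryError
            return new BufferedOutputStream(new FileOutputStream(path), BUF_SIZE);
        }
    }

No single buffer is large; the heap fills because slow writers keep their
buffers pinned for much longer than fast ones.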
> dead datanodes because of OutOfMemoryError
> ------------------------------------------
>
> Key: HADOOP-2907
> URL: https://issues.apache.org/jira/browse/HADOOP-2907
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.0
> Reporter: Christian Kunz
>
> We see more dead datanodes than in previous releases. The common exception is found in the out file:
> Exception in thread "[EMAIL PROTECTED]" java.lang.OutOfMemoryError: Java heap space
> Exception in thread "DataNode: [dfs.data.dir-value]" java.lang.OutOfMemoryError: Java heap space
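A possible stopgap while the buffering regression is sorted out, assuming the
nodes have spare RAM, is to raise the daemon heap in conf/hadoop-env.sh; the
value below is only an example (HADOOP_HEAPSIZE sets the maximum heap in MB
for all Hadoop daemons, including the DataNode):

    # conf/hadoop-env.sh -- maximum heap to use, in MB (default is 1000)
    export HADOOP_HEAPSIZE=2000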