[
https://issues.apache.org/jira/browse/HDFS-2964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381454#comment-14381454
]
Harsh J commented on HDFS-2964:
-------------------------------
On second thought, this should probably just be logged once, given how
frequently that function is called.
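For illustration, here is a minimal sketch of the kind of once-only warning being
discussed, assuming a hypothetical helper whose capacity is derived as raw disk
capacity minus the configured reservation; the class, field, and method names
below are illustrative, not the actual FsVolumeImpl code.
{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch, not the real datanode code: warn a single time if the
// configured reservation exceeds the raw volume capacity, since the capacity
// getter is called frequently and should not flood the log.
class ReservedSpaceCheck {
  private static final Logger LOG =
      LoggerFactory.getLogger(ReservedSpaceCheck.class);

  private final long reserved; // dfs.datanode.du.reserved, in bytes
  private final AtomicBoolean warned = new AtomicBoolean(false);

  ReservedSpaceCheck(long reservedBytes) {
    this.reserved = reservedBytes;
  }

  /** Usable capacity, clamped at zero, warning once if misconfigured. */
  long getCapacity(long rawCapacity) {
    long remaining = rawCapacity - reserved;
    if (remaining <= 0 && warned.compareAndSet(false, true)) {
      LOG.warn("dfs.datanode.du.reserved ({} bytes) is greater than or equal"
          + " to the volume capacity ({} bytes); reported capacity will be 0.",
          reserved, rawCapacity);
    }
    return Math.max(remaining, 0);
  }
}
{code}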
> No notice in any logs if dfs.datanode.du.reserved is greater than available
> disk space
> --------------------------------------------------------------------------------------
>
> Key: HDFS-2964
> URL: https://issues.apache.org/jira/browse/HDFS-2964
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 2.6.0
> Reporter: Robert J Berger
> Priority: Minor
> Attachments: HDFS-2964-1.patch, HDFS-2964-2.patch, HDFS-2964-3.patch,
> HDFS-2964.patch
>
>
> We spent a long time tracking down why a test HDFS cluster seemed to be
> running fine, but would not allow the mapred system to come up, complaining
> that it "could only be replicated to 0 nodes, instead of 1".
> There were no namenode or datanode errors in any of the logs, and hadoop fsck
> said everything was good. At first glance dfsadmin -report looked good too.
> It wasn't until I realized that there was 0 Capacity available that we poked
> around and found
> https://groups.google.com/a/cloudera.org/group/scm-users/msg/a4252d6623adbc2d
> which mentioned that the "reserved space" might be greater than the available
> disk space. We did find that our dfs.datanode.du.reserved was indeed higher
> than our actual disk space, since we were only testing a small cluster.
> It seems there should be some warning or error in the logs pointing this out.
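As a rough illustration of the scenario in the description (the numbers below
are made up for a small test node, not taken from the report): with a
reservation larger than the disk, the capacity a datanode can advertise bottoms
out at zero, which is why block placement fails.
{code:java}
// Hypothetical numbers for a small test datanode.
long rawCapacity = 20L * 1024 * 1024 * 1024; // 20 GB disk
long reserved    = 50L * 1024 * 1024 * 1024; // dfs.datanode.du.reserved = 50 GB

// The capacity reported to the namenode is roughly the raw capacity minus the
// reservation, clamped at zero, so this node advertises no usable space and
// writes fail with "could only be replicated to 0 nodes, instead of 1".
long reported = Math.max(rawCapacity - reserved, 0); // == 0
{code}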
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)