[ https://issues.apache.org/jira/browse/HDFS-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034430#comment-13034430 ]
Ted Yu commented on HDFS-1332:
------------------------------
While running TestDecommission, I found the following in
TEST-org.apache.hadoop.hdfs.TestDecommission.txt with DEBUG logging enabled
for FSNamesystem:
{code}
2011-05-16 16:37:04,230 WARN namenode.FSNamesystem (BlockPlacementPolicyDefault.java:chooseTarget(212)) - Not able to place enough replicas, still in need of 1 to reach 2
Not able to place enough replicas.[127.0.0.1:49864: Node /default-rack/127.0.0.1:49864 is not chosen because the node is (being) decommissioned ]
{code}
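To make the mechanism concrete, here is a minimal standalone sketch of the technique the log above demonstrates: record the reason each candidate node is rejected, then emit all reasons together with the final "Not able to place enough replicas" warning. The class and method names below are illustrative assumptions, not taken from the actual HDFS-1332 patch:
{code}
import java.util.ArrayList;
import java.util.List;

// Sketch only: collect per-node exclusion reasons during target selection
// and surface them all in one warning, instead of silently skipping nodes.
public class PlacementFailureLogging {

  // One entry per rejected candidate, e.g.
  // "127.0.0.1:49864: Node /default-rack/127.0.0.1:49864 is not chosen
  //  because the node is (being) decommissioned"
  private final List<String> exclusionReasons = new ArrayList<String>();

  // Record why a candidate datanode was skipped.
  void logNodeIsNotChosen(String node, String path, String reason) {
    exclusionReasons.add(node + ": Node " + path
        + " is not chosen because " + reason);
  }

  // When placement ultimately fails, emit one warning carrying every
  // per-node reason, then reset for the next chooseTarget() attempt.
  void warnNotEnoughReplicas(int stillNeeded, int totalWanted) {
    System.err.println("Not able to place enough replicas, still in need of "
        + stillNeeded + " to reach " + totalWanted);
    System.err.println("Not able to place enough replicas.["
        + String.join(" ", exclusionReasons) + " ]");
    exclusionReasons.clear();
  }

  public static void main(String[] args) {
    PlacementFailureLogging tracker = new PlacementFailureLogging();
    tracker.logNodeIsNotChosen("127.0.0.1:49864",
        "/default-rack/127.0.0.1:49864",
        "the node is (being) decommissioned");
    tracker.warnNotEnoughReplicas(1, 2);
  }
}
{code}
Running this prints output in the same shape as the warning quoted above. In the real BlockPlacementPolicyDefault the accumulator would need to be per-invocation (e.g. thread-local) rather than a plain field, since chooseTarget can run concurrently.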
> When unable to place replicas, BlockPlacementPolicy should log reasons nodes were excluded
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-1332
> URL: https://issues.apache.org/jira/browse/HDFS-1332
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: name-node
> Reporter: Todd Lipcon
> Assignee: Ted Yu
> Priority: Minor
> Labels: newbie
> Fix For: 0.23.0
>
> Attachments: HDFS-1332-concise.patch
>
>
> Whenever the block placement policy determines that a node is not a "good
> target", it could add the reason for exclusion to a list, and then when we log
> "Not able to place enough replicas" we could say why each node was refused.
> This would help new users who are having issues on pseudo-distributed clusters
> (e.g. because their data dir is on /tmp and /tmp is full). Right now it's very
> difficult to figure out the cause.