[
https://issues.apache.org/jira/browse/HDFS-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13032224#comment-13032224
]
Ted Yu commented on HDFS-1332:
------------------------------
While debugging testDecommissionForReasonableExceptionMsg(), I saw the following
scenario.
In BlockPlacementPolicyDefault.chooseRandom(int, String, HashMap<Node,Node>,
long, int, List<DatanodeDescriptor>), numOfReplicas was 1 and
numOfAvailableNodes was 0, so the while loop in chooseRandom() at line 387 was
never entered.
In that case nodeDescToReasonMap was null, yet no datanode had been accepted as
a good target.
The current patch removes the term 'Detail' from the message for
NotEnoughReplicasException.
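To illustrate the edge case, here is a minimal, hypothetical sketch (simplified
names loosely follow BlockPlacementPolicyDefault, but this is not the actual
implementation):
{code}
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for chooseRandom(): with numOfReplicas == 1 and
// numOfAvailableNodes == 0, the while loop is never entered, so the reason
// map is never created even though placement still fails.
class ChooseRandomSketch {
  static Map<String, String> chooseRandom(int numOfReplicas, int numOfAvailableNodes) {
    Map<String, String> nodeDescToReasonMap = null;
    while (numOfReplicas > 0 && numOfAvailableNodes > 0) {
      if (nodeDescToReasonMap == null) {
        nodeDescToReasonMap = new HashMap<String, String>();
      }
      // ... pick a random node, test it, and record a reason if it is rejected ...
      // (the elided check and bookkeeping are not shown in this sketch)
      numOfAvailableNodes--;
      numOfReplicas--;
    }
    return nodeDescToReasonMap;
  }

  // Message construction must therefore tolerate a null (or empty) map.
  static String buildMessage(Map<String, String> nodeDescToReasonMap) {
    StringBuilder sb = new StringBuilder("Not able to place enough replicas");
    if (nodeDescToReasonMap != null && !nodeDescToReasonMap.isEmpty()) {
      // the map's default toString is used here only for brevity
      sb.append(": ").append(nodeDescToReasonMap);
    }
    return sb.toString();
  }
}
{code}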
> When unable to place replicas, BlockPlacementPolicy should log reasons nodes
> were excluded
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-1332
> URL: https://issues.apache.org/jira/browse/HDFS-1332
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: name-node
> Reporter: Todd Lipcon
> Assignee: Ted Yu
> Priority: Minor
> Labels: newbie
> Fix For: 0.23.0
>
> Attachments: HDFS-1332.patch
>
>
> Whenever the block placement policy determines that a node is not a "good
> target" it could add the reason for exclusion to a list, and then when we log
> "Not able to place enough replicas" we could say why each node was refused.
> This would help new users who are having issues in pseudo-distributed mode
> (e.g. because their data dir is on /tmp and /tmp is full). Right now it's very
> difficult to figure out the issue.
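A minimal sketch of that idea, with hypothetical names (isGoodTarget and the
free-space check below are stand-ins, not the actual HDFS code):
{code}
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the proposal: record why each candidate node
// was excluded, then list those reasons in the placement warning.
class ExclusionReasonSketch {
  private final Map<String, String> reasons = new HashMap<String, String>();

  // Stand-in for the per-node check; the real policy evaluates many more
  // criteria (remaining space, xceiver load, rack placement, decommissioning).
  boolean isGoodTarget(String node, long freeSpace, long requiredSpace) {
    if (freeSpace < requiredSpace) {
      reasons.put(node, "not enough free space (" + freeSpace + " < "
          + requiredSpace + " bytes)");
      return false;
    }
    return true;
  }

  // Stand-in for the "Not able to place enough replicas" warning, now with
  // a per-node reason appended for every rejected candidate.
  String buildWarning() {
    StringBuilder sb = new StringBuilder("Not able to place enough replicas");
    for (Map.Entry<String, String> e : reasons.entrySet()) {
      sb.append("\n  ").append(e.getKey()).append(" excluded: ").append(e.getValue());
    }
    return sb.toString();
  }
}
{code}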
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira