[ https://issues.apache.org/jira/browse/HDFS-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13032162#comment-13032162 ]
Todd Lipcon commented on HDFS-1332:
-----------------------------------
I'm referring to this code in FSNamesystem:
{code}
DatanodeDescriptor targets[] = blockManager.replicator.chooseTarget(
    src, replication, clientNode, excludedNodes, blockSize);
if (targets.length < blockManager.minReplication) {
  throw new IOException("File " + src + " could only be replicated to " +
                        targets.length + " nodes, instead of " +
                        blockManager.minReplication);
}
{code}
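For illustration, here is a minimal sketch of what the issue proposes (this is not the actual HDFS-1332 patch; {{PlacementDiagnostics}}, {{reject()}}, and {{summary()}} are hypothetical names): the placement policy records why each candidate datanode was rejected, and the existing warning can then include those reasons.
{code}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical helper: remembers why each candidate datanode was rejected
// so the final "Not able to place enough replicas" warning can explain itself.
class PlacementDiagnostics {
  // Insertion-ordered map: datanode name -> reason it was excluded.
  private final Map<String, String> exclusionReasons =
      new LinkedHashMap<String, String>();

  void reject(String datanodeName, String reason) {
    exclusionReasons.put(datanodeName, reason);
  }

  // Renders one "node: reason" line per rejected candidate.
  String summary() {
    if (exclusionReasons.isEmpty()) {
      return " (no candidate nodes were evaluated)";
    }
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, String> e : exclusionReasons.entrySet()) {
      sb.append("\n  ").append(e.getKey()).append(": ").append(e.getValue());
    }
    return sb.toString();
  }
}
{code}
Each isGoodTarget-style check in the placement policy would call {{reject()}} with a reason (e.g. "the node does not have enough space"), and the "Not able to place enough replicas" warning would append {{summary()}} so the operator sees why every node was passed over.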
> When unable to place replicas, BlockPlacementPolicy should log reasons nodes were excluded
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-1332
> URL: https://issues.apache.org/jira/browse/HDFS-1332
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: name-node
> Reporter: Todd Lipcon
> Assignee: Ted Yu
> Priority: Minor
> Labels: newbie
> Attachments: HDFS-1332.patch
>
>
> Whenever the block placement policy determines that a node is not a "good
> target", it could add the reason for exclusion to a list, and then when we log
> "Not able to place enough replicas" we could say why each node was refused.
> This would help new users who are hitting issues in pseudo-distributed mode
> (e.g. because their data dir is on /tmp and /tmp is full). Right now it's very
> difficult to figure out the cause.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira