[
https://issues.apache.org/jira/browse/HDFS-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13033455#comment-13033455
]
Ted Yu commented on HDFS-1332:
------------------------------
@Hairong:
I understand that BlockPlacementPolicyDefault is on the critical path, so no new
HashMap is constructed in either of the two patches currently attached to this ticket.
All additional string construction is guarded by FSNamesystem.LOG.isDebugEnabled(),
so the extra strings are only built when debug logging is enabled.
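For illustration, here is a minimal sketch of that guard pattern (simplified,
hypothetical class and field names; not the actual patch):
{code:java}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Illustrative sketch only: shows how exclusion reasons can be accumulated
// without constructing a HashMap, and with all string work guarded by
// isDebugEnabled() so the critical path is unaffected in normal operation.
public class PlacementDebugSketch {
  private static final Log LOG = LogFactory.getLog(PlacementDebugSketch.class);

  // Reused builder instead of a per-call collection; hypothetical field.
  private final StringBuilder reasons = new StringBuilder();

  boolean isGoodTarget(String node, long remaining, long blockSize) {
    if (remaining < blockSize) {
      if (LOG.isDebugEnabled()) {          // string work only when debugging
        reasons.append("Node ").append(node)
               .append(" excluded: insufficient remaining space. ");
      }
      return false;
    }
    return true;
  }

  void logPlacementFailure() {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Not able to place enough replicas: " + reasons);
      reasons.setLength(0);                // reset for the next attempt
    }
  }
}
{code}
The reused StringBuilder above is just one way to avoid per-call allocations;
please refer to the attached patches for the real implementation.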
Thanks
> When unable to place replicas, BlockPlacementPolicy should log reasons nodes
> were excluded
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-1332
> URL: https://issues.apache.org/jira/browse/HDFS-1332
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: name-node
> Reporter: Todd Lipcon
> Assignee: Ted Yu
> Priority: Minor
> Labels: newbie
> Fix For: 0.23.0
>
> Attachments: HDFS-1332-concise.patch, HDFS-1332.patch
>
>
> Whenever the block placement policy determines that a node is not a "good
> target" it could add the reason for exclusion to a list, and then when we log
> "Not able to place enough replicas" we could say why each node was refused.
> This would help new users who are having issues in pseudo-distributed mode
> (e.g. because their data dir is on /tmp and /tmp is full). Right now it is
> very difficult to figure out the issue.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira