[
https://issues.apache.org/jira/browse/HDFS-1332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13032536#comment-13032536
]
Ted Yu commented on HDFS-1332:
------------------------------
I created the HashMap because there could be multiple datanodes that were not
good targets.
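To make the intent concrete, here is a minimal, self-contained sketch of that
idea. The class, method, and check names below are illustrative placeholders,
not the identifiers in HDFS-1332.patch:
{code:java}
// Illustrative sketch only, not the actual patch: accumulate one
// exclusion reason per datanode while choosing targets, then include
// them in the "Not able to place enough replicas" warning.
import java.util.HashMap;
import java.util.Map;

public class PlacementFailureSketch {

  // Hypothetical stand-ins for the real per-node suitability checks.
  static boolean hasEnoughSpace(String node) { return false; }
  static boolean isDecommissioned(String node) { return false; }

  // A HashMap is used (rather than a single reason string) because
  // more than one datanode can be rejected, each for its own reason.
  static String chooseTarget(String[] candidates) {
    Map<String, String> exclusionReasons = new HashMap<String, String>();
    for (String node : candidates) {
      if (isDecommissioned(node)) {
        exclusionReasons.put(node, "node is decommissioned");
      } else if (!hasEnoughSpace(node)) {
        exclusionReasons.put(node, "not enough space on data dirs");
      } else {
        return node; // found a good target
      }
    }
    // No good target: say why each candidate node was refused.
    System.err.println("Not able to place enough replicas: "
        + exclusionReasons);
    return null;
  }

  public static void main(String[] args) {
    chooseTarget(new String[] { "127.0.0.1:50010", "127.0.0.2:50010" });
  }
}
{code}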
When I tried to access
https://builds.apache.org/hudson/job/PreCommit-HDFS-Build/496/, it seemed to
be stuck, so I couldn't see the exact cause of the individual test failures.
I ran all the newly reported failing tests in Eclipse:
org.apache.hadoop.hdfs.server.namenode.TestNodeCount and TestHDFSTrash, along
with TestFileConcurrentReader and TestDFSStorageStateRecovery, which I
mentioned yesterday.
I have renamed the reason variable in my next patch.
Thanks for the review.
> When unable to place replicas, BlockPlacementPolicy should log reasons nodes
> were excluded
> ------------------------------------------------------------------------------------------
>
> Key: HDFS-1332
> URL: https://issues.apache.org/jira/browse/HDFS-1332
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: name-node
> Reporter: Todd Lipcon
> Assignee: Ted Yu
> Priority: Minor
> Labels: newbie
> Fix For: 0.23.0
>
> Attachments: HDFS-1332.patch
>
>
> Whenever the block placement policy determines that a node is not a "good
> target", it could add the reason for exclusion to a list, and then when we log
> "Not able to place enough replicas" we could say why each node was refused.
> This would help new users who are having issues in pseudo-distributed mode
> (e.g. because their data dir is on /tmp and /tmp is full). Right now it's very
> difficult to figure out the issue.
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira