[
https://issues.apache.org/jira/browse/HDFS-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969678#comment-14969678
]
Hadoop QA commented on HDFS-4861:
---------------------------------
\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch | 0m 0s | The patch command could not apply the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12638791/HDFS-4861-v2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / aea26bf |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13132/console |
This message was automatically generated.
> BlockPlacementPolicyDefault does not consider decommissioning racks
> -------------------------------------------------------------------
>
> Key: HDFS-4861
> URL: https://issues.apache.org/jira/browse/HDFS-4861
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 0.23.7, 2.1.0-beta
> Reporter: Kihwal Lee
> Assignee: Rushabh S Shah
> Labels: BB2015-05-TBR
> Attachments: HDFS-4861-v2.patch, HDFS-4861.patch
>
>
> getMaxNodesPerRack() calculates the max replicas/rack like this:
> {code}
> int maxNodesPerRack = (totalNumOfReplicas-1)/clusterMap.getNumOfRacks()+2;
> {code}
> Since this calculation does not account for racks that are being decommissioned,
> and the decommissioning state is only checked later in isGoodTarget(), certain
> blocks cannot be fully replicated even when many racks and nodes are available.
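The effect described above can be sketched as follows. This is an illustrative model, not Hadoop source: the `maxNodesPerRack` method mirrors only the quoted expression, and the scenario in `main` is a hypothetical cluster.

```java
// Illustrative sketch (not Hadoop code): the per-rack replica cap is computed
// from the total rack count, including racks that are decommissioning and
// therefore cannot actually accept new replicas.
public class MaxNodesPerRackSketch {

    // Mirrors the expression from getMaxNodesPerRack() quoted above.
    static int maxNodesPerRack(int totalNumOfReplicas, int numOfRacks) {
        return (totalNumOfReplicas - 1) / numOfRacks + 2;
    }

    public static void main(String[] args) {
        // 3 replicas across a 10-rack cluster: cap is (3-1)/10 + 2 = 2 per rack.
        int cap = maxNodesPerRack(3, 10);
        System.out.println("cap = " + cap);

        // Hypothetical scenario: if 9 of those 10 racks are decommissioning,
        // only 1 rack can really take replicas, yet the cap is still computed
        // against all 10 racks. With a cap of 2 on the single usable rack,
        // the third replica has nowhere to go, so the block stays
        // under-replicated even though the cluster has many racks and nodes.
        int usableRacks = 1;
        int placeable = usableRacks * cap;
        System.out.println("placeable replicas = " + placeable + " of 3");
    }
}
```

Because isGoodTarget() rejects decommissioning nodes only after the cap has been fixed, the placement loop exhausts its candidates without ever raising the cap, which matches the symptom reported here.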
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)