[
https://issues.apache.org/jira/browse/HDFS-9083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974953#comment-14974953
]
Hadoop QA commented on HDFS-9083:
---------------------------------
\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch | 0m 0s | The patch command could not apply the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12768797/HDFS-9083-branch-2.7.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | branch-2 / baa2998 |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/13198/console |
This message was automatically generated.
> Replication violates block placement policy.
> --------------------------------------------
>
> Key: HDFS-9083
> URL: https://issues.apache.org/jira/browse/HDFS-9083
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: HDFS, namenode
> Affects Versions: 2.6.0
> Reporter: Rushabh S Shah
> Assignee: Rushabh S Shah
> Priority: Blocker
> Attachments: HDFS-9083-branch-2.7.patch
>
>
> Recently we have been noticing many cases in which all the replicas of a block
> end up residing on the same rack.
> During block creation, the block placement policy is honored, but after node
> failure events occur in a particular order, the block ends up in this state.
> On investigating further, I found that BlockManager#blockHasEnoughRacks depends
> on the config net.topology.script.file.name:
> {noformat}
> if (!this.shouldCheckForEnoughRacks) {
>   return true;
> }
> {noformat}
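> The flag is set once in the BlockManager constructor, roughly as follows
> (paraphrased from branch-2, not an exact quote), so it is true only when a
> topology script is configured:
> {noformat}
> // Paraphrase of the branch-2 initialization: the rack check is enabled only
> // when net.topology.script.file.name is set.
> this.shouldCheckForEnoughRacks =
>     conf.get("net.topology.script.file.name") != null;
> {noformat}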
> We specify a custom DNSToSwitchMapping implementation via
> net.topology.node.switch.mapping.impl and no longer set the
> net.topology.script.file.name config.
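> For illustration only, a minimal sketch of that kind of custom mapping, using a
> hypothetical RackFileMapping class rather than our actual implementation:
> {noformat}
> import java.util.ArrayList;
> import java.util.List;
> import org.apache.hadoop.net.DNSToSwitchMapping;
>
> public class RackFileMapping implements DNSToSwitchMapping {
>   @Override
>   public List<String> resolve(List<String> names) {
>     List<String> racks = new ArrayList<String>();
>     for (String name : names) {
>       // A real mapping would look up the rack for each host name; a fixed
>       // rack is returned here purely to keep the sketch self-contained.
>       racks.add("/default-rack");
>     }
>     return racks;
>   }
>
>   @Override
>   public void reloadCachedMappings() {
>     // No cache in this sketch.
>   }
>
>   @Override
>   public void reloadCachedMappings(List<String> names) {
>     // No cache in this sketch.
>   }
> }
> {noformat}
> With such a class on the NameNode classpath, net.topology.node.switch.mapping.impl
> is set to its fully qualified name and net.topology.script.file.name is left
> unset, which is the combination that leaves shouldCheckForEnoughRacks false.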
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)