[ https://issues.apache.org/jira/browse/HDFS-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14187957#comment-14187957 ]

Hadoop QA commented on HDFS-7300:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12677770/HDFS-7300.v2.patch
  against trunk revision 3f48493.

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:green}+1 tests included{color}.  The patch appears to include 1 new or modified test file.

    {color:red}-1 javac{color}.  The applied patch generated 1267 javac compiler warnings (more than the trunk's current 1265 warnings).

    {color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

                  org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
                  org.apache.hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/8577//testReport/
Javac warnings: https://builds.apache.org/job/PreCommit-HDFS-Build/8577//artifact/patchprocess/diffJavacWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8577//console

This message is automatically generated.

> The getMaxNodesPerRack() method in BlockPlacementPolicyDefault is flawed
> ------------------------------------------------------------------------
>
>                 Key: HDFS-7300
>                 URL: https://issues.apache.org/jira/browse/HDFS-7300
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Kihwal Lee
>            Assignee: Kihwal Lee
>            Priority: Critical
>         Attachments: HDFS-7300.patch, HDFS-7300.v2.patch
>
>
> The {{getMaxNodesPerRack()}} method can produce an undesirable result in some cases.
> - Three replicas on two racks. The max is 3, so all three replicas can go to one rack.
> - Two replicas on two or more racks. The max is 2, so both replicas can end up in the same rack.
> {{BlockManager#isNeededReplication()}} fixes this after the block/file is closed, because {{blockHasEnoughRacks()}} will return false.  This is not only extra work, but can also break the favored nodes feature.
> When there are two racks and two favored nodes are specified in the same rack, the NN may allocate the third replica on a node in the same rack, because {{maxNodesPerRack}} is 3. When closing the file, the NN moves one of the replicas to the other rack. There is a 66% chance that a favored node is moved (two of the three co-located replicas sit on favored nodes, so a random pick hits one with probability 2/3). If {{maxNodesPerRack}} were 2, this would not happen.
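
For context, here is a minimal Java sketch of the computation being criticized. It assumes the trunk formula in {{BlockPlacementPolicyDefault#getMaxNodesPerRack()}} at the time, {{(totalReplicas - 1) / numRacks + 2}}, with the cluster-map lookups replaced by plain parameters; the class and parameter names are illustrative, not taken from the patch. It reproduces both cases from the quoted description:

{code:java}
public class MaxNodesPerRackSketch {
  // Per-rack replica cap, assuming the trunk formula at the time:
  //   maxNodesPerRack = (totalReplicas - 1) / numRacks + 2
  static int getMaxNodesPerRack(int totalReplicas, int numRacks) {
    return (totalReplicas - 1) / numRacks + 2;
  }

  public static void main(String[] args) {
    // Three replicas on two racks: (3-1)/2 + 2 = 3, so all three
    // replicas may legally land on a single rack.
    System.out.println(getMaxNodesPerRack(3, 2)); // prints 3

    // Two replicas on two racks: (2-1)/2 + 2 = 2, so both replicas
    // may end up in the same rack.
    System.out.println(getMaxNodesPerRack(2, 2)); // prints 2
  }
}
{code}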



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
