[ 
https://issues.apache.org/jira/browse/HDFS-16456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

caozhiqiang updated HDFS-16456:
-------------------------------
    Description: 
In the scenario below, decommission will fail with the TOO_MANY_NODES_ON_RACK reason:
 # Enable an EC policy, such as RS-6-3-1024k.
 # The number of racks in the cluster is equal to the replication width (9).
 # A rack has only one DN, and that DN is decommissioned.

The root cause is in the 
BlockPlacementPolicyRackFaultTolerant::getMaxNodesPerRack() function, which 
computes the maxNodesPerRack limit used when choosing targets. In this scenario, 
maxNodesPerRack is 1, which means only one datanode can be chosen from each rack.

 
{code:java}
  protected int[] getMaxNodesPerRack(int numOfChosen, int numOfReplicas) {
    int clusterSize = clusterMap.getNumOfLeaves();
    int totalNumOfReplicas = numOfChosen + numOfReplicas;
    if (totalNumOfReplicas > clusterSize) {
      numOfReplicas -= (totalNumOfReplicas-clusterSize);
      totalNumOfReplicas = clusterSize;
    }
    // No calculation needed when there is only one rack or picking one node.
    int numOfRacks = clusterMap.getNumOfRacks();
    // HDFS-14527 return default when numOfRacks = 0 to avoid
    // ArithmeticException when calc maxNodesPerRack at following logic.
    if (numOfRacks <= 1 || totalNumOfReplicas <= 1) {
      return new int[] {numOfReplicas, totalNumOfReplicas};
    }
    // If more racks than replicas, put one replica per rack.
    if (totalNumOfReplicas < numOfRacks) {
      return new int[] {numOfReplicas, 1};
    }
    // If more replicas than racks, evenly spread the replicas.
    // This calculation rounds up.
    int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
    return new int[] {numOfReplicas, maxNodesPerRack};
  } {code}
Here the line
int maxNodesPerRack = (totalNumOfReplicas - 1) / numOfRacks + 1;
is reached with totalNumOfReplicas=9 and numOfRacks=9, so maxNodesPerRack = (9 - 1) / 9 + 1 = 1.
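The rounding behavior can be reproduced in isolation. The following standalone sketch (class and method names are mine, not from HDFS) shows that with 9 replicas spread over 9 racks the ceiling division still yields 1:

```java
// Standalone reproduction of the ceiling-division logic in
// getMaxNodesPerRack(); the class and method names are illustrative.
public class MaxNodesPerRackDemo {
  // (total - 1) / racks + 1 is integer ceiling division: ceil(total / racks).
  static int maxNodesPerRack(int totalNumOfReplicas, int numOfRacks) {
    return (totalNumOfReplicas - 1) / numOfRacks + 1;
  }

  public static void main(String[] args) {
    // 9 replicas across 9 racks: each rack may hold at most 1 replica,
    // so losing the only DN in a rack leaves nowhere to re-replicate.
    System.out.println(maxNodesPerRack(9, 9));  // 1
    // With one rack fewer than replicas, two replicas may share a rack.
    System.out.println(maxNodesPerRack(9, 8));  // 2
  }
}
```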

 

When we decommission a DN that is the only node in its rack, chooseOnce() in 
BlockPlacementPolicyRackFaultTolerant::chooseTargetInOrder() will throw 
NotEnoughReplicasException, but the exception is not caught there, so the code 
never falls back to the chooseEvenlyFromRemainingRacks() function.
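The intended fallback control flow can be illustrated with a simplified, self-contained sketch. Everything below is a stand-in, not the real HDFS implementation: the method bodies are toy versions and only the method names echo the real ones.

```java
// Simplified model of the chooseOnce() / fallback control flow described
// above. All types and bodies are illustrative stand-ins.
public class FallbackSketch {
  static class NotEnoughReplicasException extends Exception {}

  // Stand-in for chooseOnce(): fails when a rack cannot supply a node.
  static void chooseOnce(boolean rackExhausted)
      throws NotEnoughReplicasException {
    if (rackExhausted) {
      throw new NotEnoughReplicasException();
    }
  }

  // Stand-in for chooseEvenlyFromRemainingRacks(): the relaxed retry.
  static String chooseEvenlyFromRemainingRacks() {
    return "chose target from a remaining rack";
  }

  // The behavior this issue argues for: catch the exception so placement
  // can fall back instead of failing the whole decommission.
  static String chooseTargetInOrder(boolean rackExhausted) {
    try {
      chooseOnce(rackExhausted);
      return "chose target in order";
    } catch (NotEnoughReplicasException e) {
      return chooseEvenlyFromRemainingRacks();
    }
  }

  public static void main(String[] args) {
    System.out.println(chooseTargetInOrder(true));
  }
}
```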

During decommission, after targets are chosen, the verifyBlockPlacement() 
function returns a total rack count that still includes the invalid 
(decommissioning) rack, so BlockPlacementStatusDefault::isPlacementPolicySatisfied() 
returns false, which also causes the decommission to fail.
{code:java}
  public boolean isPlacementPolicySatisfied() {
    return requiredRacks <= currentRacks || currentRacks >= totalRacks;
  }
 {code}
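Plugging illustrative numbers into that predicate (the concrete rack counts are mine, chosen to match this scenario) shows why it fails, and why it would pass if totalRacks excluded the invalid rack:

```java
// Standalone evaluation of the isPlacementPolicySatisfied() predicate
// quoted above; the concrete rack counts are illustrative.
public class PlacementPredicateDemo {
  static boolean isPlacementPolicySatisfied(int requiredRacks,
                                            int currentRacks,
                                            int totalRacks) {
    return requiredRacks <= currentRacks || currentRacks >= totalRacks;
  }

  public static void main(String[] args) {
    // 9 racks required, 8 racks actually hold replicas, and totalRacks
    // still counts the decommissioning rack (9): both clauses are false.
    System.out.println(isPlacementPolicySatisfied(9, 8, 9));  // false
    // If totalRacks excluded the invalid rack (8), the check would pass.
    System.out.println(isPlacementPolicySatisfied(9, 8, 8));  // true
  }
}
```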



> EC: Decommission a rack with only one dn will fail when the rack number is 
> equal with replication
> ------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-16456
>                 URL: https://issues.apache.org/jira/browse/HDFS-16456
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: ec, namenode
>    Affects Versions: 3.1.1, 3.4.0
>            Reporter: caozhiqiang
>            Priority: Major
>         Attachments: HDFS-16456.001.patch
>
>



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
