siddhantsangwan commented on code in PR #4614:
URL: https://github.com/apache/ozone/pull/4614#discussion_r1179215202


##########
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/TestSCMContainerPlacementRackAware.java:
##########
@@ -588,4 +588,213 @@ public void testOutOfServiceNodesNotSelected(int datanodeCount) {
      dnInfos.get(index).setNodeStatus(new NodeStatus(DECOMMISSIONED, HEALTHY));
     }
   }
+
+  @ParameterizedTest
+  @ValueSource(ints = {NODE_PER_RACK + 1, 2 * NODE_PER_RACK + 1})
+  public void chooseNodeWithUsedNodesMultipleRack(int datanodeCount)
+      throws SCMException {
+    assumeTrue(datanodeCount > NODE_PER_RACK);
+    setup(datanodeCount);
+    int nodeNum = 1;
+    List<DatanodeDetails> excludedNodes = new ArrayList<>();
+    List<DatanodeDetails> usedNodes = new ArrayList<>();
+
+    // 2 replicas, two existing datanodes on same rack
+    usedNodes.add(datanodes.get(0));
+    usedNodes.add(datanodes.get(1));
+
+    List<DatanodeDetails> datanodeDetails = policy.chooseDatanodes(usedNodes,
+        excludedNodes, null, nodeNum, 0, 5);
+    Assertions.assertEquals(nodeNum, datanodeDetails.size());
+
+    // New DN should be on different rack than DN0 & DN1
+    Assertions.assertTrue(!cluster.isSameParent(
+        datanodes.get(0), datanodeDetails.get(0)) &&
+        !cluster.isSameParent(datanodes.get(1), datanodeDetails.get(0)));
+
+    // 2 replicas, two existing datanodes on different rack
+    usedNodes.clear();
+    // 1st Replica on rack0
+    usedNodes.add(datanodes.get(0));
+    // 2nd Replica on rack1
+    usedNodes.add(datanodes.get(5));
+
+    datanodeDetails = policy.chooseDatanodes(usedNodes,
+        excludedNodes, null, nodeNum, 0, 5);
+    Assertions.assertEquals(nodeNum, datanodeDetails.size());
+
+    // New replica should be either on rack0 or rack1
+    Assertions.assertTrue(cluster.isSameParent(
+        datanodes.get(0), datanodeDetails.get(0)) ||
+        cluster.isSameParent(datanodes.get(1), datanodeDetails.get(0)));

Review Comment:
   Should be `cluster.isSameParent(datanodes.get(5), datanodeDetails.get(0))`.
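   Concretely, the corrected assertion would read (same fixtures as the quoted test, so `datanodes.get(5)` is the used replica on rack1):
   ```java
   // New replica should be either on rack0 (DN0's rack) or rack1 (DN5's rack)
   Assertions.assertTrue(cluster.isSameParent(
       datanodes.get(0), datanodeDetails.get(0)) ||
       cluster.isSameParent(datanodes.get(5), datanodeDetails.get(0)));
   ```
   As written, the second disjunct re-checks DN1's rack, which is the same rack as DN0's, so the assertion never actually verifies the rack1 case.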



##########
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/placement/algorithms/SCMContainerPlacementRackAware.java:
##########
@@ -248,17 +434,34 @@ public DatanodeDetails chooseNode(List<DatanodeDetails> healthyNodes) {
    * @param excludedNodes - list of the datanodes to excluded. Can be null.
    * @param affinityNodes - the chosen nodes should be on the same rack as
    *                    affinityNodes. Can be null.
+   * @param usedNodes - the chosen nodes should be on a different rack
+   *                    than the usedNodes' racks when affinityNodes is null.
    * @param dataSizeRequired - size required for the container.
    * @param metadataSizeRequired - size required for Ratis metadata.
    * @return List of chosen datanodes.
    * @throws SCMException  SCMException
    */
   private DatanodeDetails chooseNode(List<DatanodeDetails> excludedNodes,
-      List<DatanodeDetails> affinityNodes, long metadataSizeRequired,
+      List<DatanodeDetails> affinityNodes, List<DatanodeDetails> usedNodes,
+      long metadataSizeRequired,
       long dataSizeRequired) throws SCMException {
     int ancestorGen = RACK_LEVEL;
     int maxRetry = MAX_RETRY;
     List<String> excludedNodesForCapacity = null;
+
+    // When the affinity node is null, the newly selected node
+    // should be on a different rack than the used nodes' racks.
+    // Excluded nodes should just be excluded from topology node selection,
+    // which is filled in excludedNodesForCapacity.
+    // The used nodes' racks should not be part of rack selection,
+    // which is filled in excludedNodes.

Review Comment:
   From this comment I understand that the new node should not be in the same rack as `usedNodes`. Then `excludedNodes` is set to `usedNodes` and passed to
   ```
   networkTopology.chooseRandom(
                 NetConstants.ROOT, excludedNodesForCapacity, excludedNodes,
                 affinityNode, ancestorGen);
   ```
   
   But `chooseRandom`'s javadoc suggests that the 2nd parameter, `excludedScopes`, is the set of scopes (racks?) that will be excluded wholesale, while the 3rd parameter, `excludedNodes`, is just the individual nodes that will be excluded. The distinction between `excludedScopes` and `excludedNodes` is not clear to me here.
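   
   To make the question concrete: my reading is that an entry in `excludedScopes` prunes an entire subtree (e.g. a whole rack path), while an entry in `excludedNodes` removes only that single node. A minimal self-contained sketch of that distinction (toy map and invented names, not the real `NetworkTopology` API):
   ```java
   import java.util.*;
   
   // Toy model: node name -> rack path, mimicking a two-rack topology.
   public class ScopeVsNodeExclusion {
     static final Map<String, String> TOPOLOGY = new LinkedHashMap<>();
     static {
       TOPOLOGY.put("dn0", "/rack0");
       TOPOLOGY.put("dn1", "/rack0");
       TOPOLOGY.put("dn2", "/rack1");
       TOPOLOGY.put("dn3", "/rack1");
     }
   
     // An excluded scope removes every node under that rack path;
     // an excluded node removes only that one node.
     static List<String> candidates(List<String> excludedScopes,
                                    List<String> excludedNodes) {
       List<String> result = new ArrayList<>();
       for (Map.Entry<String, String> e : TOPOLOGY.entrySet()) {
         if (excludedScopes.contains(e.getValue())) continue;
         if (excludedNodes.contains(e.getKey())) continue;
         result.add(e.getKey());
       }
       return result;
     }
   
     public static void main(String[] args) {
       // Excluding the scope "/rack0" removes dn0 AND dn1.
       System.out.println(candidates(List.of("/rack0"), List.of()));
       // Excluding only the node "dn0" leaves dn1 available on the same rack.
       System.out.println(candidates(List.of(), List.of("dn0")));
     }
   }
   ```
   If that reading is right, passing `usedNodes` via the 3rd parameter excludes only those exact datanodes, not their racks; perhaps the `ancestorGen` argument is what widens the exclusion to the nodes' rack-level ancestors? Could you clarify?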



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

