sodonnel commented on code in PR #5726:
URL: https://github.com/apache/ozone/pull/5726#discussion_r1426582091


##########
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/SCMCommonPlacementPolicy.java:
##########
@@ -445,6 +445,7 @@ public ContainerPlacementStatus validateContainerPlacement(
       }
     }
     List<Integer> currentRackCount = new ArrayList<>(dns.stream()
+        .filter(d -> !(d.isDecommissioned()))

Review Comment:
   What I am saying is that we should not filter nodes here, but instead raise 
max-nodes-per-rack when more replicas than the replication factor are passed 
in. Beyond the decommission scenario there is maintenance, and there are also 
scenarios in Replication Manager where we keep extra copies of unhealthy 
replicas, so you could get 3 x IN_SERVICE plus 1 or more unhealthy.
   
   If you have 3 replicas, the expectation is at least 2 racks and a max of 2 
per rack. But what if there are 4 replicas for rep-factor 3? Then you need 2 
or 3 racks, with at most 3 per rack.
   
   So if you adjust the max per rack higher based on the number of replicas 
available, then I think it would fix the problem.
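   To illustrate the idea (a sketch only; the class and method names below are 
hypothetical, not from the Ozone code base): if `requiredRacks` racks are 
demanded, each of the other `requiredRacks - 1` racks must hold at least one 
replica, so a single rack may hold at most `totalReplicas - (requiredRacks - 1)` 
copies. Scaling the cap this way handles the decommission/maintenance/unhealthy 
cases without filtering the node list:
   
   ```java
   // Hedged sketch: raise the per-rack limit based on how many replicas
   // are actually present, rather than filtering out decommissioned nodes.
   public class MaxPerRackSketch {
   
     /**
      * With requiredRacks racks demanded, the remaining (requiredRacks - 1)
      * racks must each hold at least one replica, so any single rack can
      * hold at most totalReplicas - (requiredRacks - 1) copies.
      */
     public static int adjustedMaxPerRack(int totalReplicas, int requiredRacks) {
       return totalReplicas - (requiredRacks - 1);
     }
   
     public static void main(String[] args) {
       // Rep-factor 3 spread over 2 racks: at most 2 per rack.
       System.out.println(adjustedMaxPerRack(3, 2)); // 2
       // 4 replicas (e.g. one on a decommissioning node), 2 racks: at most 3 per rack.
       System.out.println(adjustedMaxPerRack(4, 2)); // 3
     }
   }
   ```
   
   This reproduces the expectations above: 2-per-rack for 3 replicas on 2 
racks, and 3-per-rack when a 4th replica exists for rep-factor 3.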



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
