sodonnel commented on a change in pull request #668: HDDS-3139 Pipeline placement should max out pipeline usage
URL: https://github.com/apache/hadoop-ozone/pull/668#discussion_r395610027
##########
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelinePlacementPolicy.java
##########

@@ -315,6 +314,50 @@ DatanodeDetails fallBackPickNodes(
     return results;
   }
 
+  private DatanodeDetails randomPick(List<DatanodeDetails> healthyNodes) {
+    DatanodeDetails datanodeDetails;
+    int firstNodeNdx = getRand().nextInt(healthyNodes.size());
+    int secondNodeNdx = getRand().nextInt(healthyNodes.size());
+
+    // There is a possibility that both numbers will be the same.
+    // If that is so, we just return the node.
+    if (firstNodeNdx == secondNodeNdx) {
+      datanodeDetails = healthyNodes.get(firstNodeNdx);
+    } else {
+      DatanodeDetails firstNodeDetails = healthyNodes.get(firstNodeNdx);
+      DatanodeDetails secondNodeDetails = healthyNodes.get(secondNodeNdx);
+      datanodeDetails = nodeManager.getPipelinesCount(firstNodeDetails)
+          >= nodeManager.getPipelinesCount(secondNodeDetails)
+          ? secondNodeDetails : firstNodeDetails;
+    }
+    return datanodeDetails;
+  }
+
+  private List<DatanodeDetails> getLowerLoadNodes(
+      List<DatanodeDetails> nodes, int num) {
+    int maxPipelineUsage = nodes.size() * heavyNodeCriteria /
+        HddsProtos.ReplicationFactor.THREE.getNumber();
+    return nodes.stream()
+        // Skip the nodes which exceed the load limit.
+        .filter(p -> nodeManager.getPipelinesCount(p) < num - maxPipelineUsage)
+        .collect(Collectors.toList());
+  }
+
+  private DatanodeDetails lowerLoadPick(List<DatanodeDetails> healthyNodes) {
+    int curPipelineCounts = stateManager
+        .getPipelines(HddsProtos.ReplicationType.RATIS).size();
+    DatanodeDetails datanodeDetails;
+    List<DatanodeDetails> nodes = getLowerLoadNodes(
+        healthyNodes, curPipelineCounts);
+    if (nodes.isEmpty()) {
+      // Randomly pick a node if all nodes' load is at the same level.
+      datanodeDetails = randomPick(healthyNodes);
+    } else {
+      datanodeDetails = nodes.stream().findFirst().get();

Review comment:
   Actually, thinking about this more - should this pick a random node from the lower-load nodes, rather than the first node? We call `getLowerLoadNodes`, which returns a list of nodes with the overloaded nodes removed, but I don't think the list is sorted in any way. The healthyNodes list will probably be in the same order each time it is generated from the NodeManager, which means this lowerLoadPick method might return the same node on each call until that node becomes overloaded; then it would be excluded, the next node would be picked, and so on. It would probably be better to pick a random node from the less loaded nodes. Or, sort the list by load and return the first one, so we always pick the node with the least load.
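The two alternatives the reviewer suggests could be sketched roughly as below. This is a minimal, hypothetical illustration, not the project's implementation: plain `String` ids stand in for `DatanodeDetails`, the `pipelineCounts` map stands in for `nodeManager.getPipelinesCount(...)`, and node names like `dn1` are made up.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class LowerLoadPickSketch {

    // Stand-in for nodeManager.getPipelinesCount(dn), keyed by a node id.
    static final Map<String, Integer> pipelineCounts = Map.of(
        "dn1", 5, "dn2", 2, "dn3", 2, "dn4", 7);

    // Alternative 1: sort by load and take the least-loaded node, so
    // placement always targets the node with the fewest pipelines.
    static String leastLoadedPick(List<String> nodes) {
        return nodes.stream()
            .min(Comparator.comparingInt(pipelineCounts::get))
            .orElseThrow(() -> new IllegalArgumentException("no candidates"));
    }

    // Alternative 2: pick a random node from the already-filtered
    // lower-load list, so repeated calls spread new pipelines out
    // instead of hitting the same node until it overloads.
    static String randomLowerLoadPick(List<String> lowerLoadNodes, Random rand) {
        return lowerLoadNodes.get(rand.nextInt(lowerLoadNodes.size()));
    }

    public static void main(String[] args) {
        List<String> candidates = List.of("dn1", "dn2", "dn3", "dn4");
        // dn2 and dn3 are tied at load 2; min keeps the first encountered.
        System.out.println(leastLoadedPick(candidates)); // prints dn2
        System.out.println(randomLowerLoadPick(List.of("dn2", "dn3"), new Random()));
    }
}
```

Either variant removes the bias of always returning the head of an unsorted list; the random variant additionally avoids herding every new pipeline onto the single least-loaded node between load refreshes.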