Xushaohong commented on code in PR #4939:
URL: https://github.com/apache/ozone/pull/4939#discussion_r1242074291


##########
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/block/SCMBlockDeletingService.java:
##########
@@ -273,4 +279,24 @@ public void stop() {
   public ScmBlockDeletingServiceMetrics getMetrics() {
     return this.metrics;
   }
+
+  /**
+   * Filters and returns a set of healthy datanodes that have not exceeded
+   * the deleteBlocksPendingCommandLimit.
+   *
+   * @param datanodes a list of DatanodeDetails
+   * @return a set of filtered DatanodeDetails
+   */
+  @VisibleForTesting
+  protected Set<DatanodeDetails> getDatanodesWithinCommandLimit(
+      List<DatanodeDetails> datanodes) throws NodeNotFoundException {
+    final Set<DatanodeDetails> included = new HashSet<>();
+    for (DatanodeDetails dn : datanodes) {
+      if (nodeManager.getTotalDatanodeCommandCount(dn,

Review Comment:
   The basic idea looks good, but I have one concern: the queued command count 
reported in the last datanode heartbeat may be stale. If delete operations are 
frequent, the service might be unable to find an idle datanode to send commands 
to until the next heartbeat arrives, so the deletion process may not be as 
smooth as expected.
   Please correct me if I am wrong.
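
   To make the concern concrete, here is a minimal standalone sketch (not 
Ozone code; the class, field, and limit names are hypothetical) of a filter in 
the spirit of `getDatanodesWithinCommandLimit`. Because the per-datanode count 
is a snapshot from the last heartbeat, a node that has since drained its queue 
still looks busy and is filtered out until the next heartbeat refreshes the 
count:

```java
import java.util.*;

// Hypothetical sketch: a command-count snapshot taken at heartbeat
// time can go stale between heartbeats.
public class StaleCountSketch {
  // Hypothetical stand-in for deleteBlocksPendingCommandLimit.
  static final int PENDING_COMMAND_LIMIT = 5;

  // Queued-command counts as reported in each datanode's LAST heartbeat.
  static final Map<String, Integer> lastHeartbeatCount = new HashMap<>();

  // Mirrors the filtering idea: keep only datanodes whose last-reported
  // queued-command count is below the limit.
  static Set<String> withinCommandLimit(List<String> datanodes) {
    Set<String> included = new HashSet<>();
    for (String dn : datanodes) {
      if (lastHeartbeatCount.getOrDefault(dn, 0) < PENDING_COMMAND_LIMIT) {
        included.add(dn);
      }
    }
    return included;
  }

  public static void main(String[] args) {
    lastHeartbeatCount.put("dn1", 2);  // under the limit
    lastHeartbeatCount.put("dn2", 7);  // over the limit at last heartbeat

    // dn2 is excluded even if it has since drained its queue: the
    // service only sees the count from the previous heartbeat.
    System.out.println(withinCommandLimit(Arrays.asList("dn1", "dn2")));
  }
}
```

   In this model the service is blind to anything that happened after the 
last heartbeat, which is exactly the delay window described above.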



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

