[ https://issues.apache.org/jira/browse/YARN-9278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16776536#comment-16776536 ]
Zhaohui Xin commented on YARN-9278:
-----------------------------------

Thanks for your suggestions, [~wilfreds]. I also think it is better to randomize the nodes when their number exceeds a certain threshold. The change could look like this:
{code:java}
List<FSSchedulerNode> potentialNodes = scheduler.getNodeTracker()
    .getNodesByResourceName(rr.getResourceName());
int maxTryNodeNumOnce = conf.getMaxTryNodeNumOnce();
// we should not iterate over all nodes, that would be very slow
if (ResourceRequest.ANY.equals(rr.getResourceName())
    && potentialNodes.size() > maxTryNodeNumOnce) {
  Collections.shuffle(potentialNodes);
  potentialNodes = potentialNodes.subList(0, maxTryNodeNumOnce);
}
{code}

> Shuffle nodes when selecting to be preempted nodes
> --------------------------------------------------
>
>                 Key: YARN-9278
>                 URL: https://issues.apache.org/jira/browse/YARN-9278
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: fairscheduler
>            Reporter: Zhaohui Xin
>            Assignee: Zhaohui Xin
>            Priority: Major
>
> We should *shuffle* the nodes to avoid some nodes being preempted frequently.
> Also, we should *limit* the number of nodes to make preemption more efficient.
> Just like this:
> {code:java}
> // we should not iterate over all nodes, that would be very slow
> long maxTryNodeNum =
>     context.getPreemptionConfig().getToBePreemptedNodeMaxNumOnce();
> if (potentialNodes.size() > maxTryNodeNum) {
>   Collections.shuffle(potentialNodes);
>   List<FSSchedulerNode> newPotentialNodes = new ArrayList<FSSchedulerNode>();
>   for (int i = 0; i < maxTryNodeNum; i++) {
>     newPotentialNodes.add(potentialNodes.get(i));
>   }
>   potentialNodes = newPotentialNodes;
> }
> {code}

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
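The shuffle-and-truncate pattern discussed above can be sketched as a self-contained example. The class and method names below are illustrative only, not YARN's actual API; note that the sketch copies the input list first, since `Collections.shuffle` mutates its argument and `subList` returns a view backed by the shuffled copy:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ShuffleTruncate {

    // Pick at most maxTry candidates uniformly at random by shuffling
    // a copy of the node list and taking its prefix. Shuffling a copy
    // avoids mutating the caller's list order.
    static <T> List<T> pickCandidates(List<T> nodes, int maxTry) {
        if (nodes.size() <= maxTry) {
            return nodes;
        }
        List<T> copy = new ArrayList<>(nodes);
        Collections.shuffle(copy);
        // subList is a view of the shuffled copy, which is fine here
        // because the copy is local and no longer modified.
        return copy.subList(0, maxTry);
    }

    public static void main(String[] args) {
        List<Integer> nodes = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            nodes.add(i);
        }
        // prints 10: only maxTry candidates survive the truncation
        System.out.println(pickCandidates(nodes, 10).size());
    }
}
```

Because every candidate has an equal chance of landing in the prefix, repeated preemption rounds spread evictions across the cluster instead of repeatedly hitting the first nodes in iteration order.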