[
https://issues.apache.org/jira/browse/YARN-7839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16343154#comment-16343154
]
Weiwei Yang commented on YARN-7839:
-----------------------------------
Hi [~asuresh]
OK, I think I was talking about the AppPlacementAllocator approach, because I
noticed {{AppPlacementAllocator#getPreferredNodeIterator(CandidateNodeSet<N>
candidateNodeSet)}} in the API; all I was thinking was to use the placement
constraint to filter out such candidate nodes. For the processor approach, I
agree your proposed approach can help: it won't create much overhead since it
only checks in-memory data and doesn't hold any lock. I just can't tell how
much it would help on a real cluster.
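
To make that concrete, here is a rough sketch of the filtering idea, using
simplified, hypothetical types rather than the real {{AppPlacementAllocator}}
and {{CandidateNodeSet}} classes: the candidate set behind
{{getPreferredNodeIterator}} could be narrowed to nodes that both satisfy the
constraint and have enough unallocated capacity, so the Algorithm never
proposes a node the scheduler is bound to reject.

{code:java}
// Hypothetical, simplified types -- not the actual YARN classes.
import java.util.Iterator;
import java.util.List;
import java.util.function.Predicate;

class CandidateFilterSketch {

  // Stand-in for a schedulable node and its unallocated resources.
  record Node(String host, long availMemMb, int availVcores) {}

  // Stand-in for the resources asked for by a scheduling request.
  record Ask(long memMb, int vcores) {}

  /**
   * Returns an iterator over only those candidates that satisfy the
   * placement constraint AND have room for the ask, mirroring the idea
   * of filtering inside getPreferredNodeIterator().
   */
  static Iterator<Node> preferredNodes(List<Node> candidates,
                                       Predicate<Node> constraint,
                                       Ask ask) {
    return candidates.stream()
        .filter(constraint)
        .filter(n -> n.availMemMb() >= ask.memMb()
                  && n.availVcores() >= ask.vcores())
        .iterator();
  }
}
{code}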
> Check node capacity before placing in the Algorithm
> ---------------------------------------------------
>
> Key: YARN-7839
> URL: https://issues.apache.org/jira/browse/YARN-7839
> Project: Hadoop YARN
> Issue Type: Sub-task
> Reporter: Arun Suresh
> Priority: Major
>
> Currently, the Algorithm assigns a node to a request purely based on whether
> the constraints are met. It is only later, in the scheduling phase, that the
> Queue capacity and Node capacity are checked. If the request cannot be placed
> because of unavailable Queue/Node capacity, the request is retried by the
> Algorithm.
> For clusters that are running at high utilization, we can reduce these retries
> if we perform the Node capacity check in the Algorithm as well. The Queue
> capacity check and the other user-limit checks can still be handled by the
> scheduler (since queues and user limits are tied to a specific scheduler and
> are not scheduler agnostic).
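
For illustration only, a similar sketch of the proposed check in the
Algorithm's placement loop (illustrative names, not the actual Algorithm code):
before recording a tentative placement, compare the ask against the in-memory
view of the node's unallocated capacity and skip nodes that cannot fit, so
fewer placements bounce back from the scheduler and get retried.

{code:java}
// Hypothetical sketch of the placement loop; names are illustrative.
import java.util.Iterator;
import java.util.Optional;

class PlacementSketch {

  record Node(String host, long availMemMb, int availVcores) {}
  record Ask(long memMb, int vcores) {}

  /** Picks the first constraint-satisfying node that also has capacity. */
  static Optional<Node> place(Iterator<Node> preferredNodes, Ask ask) {
    while (preferredNodes.hasNext()) {
      Node n = preferredNodes.next();
      // Cheap in-memory capacity check; no scheduler lock is taken here.
      boolean fits = n.availMemMb() >= ask.memMb()
                  && n.availVcores() >= ask.vcores();
      if (fits) {
        // Tentative placement; Queue capacity and user limits are still
        // verified later by the scheduler.
        return Optional.of(n);
      }
    }
    // No candidate fits; the caller re-queues the request for retry.
    return Optional.empty();
  }
}
{code}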