Wangda Tan commented on YARN-2022:

Hi [~sunilg], thanks for your patch. I've looked at it; some comments:

    Map<ApplicationAttemptId,Set<RMContainer>> list =
        new HashMap<ApplicationAttemptId,Set<RMContainer>>();
It's better to rename it to preemptMap, since it's not a list.
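To illustrate the rename (a minimal standalone sketch; ApplicationAttemptId and RMContainer are stubbed here so the snippet compiles outside the Hadoop tree):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical stand-ins for the YARN types, only so this compiles standalone.
class ApplicationAttemptId {}
class RMContainer {}

public class PreemptMapSketch {
    public static void main(String[] args) {
        // Renamed from "list" to "preemptMap": the structure maps each
        // application attempt to the set of its containers marked for preemption.
        Map<ApplicationAttemptId, Set<RMContainer>> preemptMap =
            new HashMap<ApplicationAttemptId, Set<RMContainer>>();
        System.out.println(preemptMap.isEmpty());
    }
}
```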

    if (Resources.lessThanOrEqual(rc, clusterResource, skippedAMSize, 
maxAMCapacity)) {
With this condition, container preemption stops once the skipped AM capacity 
has reached maxAMCapacity or below. Is that the original design intent?
If it is, a user could mis-set maxAMCapacity (e.g. maxAMCapacity equal to the 
queue's capacity), and a queue (say qA) could be full of AMs, all of which are 
asking for containers. Assume another under-satisfied queue is asking for 
resources: nothing will be preempted from qA. We should take care of this case.
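A toy illustration of the mis-configuration scenario (not the actual Resources API; lessThanOrEqual is reduced to a single memory dimension and all values are assumptions for the example):

```java
public class AmSkipSketch {
    // Simplified stand-in for Resources.lessThanOrEqual on one dimension (MB).
    static boolean lessThanOrEqual(long a, long b) {
        return a <= b;
    }

    public static void main(String[] args) {
        long queueCapacityMb = 8192;
        // Mis-configured: maxAMCapacity equals the whole queue capacity.
        long maxAMCapacityMb = queueCapacityMb;
        // Queue qA is entirely occupied by AM containers.
        long skippedAMSizeMb = 8192;

        // The guard in the patch: AM containers are skipped while the
        // accumulated skipped-AM size stays within maxAMCapacity.
        boolean amPreemptionSkipped =
            lessThanOrEqual(skippedAMSizeMb, maxAMCapacityMb);
        // With this mis-setting the check always holds, so nothing in qA is
        // ever preempted even though another queue is under-satisfied.
        System.out.println(amPreemptionSkipped);
    }
}
```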
Any thoughts? [~curino], [~mayank_bansal].

Currently, the patch adds an isMasterContainer field. We should make sure this 
field is properly set and works with the changes of YARN-1368. You can take a 
look at AbstractYarnScheduler#recoverContainersOnNode. A UT should be added for 
this corner case too.
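A sketch of the corner case such a UT should pin down (hypothetical minimal model, not the real scheduler classes: after an RM restart, a recovered AM container must come back with isMasterContainer set, derived from the attempt's master container id rather than defaulted to false):

```java
// Hypothetical minimal model of a recovered container.
class RecoveredContainer {
    private final boolean isMasterContainer;

    RecoveredContainer(boolean isMasterContainer) {
        this.isMasterContainer = isMasterContainer;
    }

    boolean isMasterContainer() {
        return isMasterContainer;
    }
}

public class RecoveryFlagSketch {
    // Stand-in for the recovery path: the flag is derived by comparing the
    // recovered container's id with the attempt's AM container id.
    static RecoveredContainer recover(long containerId, long amContainerId) {
        return new RecoveredContainer(containerId == amContainerId);
    }

    public static void main(String[] args) {
        RecoveredContainer am = recover(1, 1);   // the AM container itself
        RecoveredContainer task = recover(2, 1); // an ordinary task container
        System.out.println(am.isMasterContainer() + " " + task.isMasterContainer());
    }
}
```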


> Preempting an Application Master container can be kept as least priority when 
> multiple applications are marked for preemption by 
> ProportionalCapacityPreemptionPolicy
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------
>                 Key: YARN-2022
>                 URL: https://issues.apache.org/jira/browse/YARN-2022
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>          Components: resourcemanager
>    Affects Versions: 2.4.0
>            Reporter: Sunil G
>            Assignee: Sunil G
>         Attachments: YARN-2022-DesignDraft.docx, YARN-2022.2.patch, 
> YARN-2022.3.patch, YARN-2022.4.patch, YARN-2022.5.patch, YARN-2022.6.patch, 
> Yarn-2022.1.patch
> Cluster Size = 16GB [2NM's]
> Queue A Capacity = 50%
> Queue B Capacity = 50%
> Consider there are 3 applications running in Queue A which has taken the full 
> cluster capacity. 
> J1 = 2GB AM + 1GB * 4 Maps
> J2 = 2GB AM + 1GB * 4 Maps
> J3 = 2GB AM + 1GB * 2 Maps
> Another Job J4 is submitted in Queue B [J4 needs a 2GB AM + 1GB * 2 Maps ].
> Currently in this scenario, Job J3 will get killed, including its AM.
> It would be better if AM containers were given the least priority among multiple applications.
> In this same scenario, map tasks from J3 and J2 can be preempted instead.
> Later when cluster is free, maps can be allocated to these Jobs.

This message was sent by Atlassian JIRA