[ https://issues.apache.org/jira/browse/YARN-8179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16458432#comment-16458432 ]

kyungwan nam commented on YARN-8179:
------------------------------------

Attached a new patch, 003, rebased on the current trunk.

> Preemption does not happen due to natural_termination_factor when DRF is used
> -----------------------------------------------------------------------------
>
>                 Key: YARN-8179
>                 URL: https://issues.apache.org/jira/browse/YARN-8179
>             Project: Hadoop YARN
>          Issue Type: Bug
>            Reporter: kyungwan nam
>            Assignee: kyungwan nam
>            Priority: Major
>         Attachments: YARN-8179.001.patch, YARN-8179.002.patch, 
> YARN-8179.003.patch
>
>
> cluster
> * DominantResourceCalculator is used
> * QueueA : 50 (capacity) ~ 100 (max capacity)
> * QueueB : 50 (capacity) ~ 50 (max capacity)
> All resources have been allocated to QueueA; in particular, all Vcores are 
> allocated to QueueA.
> When App1 is submitted to QueueB, the over-utilized QueueA should be 
> preempted. However, I've hit a problem where preemption does not happen, 
> and as a result the App1 AM cannot be allocated.
> When App1 is submitted, the pending resource for the App1 AM request is 
> <Memory:2048, Vcores:1>.
> Therefore, the number of Vcores to preempt from QueueA should be 1.
> But it can become 0 due to natural_termination_factor (default 0.2), 
> since 1 * 0.2 rounds down to 0.
> We should guarantee that the resources to preempt do not become 0 even 
> after applying natural_termination_factor.
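The rounding problem described above can be sketched as follows. This is a minimal illustration only, not Hadoop's actual preemption code; the class and method names are hypothetical, and the guard shown is one possible fix in the spirit of the patch (clamp a non-zero demand to at least 1 after scaling):

```java
// Illustrative sketch of how natural_termination_factor can zero out a
// DRF preemption target. Names are hypothetical, not Hadoop's real API.
public class NaturalTerminationSketch {
    static final double NATURAL_TERMINATION_FACTOR = 0.2;

    // Per-round preemption target scaled by the factor; integer resource
    // accounting truncates, so 1 vcore * 0.2 becomes 0.
    static int scaledVcores(int pendingVcores) {
        return (int) (pendingVcores * NATURAL_TERMINATION_FACTOR);
    }

    // Guarded variant: never let a non-zero demand round down to zero,
    // so preemption can still make progress for small requests like an AM.
    static int guardedVcores(int pendingVcores) {
        int scaled = scaledVcores(pendingVcores);
        return (pendingVcores > 0 && scaled == 0) ? 1 : scaled;
    }

    public static void main(String[] args) {
        System.out.println(scaledVcores(1));   // 0 -> preemption never fires
        System.out.println(guardedVcores(1));  // 1 -> the AM's vcore is freed
    }
}
```

With memory the same scaling is harmless (2048 MB * 0.2 is still non-zero), which is why the bug only surfaces on the Vcores dimension under DRF.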



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
