[ https://issues.apache.org/jira/browse/YARN-3627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14541201#comment-14541201 ]

Bibin A Chundatt commented on YARN-3627:
----------------------------------------

[~kasha] This seems related to YARN-3405. Will try the patch soon. It would be 
 great if YARN-3405 gets resolved.

> Preemption not triggered in Fair scheduler when maxResources is set on parent 
> queue
> -----------------------------------------------------------------------------------
>
>                 Key: YARN-3627
>                 URL: https://issues.apache.org/jira/browse/YARN-3627
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: fairscheduler, scheduler
>         Environment: Suse 11 SP3, 2 NM 
>            Reporter: Bibin A Chundatt
>
> Consider the below Fair Scheduler queue configuration (an illustrative 
> allocation-file sketch follows the hierarchy):
>  
> Root (10 GB cluster resource)
> --Q1 (maxResources 4 GB)
> ----Q1.1 (maxResources 4 GB)
> ----Q1.2 (maxResources 4 GB)
> --Q2 (maxResources 6 GB)
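>  
> A rough sketch of the fair-scheduler.xml allocation file implied by the 
> hierarchy above; the vcore limits are assumed for illustration and were not 
> given in this report:
> {code}
> <?xml version="1.0"?>
> <allocations>
>   <queue name="Q1">
>     <!-- parent queue capped at 4 GB -->
>     <maxResources>4096 mb, 4 vcores</maxResources>
>     <queue name="Q1.1">
>       <maxResources>4096 mb, 4 vcores</maxResources>
>     </queue>
>     <queue name="Q1.2">
>       <maxResources>4096 mb, 4 vcores</maxResources>
>     </queue>
>   </queue>
>   <queue name="Q2">
>     <!-- 6 GB cap; no applications submitted here -->
>     <maxResources>6144 mb, 6 vcores</maxResources>
>   </queue>
> </allocations>
> {code}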
>  
> No applications are running in Q2
>  
> Submit one application to Q1.1 with 50 maps; 4 GB gets allocated to Q1.1.
> Now submit an application to Q1.2; it will always be starving for memory.
>  
> Preemption will never get triggered, since 
> yarn.scheduler.fair.preemption.cluster-utilization-threshold = 0.8 and the 
> cluster utilization stays below 0.8 (see the worked example after the code 
> below).
>  
> *FairScheduler.java*
> {code}
>   private boolean shouldAttemptPreemption() {
>     if (preemptionEnabled) {
>       return (preemptionUtilizationThreshold < Math.max(
>           (float) rootMetrics.getAllocatedMB() / clusterResource.getMemory(),
>           (float) rootMetrics.getAllocatedVirtualCores() /
>               clusterResource.getVirtualCores()));
>     }
>     return false;
>   }
> {code}
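>  
> Plugging in the numbers from this scenario shows why the check above never 
> fires. This is a standalone sketch, not FairScheduler code, and it assumes a 
> 10240 MB / 10 vcore cluster with Q1.1 pinned at its 4 GB cap:
> {code}
> public class PreemptionThresholdDemo {
>   public static void main(String[] args) {
>     // yarn.scheduler.fair.preemption.cluster-utilization-threshold
>     float threshold = 0.8f;
>     int clusterMb = 10240, clusterVcores = 10;   // total cluster resources (vcores assumed)
>     int allocatedMb = 4096, allocatedVcores = 4; // Q1.1 stuck at its maxResources cap
> 
>     // Same expression as shouldAttemptPreemption(): max of memory and vcore utilization
>     float utilization = Math.max(
>         (float) allocatedMb / clusterMb,
>         (float) allocatedVcores / clusterVcores); // = 0.4
> 
>     // 0.8 < 0.4 is false, so preemption is never attempted and Q1.2 keeps starving
>     System.out.println("attempt preemption? " + (threshold < utilization));
>   }
> }
> {code}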
> Are we supposed to configure maxResources as <0 mb and 0 cores> in a running 
> cluster so that all queues can always take the full cluster resources when 
> available?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
