[jira] [Comment Edited] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-10-20 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15591597#comment-15591597
 ] 

Sunil G edited comment on YARN-2009 at 10/20/16 11:47 AM:
--

I tested the case below:
{noformat}
String queuesConfig =
    // guaranteed,max,used,pending,reserved
    "root(=[100 100 55 170 0]);" + // root
    "-a(=[40 100 10 50 0]);" + // a
    "-b(=[60 100 45 120 0])"; // b

String appsConfig =
    // queueName\t(priority,resource,host,expression,#repeat,reserved,pending,user)
    "a\t" // app1 in a
    + "(1,1,n1,,5,false,25,user);" + // app1 a
    "a\t" // app2 in a
    + "(2,1,n1,,5,false,25,user);" + // app2 a
    "b\t" // app3 in b
    + "(4,1,n1,,40,false,20,user1);" + // app3 b
    "b\t" // app4 in b
    + "(6,1,n1,,5,false,30,user2)"; // app4 b

verify(mDisp, times(16)).handle(argThat(
    new TestProportionalCapacityPreemptionPolicy.IsPreemptionRequestFor(
        getAppAttemptId(3))));
verify(mDisp, never()).handle(argThat(
    new TestProportionalCapacityPreemptionPolicy.IsPreemptionRequestFor(
        getAppAttemptId(4))));
{noformat}

Here queue 'b' has 3 users and the queue's capacity is 60%, so I am considering 
20GB for each user (a 33.33% user limit).

With that 20GB, preemption happens for 16GB, considering 4GB is already running 
for app3. But I am not seeing any extra preemption from an app at the same 
priority level. Could you please share some more details?
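
Just to spell out the arithmetic I am assuming above (a rough sketch only, with hypothetical names; the per-user limit here is the simplified 60/3 split, not the scheduler's actual user-limit computation):
{noformat}
// Back-of-the-envelope numbers behind the times(16) expectation above.
public class QueueBUserLimitSketch {
  public static void main(String[] args) {
    int queueBGuaranteed = 60;                          // GB guaranteed to queue 'b'
    int activeUsers = 3;                                // user, user1, user2
    int perUserLimit = queueBGuaranteed / activeUsers;  // 20 GB (~33.33% user limit)
    int alreadyRunning = 4;                             // GB already running (as stated above)
    int expectedPreemption = perUserLimit - alreadyRunning;
    System.out.println("expected preemption = " + expectedPreemption + " GB"); // 16 GB
  }
}
{noformat}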


> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch, 
> YARN-2009.0009.patch, YARN-2009.0010.patch, YARN-2009.0011.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



[jira] [Comment Edited] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-10-14 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577129#comment-15577129
 ] 

Sunil G edited comment on YARN-2009 at 10/15/16 1:55 AM:
-

Thanks [~eepayne]. You are correct.
I also ran into a similar point yesterday and found that the root cause is that we 
subtract {{tmpApp.getAMUsed()}} in all cases. Ideally we should deduct it only when 
one app's resources are fully needed to satisfy the preemption demand from other 
higher-priority apps.

I found a solution along the lines of the following:
{noformat}
if (Resources.lessThan(rc, clusterResource,
    Resources.subtract(tmpApp.getUsed(), preemtableFromApp),
    tmpApp.getAMUsed())) {
  Resources.subtractFrom(preemtableFromApp, tmpApp.getAMUsed());
}
{noformat}

I think this can be placed in 
{{FifoIntraQueuePreemptionPlugin.validateOutSameAppPriorityFromDemand}}, so we can 
ensure that AMUsed is deducted only when one app's resources are fully needed for 
preemption; otherwise we do not need to consider it. I am also preparing UTs to 
cover these cases.
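
To illustrate the intent of that check with concrete numbers, here is a rough standalone sketch (not the plugin code; the Resource values and the class name are made up):
{noformat}
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator;
import org.apache.hadoop.yarn.util.resource.ResourceCalculator;
import org.apache.hadoop.yarn.util.resource.Resources;

public class AmUsedDeductionSketch {
  public static void main(String[] args) {
    ResourceCalculator rc = new DefaultResourceCalculator();
    Resource clusterResource = Resource.newInstance(100 * 1024, 100);

    // Hypothetical app state: 10GB used in total, of which 1GB is the AM container.
    Resource used = Resource.newInstance(10 * 1024, 10);
    Resource amUsed = Resource.newInstance(1024, 1);
    // Suppose higher-priority demand asks us to preempt all 10GB from this app.
    Resource preemptableFromApp = Resource.newInstance(10 * 1024, 10);

    // Same check as in the snippet above: if what would remain after preemption
    // (used - preemptable) is smaller than the AM resource, the AM container
    // would be hit, so back off by AMUsed.
    if (Resources.lessThan(rc, clusterResource,
        Resources.subtract(used, preemptableFromApp), amUsed)) {
      Resources.subtractFrom(preemptableFromApp, amUsed);
    }

    // preemptableFromApp is now 9GB: AMUsed was deducted only because the app's
    // resources were fully needed; otherwise it would stay untouched.
    System.out.println("preemptable after AM check: " + preemptableFromApp);
  }
}
{noformat}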


[jira] [Comment Edited] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-09-26 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15522161#comment-15522161
 ] 

Sunil G edited comment on YARN-2009 at 9/26/16 9:19 AM:


Thanks [~eepayne] for the detailed explanation. 

bq.when it comes time for the intra-queue preemption policy to preempt 
resources, it seems to me that the policy won't preempt enough resources.
Yes, I was also thinking along the same lines. If {{tq.used}} is less than 
{{tq.guaranteed}}, then we may end up with less preemption if we use 
{{tq.guaranteed}}. Since *used* is less than *guaranteed*, there are enough 
resources available for this queue, so high-priority apps will automatically get 
those resources from the scheduler. 
But after thinking more, there is one case. In addition to the above scenario 
({{tq.used}} is less than {{tq.guaranteed}}), assume another queue was 
over-utilizing resources, so the current queue has to wait for inter-queue 
preemption to kick in (to get *guaranteed - used*). If inter-queue preemption is 
turned off, then the current queue may not get those resources immediately. In 
such cases, I think we need to use {{tq.used}}. I would like to hear thoughts on 
this point. [~eepayne] and [~leftnoteasy], please share your thoughts.
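
To make the idea concrete, a rough sketch of what I mean (hypothetical method and flag names, not the actual policy code):
{noformat}
import org.apache.hadoop.yarn.api.records.Resource;

// Rough sketch only: which resource base the intra-queue policy should
// distribute among this queue's apps.
public class IntraQueueBaseSketch {
  static Resource baseForIntraQueuePreemption(Resource tqUsed,
      Resource tqGuaranteed, boolean interQueuePreemptionEnabled) {
    if (interQueuePreemptionEnabled) {
      // The queue can eventually reclaim (guaranteed - used) from
      // over-utilizing queues, so compute against the guarantee.
      return tqGuaranteed;
    }
    // Otherwise the queue is effectively capped at what it already holds,
    // so compute against tq.used to avoid under-preempting for
    // high-priority apps.
    return tqUsed;
  }
}
{noformat}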

