[jira] [Comment Edited] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-10-02 Thread Yufei Gu (JIRA)

[ https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189056#comment-16189056 ]

Yufei Gu edited comment on YARN-2162 at 10/2/17 11:35 PM:
--

Ran SLS with 400 nodes, 200 apps, and 2k containers. Both the baseline and the 
YARN-2162 patch were run 20 times. The results are in the uploaded file 
test-400nm-200app-2k_NODE_UPDATE.timecost.svg. Please ignore the label "Resource 
Type 0"; it actually refers to the YARN-2162 patch. There is no obvious 
performance regression; the patched version even performs slightly better, which 
is within normal fluctuation.


was (Author: yufeigu):
Ran SLS with 400 nodes, 200 apps, and 2k containers. Both the baseline and the 
YARN-2162 patch were run 20 times. 
!test-400nm-200app-2k_NODE_UPDATE.timecost.svg|thumbnail! Please ignore the label 
"Resource Type 0"; it actually refers to the YARN-2162 patch. There is no obvious 
performance regression; the patched version even performs slightly better, which 
is within normal fluctuation.

> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: test-400nm-200app-2k_NODE_UPDATE.timecost.svg, 
> YARN-2162.001.patch, YARN-2162.002.patch, YARN-2162.003.patch, 
> YARN-2162.004.patch, YARN-2162.005.patch, YARN-2162.006.patch, 
> YARN-2162.007.patch
>
>
> minResources and maxResources in the Fair Scheduler configuration are expressed 
> as absolute numbers: X MB, Y vcores. 
> As a result, whenever we expand or shrink our Hadoop cluster, we need to 
> recalculate and update minResources/maxResources accordingly, which is quite 
> inconvenient.
> We can avoid this problem if we can optionally configure these properties as a 
> percentage of cluster capacity. 
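
For illustration only: a queue in fair-scheduler.xml might then mix both forms. 
The queue name and values below are made up, and the percentage syntax 
("X% memory, Y% cpu") is assumed from the discussion in this thread rather than 
taken from the committed patch:

{code:xml}
<?xml version="1.0"?>
<allocations>
  <queue name="analytics">
    <!-- Absolute form: has to be recalculated whenever the cluster grows or shrinks. -->
    <minResources>10240 mb, 10 vcores</minResources>
    <!-- Percentage form (syntax assumed from this discussion): scales with cluster size. -->
    <maxResources>50% memory, 50% cpu</maxResources>
  </queue>
</allocations>
{code}

On a cluster with, for example, 1 TB of memory and 400 vcores, that maxResources 
would resolve to 512 GB and 200 vcores, and it would keep tracking the cluster as 
nodes are added or removed.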



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-10-02 Thread Yufei Gu (JIRA)

[ https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16189056#comment-16189056 ]

Yufei Gu edited comment on YARN-2162 at 10/2/17 11:34 PM:
--

Ran SLS with 400 nodes, 200 apps, and 2k containers. Both the baseline and the 
YARN-2162 patch were run 20 times. 
!test-400nm-200app-2k_NODE_UPDATE.timecost.svg|thumbnail! Please ignore the label 
"Resource Type 0"; it actually refers to the YARN-2162 patch. There is no obvious 
performance regression; the patched version even performs slightly better, which 
is within normal fluctuation.


was (Author: yufeigu):
Ran SLS with 400 nodes, 200 apps, and 2k containers. Both the baseline and the 
YARN-2162 patch were run 20 times. !attachment-name.jpg|thumbnail! Please ignore 
the label "Resource Type 0"; it actually refers to the YARN-2162 patch. There is 
no obvious performance regression; the patched version even performs slightly 
better, which is within normal fluctuation.

> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: test-400nm-200app-2k_NODE_UPDATE.timecost.svg, 
> YARN-2162.001.patch, YARN-2162.002.patch, YARN-2162.003.patch, 
> YARN-2162.004.patch, YARN-2162.005.patch, YARN-2162.006.patch, 
> YARN-2162.007.patch
>
>
> minResources and maxResources in the Fair Scheduler configuration are expressed 
> as absolute numbers: X MB, Y vcores. 
> As a result, whenever we expand or shrink our Hadoop cluster, we need to 
> recalculate and update minResources/maxResources accordingly, which is quite 
> inconvenient.
> We can avoid this problem if we can optionally configure these properties as a 
> percentage of cluster capacity. 






[jira] [Comment Edited] (YARN-2162) add ability in Fair Scheduler to optionally configure maxResources in terms of percentage

2017-09-25 Thread Yufei Gu (JIRA)

[ https://issues.apache.org/jira/browse/YARN-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16178690#comment-16178690 ]

Yufei Gu edited comment on YARN-2162 at 9/25/17 7:59 AM:
-

Thanks for the review, [~templedf]. Uploaded patch v6 to address your comments.
- 1. Added the check in class {{AllocationFileLoaderService}}.
- 2. Deleted.
- 3. Done, except for the policy.
- 4. FairScheduler#getResourceCalculator() returns null because FairScheduler 
is mocked.
- 5. I think using 'cpu' is better than using 'vcores'. We can consider 'vcores' 
   the unit of CPU, just as 'mb' is the unit of memory. Besides, mixing 2 vcores 
   and 50% vcores is a little confusing: is 50% vcores equal to 0.5 vcores?
- 6. We cannot drop the last whitespace, since we want to support a case like 
   "50% memory"; it may be more readable with the parentheses; made it a '*' 
   since we want to support a case like "50. % memory" (see the parsing sketch 
   after this list).
- 7. Added the negative tests.
- 8. Added tests for class {{ConfigurableResource}}.
- 9. Let's try some perf tests on SLS later.
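
Item 6 above refers to the pattern that recognizes the percentage form. Below is 
a minimal sketch of that kind of pattern; it is illustrative only, not the actual 
{{ConfigurableResource}}/{{AllocationFileLoaderService}} code, and the accepted 
resource names ("memory", "cpu") are assumed from this discussion:

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PercentageParseSketch {
  // Digits after the decimal point are optional ('*'), so "50. % memory" parses,
  // and whitespace may appear before the resource name, so "50% memory" parses too.
  private static final Pattern PERCENT =
      Pattern.compile("(\\d+)(\\.\\d*)?\\s*%\\s*(memory|cpu)");

  public static void main(String[] args) {
    for (String s : new String[] {"50% memory", "50. % memory", "12.5 % cpu"}) {
      Matcher m = PERCENT.matcher(s);
      if (m.matches()) {
        double value =
            Double.parseDouble(m.group(1) + (m.group(2) == null ? "" : m.group(2)));
        System.out.println(m.group(3) + " -> " + value + "%");
      }
    }
  }
}
{code}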



was (Author: yufeigu):
Thanks for the review, [~templedf].
- 1. Added the check in class {{AllocationFileLoaderService}}.
- 2. Deleted.
- 3. Done, except for the policy.
- 4. FairScheduler#getResourceCalculator() returns null because FairScheduler 
is mocked.
- 5. I think using 'cpu' is better than using 'vcores'. We can consider 'vcores' 
   the unit of CPU, just as 'mb' is the unit of memory. Besides, mixing 2 vcores 
   and 50% vcores is a little confusing: is 50% vcores equal to 0.5 vcores?
- 6. We cannot drop the last whitespace, since we want to support a case like 
   "50% memory"; it may be more readable with the parentheses; made it a '*' 
   since we want to support a case like "50. % memory".
- 7. Added the negative tests.
- 8. Added tests for class {{ConfigurableResource}}.
- 9. Let's try some perf tests on SLS later.


> add ability in Fair Scheduler to optionally configure maxResources in terms 
> of percentage
> -
>
> Key: YARN-2162
> URL: https://issues.apache.org/jira/browse/YARN-2162
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, scheduler
>Reporter: Ashwin Shankar
>Assignee: Yufei Gu
>  Labels: scheduler
> Attachments: YARN-2162.001.patch, YARN-2162.002.patch, 
> YARN-2162.003.patch, YARN-2162.004.patch, YARN-2162.005.patch, 
> YARN-2162.006.patch
>
>
> minResources and maxResources in the Fair Scheduler configuration are expressed 
> as absolute numbers: X MB, Y vcores. 
> As a result, whenever we expand or shrink our Hadoop cluster, we need to 
> recalculate and update minResources/maxResources accordingly, which is quite 
> inconvenient.
> We can avoid this problem if we can optionally configure these properties as a 
> percentage of cluster capacity. 


