[jira] [Commented] (YARN-3126) FairScheduler: queue's usedResource is always more than the maxResource limit

2016-05-12 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15281812#comment-15281812
 ] 

Yufei Gu commented on YARN-3126:


This was fixed by YARN-3655, so I am closing it.

> FairScheduler: queue's usedResource is always more than the maxResource limit
> -
>
> Key: YARN-3126
> URL: https://issues.apache.org/jira/browse/YARN-3126
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.3.0
> Environment: hadoop2.3.0. fair scheduler. spark 1.1.0. 
>Reporter: Xia Hu
>Assignee: Yufei Gu
>  Labels: BB2015-05-TBR, assignContainer, fairscheduler, resources
> Fix For: trunk-win
>
> Attachments: resourcelimit-02.patch, resourcelimit-test.patch, 
> resourcelimit.patch
>
>
> When submitting a Spark application (in both spark-on-yarn-cluster and 
> spark-on-yarn-client mode), the queue's usedResources assigned by the 
> fair scheduler can always grow beyond the queue's maxResources limit.
> From reading the fair scheduler code, I believe this happens because the 
> requested resources are not checked when assigning a container.
> Here is the detail:
> 1. Choose a queue. In this step, assignContainerPreCheck verifies that the 
> queue's usedResource is not bigger than its max. 
> 2. Then choose an app in the chosen queue. 
> 3. Then choose a container. And here is the problem: there is no check of 
> whether this container would push the queue's resources over its max limit. If a 
> queue's usedResource is 13G and the maxResource limit is 16G, a container 
> asking for 4G of resources may still be assigned successfully. 
> This problem always shows up with Spark applications, because we can ask for 
> different container resources in different applications. 
> By the way, I have already applied the patch from YARN-2083. 
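
The per-container check that step 3 above lacks can be sketched as follows. This is a simplified, self-contained model; the `Resource` class and method names are illustrative stand-ins, not the actual Hadoop scheduler API:

```java
// Simplified model of the missing per-assignment limit check: before granting
// a container, require usedResource + request <= maxResource for the queue.
public class QueueLimitCheck {

    // Minimal stand-in for YARN's Resource, tracking memory in GB only.
    static final class Resource {
        final int memoryGb;
        Resource(int memoryGb) { this.memoryGb = memoryGb; }
    }

    // True only if assigning 'request' keeps the queue within its max limit.
    static boolean canAssign(Resource used, Resource request, Resource max) {
        return used.memoryGb + request.memoryGb <= max.memoryGb;
    }

    public static void main(String[] args) {
        Resource used = new Resource(13); // queue currently uses 13G
        Resource max = new Resource(16);  // queue's maxResource limit is 16G
        // A 4G container would bring usage to 17G > 16G, so reject it.
        System.out.println(canAssign(used, new Resource(4), max)); // false
        // A 3G container brings usage to exactly 16G, which still fits.
        System.out.println(canAssign(used, new Resource(3), max)); // true
    }
}
```

With a gate like this in the assignment path, the 13G-used / 16G-max queue in the example above could no longer be pushed to 17G by a 4G container.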



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3126) FairScheduler: queue's usedResource is always more than the maxResource limit

2016-05-02 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15266972#comment-15266972
 ] 

Yufei Gu commented on YARN-3126:


Since this has been unassigned for a while, I am going to take it. Hi [~Xia Hu], 
please let me know if you are still working on it.




[jira] [Commented] (YARN-3126) FairScheduler: queue's usedResource is always more than the maxResource limit

2016-04-20 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15251323#comment-15251323
 ] 

Karthik Kambatla commented on YARN-3126:


Commenting without having looked at the related code. Thoughts:
# We should check this in one place. In other words, we should neither 
duplicate the checks nor disperse them across multiple code paths.
# Would it be possible to pass maxResources to assignContainer, so we could 
check it directly in either assignContainer or assignContainerPreCheck? 
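
These two points can be illustrated together: a precheck that receives the pending request's size becomes the single place where the limit is enforced, and the assignment path simply calls it. All class and method names below are hypothetical sketches, not the real FairScheduler API:

```java
import java.util.List;

// Sketch: the max-resource limit is enforced in exactly one method,
// a request-aware precheck, and assignContainer funnels through it.
public class AssignFlowSketch {

    static final class Queue {
        int usedGb;                            // current usage, in GB
        final int maxGb;                       // maxResource limit, in GB
        final List<Integer> pendingRequestsGb; // pending container sizes
        Queue(int usedGb, int maxGb, List<Integer> pendingRequestsGb) {
            this.usedGb = usedGb;
            this.maxGb = maxGb;
            this.pendingRequestsGb = pendingRequestsGb;
        }
    }

    // The single gate: admits a request only if it fits under the max.
    static boolean assignContainerPreCheck(Queue q, int requestGb) {
        return q.usedGb + requestGb <= q.maxGb;
    }

    // Assigns the first pending request that passes the gate, or -1 if none.
    static int assignContainer(Queue q) {
        for (int requestGb : q.pendingRequestsGb) {
            if (assignContainerPreCheck(q, requestGb)) { // the only limit check
                q.usedGb += requestGb;
                return requestGb;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        // 13G used, 16G max: the 4G request is skipped, the 3G request fits.
        Queue q = new Queue(13, 16, List.of(4, 3));
        System.out.println(assignContainer(q)); // 3
        System.out.println(q.usedGb);           // 16
    }
}
```

Keeping the arithmetic in one method means any later change to the limit semantics (e.g. multi-resource comparison) happens in a single place.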



[jira] [Commented] (YARN-3126) FairScheduler: queue's usedResource is always more than the maxResource limit

2016-04-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242301#comment-15242301
 ] 

Hadoop QA commented on YARN-3126:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} YARN-3126 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12733746/resourcelimit-test.patch
 |
| JIRA Issue | YARN-3126 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11091/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.





[jira] [Commented] (YARN-3126) FairScheduler: queue's usedResource is always more than the maxResource limit

2016-04-14 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242298#comment-15242298
 ] 

Tao Jie commented on YARN-3126:
---

I think this issue is quite common; we have run into the same problem.
The root cause is that the max-limit check during assignment should compare 
*current usage* + *resource to assign* with the *max resource limit*. However, 
when we have resources to assign to a queue, we only know the *current resource 
usage* and the *max resource limit*; we don't know the *resource to assign* 
until we actually assign it to an appAttempt.
This patch seems to add an additional check (checkQueueResourceLimit) on the 
*leaf queue* before assigning to the appAttempt, but a *parent queue's* 
resource usage may still exceed its max resource limit.
Also, we already have *FSQueue.assignContainerPreCheck* for the max resource 
limit; if we add a new check, the former seems unnecessary here.
[~kasha], I would like to hear your thoughts.
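
The parent-queue concern can be sketched concretely: a per-assignment check must hold for the leaf queue and every ancestor, otherwise a container that fits under the leaf's max can still push a parent over its limit. The classes below are an illustrative model, not the real queue hierarchy code:

```java
// Sketch: walk from the leaf up to the root, and admit the request only
// if it fits under the max limit of every queue on the path.
public class HierarchicalLimitCheck {

    static final class Queue {
        final Queue parent; // null for the root queue
        int usedGb;         // current usage, in GB
        final int maxGb;    // maxResource limit, in GB
        Queue(Queue parent, int usedGb, int maxGb) {
            this.parent = parent;
            this.usedGb = usedGb;
            this.maxGb = maxGb;
        }
    }

    // True only if 'requestGb' fits under the max of the leaf and all ancestors.
    static boolean canAssign(Queue leaf, int requestGb) {
        for (Queue q = leaf; q != null; q = q.parent) {
            if (q.usedGb + requestGb > q.maxGb) {
                return false; // some queue on the path would exceed its max
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Queue root = new Queue(null, 14, 16); // parent: 14G used, 16G max
        Queue leaf = new Queue(root, 5, 10);  // leaf:   5G used, 10G max
        // 4G fits under the leaf's max (9G <= 10G), but would push the
        // parent to 18G > 16G, so a leaf-only check would wrongly admit it.
        System.out.println(canAssign(leaf, 4)); // false
        System.out.println(canAssign(leaf, 2)); // true
    }
}
```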



[jira] [Commented] (YARN-3126) FairScheduler: queue's usedResource is always more than the maxResource limit

2015-05-19 Thread Xia Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550007#comment-14550007
 ] 

Xia Hu commented on YARN-3126:
--

I have just submitted a unit test; please take another look. Thanks!



[jira] [Commented] (YARN-3126) FairScheduler: queue's usedResource is always more than the maxResource limit

2015-05-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14550119#comment-14550119
 ] 

Hadoop QA commented on YARN-3126:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 19s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 20s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 44s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 3  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | findbugs |   1m 16s | The patch appears to introduce 1 
new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | yarn tests |  60m 19s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | |  77m 34s | |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-server-resourcemanager |
|  |  Inconsistent synchronization of 
org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore.isHDFS;
 locked 66% of time  Unsynchronized access at FileSystemRMStateStore.java:66% 
of time  Unsynchronized access at FileSystemRMStateStore.java:[line 156] |
| Timed out tests | 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12733746/resourcelimit-test.patch
 |
| Optional Tests | javac unit findbugs checkstyle |
| git revision | trunk / 93972a3 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/7993/artifact/patchprocess/whitespace.txt
 |
| Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/7993/artifact/patchprocess/newPatchFindbugsWarningshadoop-yarn-server-resourcemanager.html
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/7993/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/7993/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/7993/console |


This message was automatically generated.



[jira] [Commented] (YARN-3126) FairScheduler: queue's usedResource is always more than the maxResource limit

2015-05-01 Thread Craig Welch (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14523764#comment-14523764
 ] 

Craig Welch commented on YARN-3126:
---

Hi [~Xia Hu], thanks for putting together a patch for this.  Could you add some 
unit tests to verify the fix?  



[jira] [Commented] (YARN-3126) FairScheduler: queue's usedResource is always more than the maxResource limit

2015-03-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14370767#comment-14370767
 ] 

Hadoop QA commented on YARN-3126:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12697432/resourcelimit-02.patch
  against trunk revision e37ca22.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.
See 
https://builds.apache.org/job/PreCommit-YARN-Build/7035//artifact/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager.

Test results: 
https://builds.apache.org/job/PreCommit-YARN-Build/7035//testReport/
Console output: https://builds.apache.org/job/PreCommit-YARN-Build/7035//console

This message is automatically generated.



[jira] [Commented] (YARN-3126) FairScheduler: queue's usedResource is always more than the maxResource limit

2015-02-06 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14309865#comment-14309865
 ] 

Wei Yan commented on YARN-3126:
---

[~Xia Hu], I checked the latest trunk version; the problem is still there. 
Could you rebase the patch against trunk? Normally we fix problems in trunk 
rather than in a previously released version. And we may need to get YARN-2083 
committed first.
Hey [~kasha], do you have time to look at YARN-2083?



[jira] [Commented] (YARN-3126) FairScheduler: queue's usedResource is always more than the maxResource limit

2015-02-04 Thread Wei Yan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14306630#comment-14306630
 ] 

Wei Yan commented on YARN-3126:
---

[~Xia Hu], does this problem still exist in the trunk version?
