[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731273#comment-14731273
 ] 

Hudson commented on YARN-4105:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2273 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2273/])
YARN-4105. Capacity Scheduler headroom for DRF is wrong. Contributed by Chang 
Li (jlowe: rev 6eaca2e3634a88dc55689e8960352d6248c424d9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* hadoop-yarn-project/CHANGES.txt


> Capacity Scheduler headroom for DRF is wrong
> 
>
> Key: YARN-4105
> URL: https://issues.apache.org/jira/browse/YARN-4105
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.6.0
>Reporter: Chang Li
>Assignee: Chang Li
> Fix For: 2.7.2
>
> Attachments: YARN-4105.2.patch, YARN-4105.3.patch, YARN-4105.4.patch, 
> YARN-4105.patch
>
>
> This relates to the problem discussed in YARN-1857, but the min method is 
> flawed when we are using DRC. We have run into a real scenario in production 
> where queueCapacity: <...>, qconsumed: <..., vCores:361>, consumed: <...>, 
> limit: <..., vCores:755>. The headroom calculation returns 88064 when there 
> is only 1536 left in the queue, because DRC effectively compares by vcores. 
> This then caused a deadlock: the RMContainerAllocator thought there was still 
> space for a mapper and would not preempt a reducer in a full queue to 
> schedule a mapper. Propose fixing this with componentwiseMin. 
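A minimal sketch of the two min semantics described in the report above. This is not Hadoop's real API: `Res`, `drcMin`, and the cluster/headroom numbers below are simplified, hypothetical stand-ins (loosely echoing the 88064/1536 figures) that model what `Resources.min` effectively does under `DominantResourceCalculator` versus `Resources.componentwiseMin`:

```java
// Simplified stand-ins for org.apache.hadoop.yarn.api.records.Resource and
// org.apache.hadoop.yarn.util.resource.Resources; not the real Hadoop API.
public class HeadroomSketch {
    static final class Res {
        final long memoryMb;
        final int vcores;
        Res(long memoryMb, int vcores) { this.memoryMb = memoryMb; this.vcores = vcores; }
    }

    // Dominant share as used by a DRF-style calculator: the larger of the
    // per-component shares against the cluster total.
    static double dominantShare(Res r, Res cluster) {
        return Math.max((double) r.memoryMb / cluster.memoryMb,
                        (double) r.vcores / cluster.vcores);
    }

    // What a DRC-based min effectively does: it returns one of the two
    // operands whole, chosen by dominant share, so the other operand's
    // smaller components are ignored.
    static Res drcMin(Res a, Res b, Res cluster) {
        return dominantShare(a, cluster) <= dominantShare(b, cluster) ? a : b;
    }

    // The proposed fix: take the minimum of each component independently, so
    // headroom can never exceed what is left in any single dimension.
    static Res componentwiseMin(Res a, Res b) {
        return new Res(Math.min(a.memoryMb, b.memoryMb),
                       Math.min(a.vcores, b.vcores));
    }

    public static void main(String[] args) {
        Res cluster = new Res(1_000_000, 500);     // hypothetical cluster total
        Res limitRemaining = new Res(88_064, 10);  // lots of memory, few vcores
        Res queueRemaining = new Res(1_536, 400);  // little memory, many vcores

        // limitRemaining's dominant share is memory (~0.088); queueRemaining's
        // is vcores (0.8). The DRC min picks limitRemaining whole, reporting
        // 88064 MB of headroom even though only 1536 MB is left in the queue.
        Res wrong = drcMin(limitRemaining, queueRemaining, cluster);
        System.out.println("drcMin headroom memory: " + wrong.memoryMb);

        Res right = componentwiseMin(limitRemaining, queueRemaining);
        System.out.println("componentwiseMin: " + right.memoryMb + " MB, "
                + right.vcores + " vcores");
    }
}
```

With these hypothetical numbers, `drcMin` reports 88064 MB of headroom while `componentwiseMin` correctly yields 1536 MB and 10 vcores, which is why the allocator kept waiting for space that did not exist.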



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731234#comment-14731234
 ] 

Hudson commented on YARN-4105:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #335 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/335/])
YARN-4105. Capacity Scheduler headroom for DRF is wrong. Contributed by Chang 
Li (jlowe: rev 6eaca2e3634a88dc55689e8960352d6248c424d9)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java




[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731203#comment-14731203
 ] 

Hudson commented on YARN-4105:
--

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #346 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/346/])
YARN-4105. Capacity Scheduler headroom for DRF is wrong. Contributed by Chang 
Li (jlowe: rev 6eaca2e3634a88dc55689e8960352d6248c424d9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java




[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731157#comment-14731157
 ] 

Hudson commented on YARN-4105:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2295 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2295/])
YARN-4105. Capacity Scheduler headroom for DRF is wrong. Contributed by Chang 
Li (jlowe: rev 6eaca2e3634a88dc55689e8960352d6248c424d9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java




[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14731103#comment-14731103
 ] 

Hudson commented on YARN-4105:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #1083 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1083/])
YARN-4105. Capacity Scheduler headroom for DRF is wrong. Contributed by Chang 
Li (jlowe: rev 6eaca2e3634a88dc55689e8960352d6248c424d9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* hadoop-yarn-project/CHANGES.txt




[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730978#comment-14730978
 ] 

Hudson commented on YARN-4105:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #352 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/352/])
YARN-4105. Capacity Scheduler headroom for DRF is wrong. Contributed by Chang 
Li (jlowe: rev 6eaca2e3634a88dc55689e8960352d6248c424d9)
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java




[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730966#comment-14730966
 ] 

Hudson commented on YARN-4105:
--

FAILURE: Integrated in Hadoop-trunk-Commit #8403 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8403/])
YARN-4105. Capacity Scheduler headroom for DRF is wrong. Contributed by Chang 
Li (jlowe: rev 6eaca2e3634a88dc55689e8960352d6248c424d9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* hadoop-yarn-project/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestCapacityScheduler.java




[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-04 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14730906#comment-14730906
 ] 

Jason Lowe commented on YARN-4105:
--

Test failures are unrelated.  Committing this.



[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14728386#comment-14728386
 ] 

Hadoop QA commented on YARN-4105:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m  7s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  3s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 27s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 50s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 32s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | yarn tests |  52m 20s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | |  92m 19s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler |
| Timed out tests | 
org.apache.hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter
 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753876/YARN-4105.4.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d31a41c |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/8989/artifact/patchprocess/trunkFindbugsWarningshadoop-yarn-server-resourcemanager.html
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8989/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/8989/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8989/console |


This message was automatically generated.



[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-02 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14728183#comment-14728183
 ] 

Wangda Tan commented on YARN-4105:
--

Patch LGTM too, thanks [~lichangleo]. Only one nit: could you update the test 
comment from:
bq. // app 1 ask for 10GB memory and 1 vcore,
to:
bq. // allocates 10GB memory and 1 vcore to app 1.

Same for app2.



[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14727758#comment-14727758
 ] 

Hadoop QA commented on YARN-4105:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m  1s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 53s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  7s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 51s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 27s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | yarn tests |  53m 55s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | |  93m 42s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestAllocationFileLoaderService
 |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753789/YARN-4105.3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7d6687f |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/8980/artifact/patchprocess/trunkFindbugsWarningshadoop-yarn-server-resourcemanager.html
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8980/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/8980/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8980/console |


This message was automatically generated.



[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14727547#comment-14727547
 ] 

Hadoop QA commented on YARN-4105:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  16m 23s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 58s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 15s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 52s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 31s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | yarn tests |  53m 51s | Tests passed in 
hadoop-yarn-server-resourcemanager. |
| | |  93m 19s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753776/YARN-4105.2.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7d6687f |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/8975/artifact/patchprocess/trunkFindbugsWarningshadoop-yarn-server-resourcemanager.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/8975/artifact/patchprocess/whitespace.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8975/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/8975/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8975/console |


This message was automatically generated.

> Capacity Scheduler headroom for DRF is wrong
> 
>
> Key: YARN-4105
> URL: https://issues.apache.org/jira/browse/YARN-4105
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: YARN-4105.2.patch, YARN-4105.patch
>
>
> relate to the problem discussed in YARN-1857. But the min method is flawed 
> when we are using DRC. Have run into a real scenario in production where 
> queueCapacity: , qconsumed:  vCores:361>, consumed:  limit:  vCores:755>.  headRoom calculation returns 88064 where there is only 1536 
> left in the queue because DRC effectively compare by vcores. It then caused 
> deadlock because RMcontainer allocator thought there is still space for 
> mapper and won't preempt a reducer in a full queue to schedule a mapper. 
> Propose fix with componentwiseMin. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-02 Thread Chang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14727427#comment-14727427
 ] 

Chang Li commented on YARN-4105:


The failed unit test is not related to my change. I have run the test on my 
machine with the patch applied, and it passed.



[jira] [Commented] (YARN-4105) Capacity Scheduler headroom for DRF is wrong

2015-09-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14726782#comment-14726782
 ] 

Hadoop QA commented on YARN-4105:
-

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | pre-patch |  17m  0s | Pre-patch trunk has 1 extant 
Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 47s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 58s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 51s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 10  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 26s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 29s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | yarn tests |  54m 17s | Tests failed in 
hadoop-yarn-server-resourcemanager. |
| | |  93m 49s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753664/YARN-4105.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 00804e2 |
| Pre-patch Findbugs warnings | 
https://builds.apache.org/job/PreCommit-YARN-Build/8963/artifact/patchprocess/trunkFindbugsWarningshadoop-yarn-server-resourcemanager.html
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/8963/artifact/patchprocess/whitespace.txt
 |
| hadoop-yarn-server-resourcemanager test log | 
https://builds.apache.org/job/PreCommit-YARN-Build/8963/artifact/patchprocess/testrun_hadoop-yarn-server-resourcemanager.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/8963/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/8963/console |


This message was automatically generated.
