[jira] [Comment Edited] (YARN-4807) MockAM#waitForState sleep duration is too long

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222734#comment-15222734
 ] 

Karthik Kambatla edited comment on YARN-4807 at 4/2/16 5:49 AM:


For the tests that are failing because of the insufficient sleep time, can we 
(1) manually add sleeps so the tests pass, (2) file a follow-up JIRA to fix the 
test the right way, and (3) add a TODO in the code annotated with the JIRA 
(e.g. // TODO (YARN-wxyz))?


was (Author: kasha):
For the tests that are failing because of the insufficient sleep time, can we 
(1) manually add sleeps so the tests pass, (2) file a follow-up JIRA to fix the 
test the right way, and (3) add a TODO in the code annotated with the JIRA 
(e.g. // TODO (YARN-wxyz):)?

> MockAM#waitForState sleep duration is too long
> --
>
> Key: YARN-4807
> URL: https://issues.apache.org/jira/browse/YARN-4807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
>  Labels: newbie
> Attachments: YARN-4807.001.patch, YARN-4807.002.patch, 
> YARN-4807.003.patch, YARN-4807.004.patch, YARN-4807.005.patch, 
> YARN-4807.006.patch, YARN-4807.007.patch
>
>
> MockAM#waitForState sleep duration (500 ms) is too long. Also, there is 
> significant duplication with MockRM#waitForState.
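For context, a minimal sketch of the tighter polling wait under discussion. The 
helper name, generic signature, and 100 ms interval are illustrative 
assumptions, not the actual patch; a shared helper like this would also let 
MockAM and MockRM reuse one implementation:

{code:java}
import java.util.function.Supplier;

public final class WaitUtilSketch {
  private static final long POLL_INTERVAL_MS = 100; // vs. the 500 ms sleep in MockAM

  /** Poll until the supplier returns the expected state or the timeout expires. */
  public static <T> void waitForState(Supplier<T> current, T expected,
      long timeoutMs) throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!expected.equals(current.get())) {
      if (System.currentTimeMillis() > deadline) {
        throw new AssertionError(
            "Timed out waiting for " + expected + ", current: " + current.get());
      }
      Thread.sleep(POLL_INTERVAL_MS); // shorter sleep, faster tests
    }
  }
}
{code}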





[jira] [Commented] (YARN-4807) MockAM#waitForState sleep duration is too long

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222734#comment-15222734
 ] 

Karthik Kambatla commented on YARN-4807:


For the tests that are failing because of the insufficient sleep time, can we 
(1) manually add sleeps so the tests pass, (2) file a follow-up JIRA to fix the 
test the right way, and (3) add a TODO in the code annotated with the JIRA 
(e.g. // TODO (YARN-wxyz):)?

> MockAM#waitForState sleep duration is too long
> --
>
> Key: YARN-4807
> URL: https://issues.apache.org/jira/browse/YARN-4807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
>  Labels: newbie
> Attachments: YARN-4807.001.patch, YARN-4807.002.patch, 
> YARN-4807.003.patch, YARN-4807.004.patch, YARN-4807.005.patch, 
> YARN-4807.006.patch, YARN-4807.007.patch
>
>
> MockAM#waitForState sleep duration (500 ms) is too long. Also, there is 
> significant duplication with MockRM#waitForState.





[jira] [Updated] (YARN-4746) yarn web services should convert parse failures of appId to 400

2016-04-01 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4746:
---
Attachment: 0005-YARN-4746.patch

> yarn web services should convert parse failures of appId to 400
> ---
>
> Key: YARN-4746
> URL: https://issues.apache.org/jira/browse/YARN-4746
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: 0001-YARN-4746.patch, 0002-YARN-4746.patch, 
> 0003-YARN-4746.patch, 0003-YARN-4746.patch, 0004-YARN-4746.patch, 
> 0005-YARN-4746.patch
>
>
> In my WS API tests I'm seeing an error in the exception conversion of a bad 
> app ID sent in as an argument to a GET. I know it's in ATS, but a scan of the 
> core RM web services implies the same problem.
> {{WebServices.parseApplicationId()}} uses {{ConverterUtils.toApplicationId}} 
> to convert an argument; this throws IllegalArgumentException, which is then 
> handled somewhere by Jetty as a 500 error.
> In fact, it's a bad argument, which should be handled by returning a 400. 
> This can be done by catching the raised exception and explicitly converting it.
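A minimal sketch of the conversion being proposed, assuming a JAX-RS 
WebApplicationException is available on the classpath; the wrapper class and 
method body are illustrative, not the actual patch:

{code:java}
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response.Status;

import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.util.ConverterUtils;

public final class AppIdParsingSketch {
  /** A malformed app ID becomes a 400 Bad Request instead of a 500. */
  static ApplicationId parseApplicationId(String appId) {
    try {
      return ConverterUtils.toApplicationId(appId);
    } catch (IllegalArgumentException e) {
      // Bad client input: convert explicitly rather than letting the
      // IllegalArgumentException surface as an internal server error.
      throw new WebApplicationException(e, Status.BAD_REQUEST);
    }
  }
}
{code}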





[jira] [Updated] (YARN-4746) yarn web services should convert parse failures of appId to 400

2016-04-01 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4746:
---
Attachment: (was: 0005-YARN-4746.patch)

> yarn web services should convert parse failures of appId to 400
> ---
>
> Key: YARN-4746
> URL: https://issues.apache.org/jira/browse/YARN-4746
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: 0001-YARN-4746.patch, 0002-YARN-4746.patch, 
> 0003-YARN-4746.patch, 0003-YARN-4746.patch, 0004-YARN-4746.patch, 
> 0005-YARN-4746.patch
>
>
> In my WS API tests I'm seeing an error in the exception conversion of a bad 
> app ID sent in as an argument to a GET. I know it's in ATS, but a scan of the 
> core RM web services implies the same problem.
> {{WebServices.parseApplicationId()}} uses {{ConverterUtils.toApplicationId}} 
> to convert an argument; this throws IllegalArgumentException, which is then 
> handled somewhere by Jetty as a 500 error.
> In fact, it's a bad argument, which should be handled by returning a 400. 
> This can be done by catching the raised exception and explicitly converting it.





[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222729#comment-15222729
 ] 

Hadoop QA commented on YARN-4857:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 56s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 6 new + 
263 unchanged - 8 fixed = 269 total (was 271) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 51s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 5s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 10s 
{color} | 

[jira] [Commented] (YARN-4913) Yarn logs should take a -out option to write to a directory

2016-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222728#comment-15222728
 ] 

Hadoop QA commented on YARN-4913:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 38s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 31s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 9 new + 
43 unchanged - 8 fixed = 52 total (was 51) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 48s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 12s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 38s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 12s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 36s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| 

[jira] [Commented] (YARN-4905) Improve Yarn log Command line option to show log metadata

2016-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222723#comment-15222723
 ] 

Hadoop QA commented on YARN-4905:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 8 new + 
64 unchanged - 1 fixed = 72 total (was 65) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 56s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 18s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 14s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 37s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 165m 15s {color} 
| 

[jira] [Commented] (YARN-3215) Respect labels in CapacityScheduler when computing headroom

2016-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222719#comment-15222719
 ] 

Hadoop QA commented on YARN-3215:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 0 new + 211 unchanged - 4 fixed = 211 total (was 215) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 15s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 2s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 137m 22s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222668#comment-15222668
 ] 

Kai Sasaki commented on YARN-4857:
--

Thank you so much [~vvasudev] and [~leftnoteasy] for checking the history of 
commits. I fixed the checkstyle issues. 

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Attachments: YARN-4857.01.patch, YARN-4857.02.patch, 
> YARN-4857.03.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.
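For illustration, the shape of the yarn-default.xml entries in question. The 
property name below is one of the real {{yarn.resourcemanager.monitor.*}} keys, 
but the description wording is approximate rather than taken from the patch:

{code:xml}
<property>
  <name>yarn.resourcemanager.scheduler.monitor.enable</name>
  <value>false</value>
  <description>Enable a set of periodic monitors (specified in
  yarn.resourcemanager.scheduler.monitor.policies) that affect the
  scheduler, such as the CapacityScheduler preemption monitor.</description>
</property>
{code}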





[jira] [Updated] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-4857:
-
Attachment: YARN-4857.03.patch

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Attachments: YARN-4857.01.patch, YARN-4857.02.patch, 
> YARN-4857.03.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.





[jira] [Updated] (YARN-4514) [YARN-3368] Cleanup hardcoded configurations, such as RM/ATS addresses

2016-04-01 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-4514:

Assignee: Sunil G  (was: Naganarasimha G R)

> [YARN-3368] Cleanup hardcoded configurations, such as RM/ATS addresses
> --
>
> Key: YARN-4514
> URL: https://issues.apache.org/jira/browse/YARN-4514
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
>
> We have several configurations that are hard-coded, for example the RM/ATS 
> addresses; we should make them configurable. 





[jira] [Updated] (YARN-3215) Respect labels in CapacityScheduler when computing headroom

2016-04-01 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R updated YARN-3215:

Attachment: YARN-3215.v2.003.patch

Hi [~wangda],
I have rebased the patch. Could you please review?

> Respect labels in CapacityScheduler when computing headroom
> ---
>
> Key: YARN-3215
> URL: https://issues.apache.org/jira/browse/YARN-3215
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
> Attachments: YARN-3215.v1.001.patch, YARN-3215.v2.001.patch, 
> YARN-3215.v2.002.patch, YARN-3215.v2.003.patch
>
>
> In the existing CapacityScheduler, when computing the headroom of an 
> application, only the "non-labeled" nodes of the application are considered.
> But it is possible the application is asking for labeled resources, so 
> headroom-by-label (like 5G of resource available under node-label=red) is 
> required to get better resource allocation and avoid deadlocks such as 
> MAPREDUCE-5928.
> This JIRA could involve both API changes (such as adding a 
> label-to-available-resource map in AllocateResponse) and also internal 
> changes in CapacityScheduler.
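A tiny illustrative sketch of the proposed per-label headroom structure; the 
class, method, and values are assumptions for illustration, not from any 
attached patch:

{code:java}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.yarn.api.records.Resource;

public final class HeadroomByLabelSketch {
  /** Hypothetical label-to-available-resource map of the kind proposed. */
  static Map<String, Resource> sample() {
    Map<String, Resource> headroomByLabel = new HashMap<>();
    // 5G / 4 vcores available under node-label=red, per the example above.
    headroomByLabel.put("red", Resource.newInstance(5 * 1024, 4));
    // Headroom in the default (non-labeled) partition.
    headroomByLabel.put("", Resource.newInstance(2 * 1024, 2));
    return headroomByLabel;
  }
}
{code}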





[jira] [Updated] (YARN-4913) Yarn logs should take a -out option to write to a directory

2016-04-01 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-4913:

Attachment: YARN-4913.1.patch

> Yarn logs should take a -out option to write to a directory
> ---
>
> Key: YARN-4913
> URL: https://issues.apache.org/jira/browse/YARN-4913
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4913.1.patch
>
>






[jira] [Created] (YARN-4913) Yarn logs should take a -out option to write to a directory

2016-04-01 Thread Xuan Gong (JIRA)
Xuan Gong created YARN-4913:
---

 Summary: Yarn logs should take a -out option to write to a 
directory
 Key: YARN-4913
 URL: https://issues.apache.org/jira/browse/YARN-4913
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Xuan Gong
Assignee: Xuan Gong








[jira] [Commented] (YARN-4807) MockAM#waitForState sleep duration is too long

2016-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222631#comment-15222631
 ] 

Hadoop QA commented on YARN-4807:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 22 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 11s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m 21s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 80m 55s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
|   | hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestContainerResourceUsage |
|   | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | 

[jira] [Commented] (YARN-4893) Fix some intermittent test failures in TestRMAdminService

2016-04-01 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222602#comment-15222602
 ] 

Kuhu Shukla commented on YARN-4893:
---

I found the answer to the question I posted on YARN-998.

+1 lgtm (non-binding)

The Jenkins build failed for this patch: 
[https://builds.apache.org/job/PreCommit-YARN-Build/10927/]

> Fix some intermittent test failures in TestRMAdminService
> -
>
> Key: YARN-4893
> URL: https://issues.apache.org/jira/browse/YARN-4893
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: YARN-4893-002.patch, YARN-4893-003.patch, YARN-4893.patch
>
>
> As discussed in YARN-998, we need to add rm.drainEvents() after 
> rm.registerNode(), or some of the tests can fail intermittently. Also, we can 
> consider adding rm.drainEvents() within rm.registerNode(), which could be 
> more convenient.
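A minimal sketch of the pattern, using the MockRM calls named above; the 
configuration, node address, and memory value are illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.server.resourcemanager.MockRM;

public class DrainAfterRegisterSketch {
  static void registerAndDrain() throws Exception {
    MockRM rm = new MockRM(new Configuration());
    rm.start();
    // registerNode() only enqueues events on the async dispatcher...
    rm.registerNode("host1:1234", 8 * 1024);
    // ...so drain it before asserting on RM state, per the proposal above.
    rm.drainEvents();
    rm.stop();
  }
}
{code}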





[jira] [Updated] (YARN-1297) Miscellaneous Fair Scheduler speedups

2016-04-01 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-1297:
---
Attachment: YARN-1297.005.patch

> Miscellaneous Fair Scheduler speedups
> -
>
> Key: YARN-1297
> URL: https://issues.apache.org/jira/browse/YARN-1297
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Sandy Ryza
>Assignee: Yufei Gu
> Attachments: YARN-1297-1.patch, YARN-1297-2.patch, 
> YARN-1297.005.patch, YARN-1297.3.patch, YARN-1297.4.patch, YARN-1297.4.patch, 
> YARN-1297.patch, YARN-1297.patch
>
>
> I ran the Fair Scheduler's core scheduling loop through a profiler and 
> identified a bunch of minimally invasive changes that can shave off a few 
> milliseconds.
> The main one is demoting a couple of INFO log messages to DEBUG, which brought 
> my benchmark down from 16000 ms to 6000 ms.
> A few others (which had far less impact) were:
> * Most of the time in comparisons was being spent in Math.signum (sketched 
> below). I switched this to direct ifs and elses, and it halved the percentage 
> of time spent in comparisons.
> * I removed some unnecessary instantiations of Resource objects.
> * I made it so that queues' usage isn't recalculated from the applications 
> each time getResourceUsage is called.
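The Math.signum change above replaces a floating-point library call in the 
comparator's hot path with plain branches. A stand-alone illustration of the 
before/after, not the actual FairScheduler comparator:

{code:java}
import java.util.Comparator;

public final class DirectCompareSketch implements Comparator<Double> {
  @Override
  public int compare(Double a, Double b) {
    // Before (hot in the profile): return (int) Math.signum(a - b);
    // After: direct branches, no floating-point library call.
    if (a < b) {
      return -1;
    } else if (a > b) {
      return 1;
    }
    return 0;
  }
}
{code}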





[jira] [Commented] (YARN-1297) Miscellaneous Fair Scheduler speedups

2016-04-01 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222599#comment-15222599
 ] 

Yufei Gu commented on YARN-1297:


I uploaded a new patch. Since the branch head has moved forward, I manually 
applied these changes. Additionally, for {{changeContainerResource}} in class 
{{SchedulerNode}}, I changed the {{LOG.info}} call like the others. Would you 
please have a look, [~ka...@cloudera.com]? Thanks.

> Miscellaneous Fair Scheduler speedups
> -
>
> Key: YARN-1297
> URL: https://issues.apache.org/jira/browse/YARN-1297
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Sandy Ryza
>Assignee: Yufei Gu
> Attachments: YARN-1297-1.patch, YARN-1297-2.patch, YARN-1297.3.patch, 
> YARN-1297.4.patch, YARN-1297.4.patch, YARN-1297.patch, YARN-1297.patch
>
>
> I ran the Fair Scheduler's core scheduling loop through a profiler and 
> identified a bunch of minimally invasive changes that can shave off a few 
> milliseconds.
> The main one is demoting a couple of INFO log messages to DEBUG, which brought 
> my benchmark down from 16000 ms to 6000 ms.
> A few others (which had far less impact) were:
> * Most of the time in comparisons was being spent in Math.signum. I switched 
> this to direct ifs and elses, and it halved the percentage of time spent in 
> comparisons.
> * I removed some unnecessary instantiations of Resource objects.
> * I made it so that queues' usage isn't recalculated from the applications 
> each time getResourceUsage is called.





[jira] [Updated] (YARN-4911) Bad placement policy in FairScheduler causes the RM to crash

2016-04-01 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-4911:
---
Component/s: (was: yarn)
 fairscheduler

> Bad placement policy in FairScheduler causes the RM to crash
> 
>
> Key: YARN-4911
> URL: https://issues.apache.org/jira/browse/YARN-4911
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>
> When you have a fair-scheduler.xml with the rule:
>   
> 
>   
> and the queue okay1 doesn't exist, the following exception occurs in the RM:
> 2016-04-01 16:56:33,383 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
> handling event type APP_ADDED to the scheduler
> java.lang.IllegalStateException: Should have applied a rule before reaching 
> here
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueuePlacementPolicy.assignAppToQueue(QueuePlacementPolicy.java:173)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.assignToQueue(FairScheduler.java:728)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.addApplication(FairScheduler.java:634)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1224)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:691)
> at java.lang.Thread.run(Thread.java:745)
> which causes the RM to crash.
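The rule XML in the description above did not survive formatting. A 
fair-scheduler.xml of roughly this shape reproduces the crash, assuming a 
single specified rule with create="false" and no fallback rule; the queue names 
here are assumptions, not taken from the report:

{code:xml}
<?xml version="1.0"?>
<allocations>
  <queue name="okay0"/>  <!-- note: the requested queue okay1 is absent -->
  <queuePlacementPolicy>
    <!-- Sole rule: if the requested queue does not exist, nothing applies
         and the scheduler hits the IllegalStateException above. -->
    <rule name="specified" create="false"/>
  </queuePlacementPolicy>
</allocations>
{code}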





[jira] [Commented] (YARN-998) Keep NM resource updated through dynamic resource config for RM/NM restart

2016-04-01 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222587#comment-15222587
 ] 

Kuhu Shukla commented on YARN-998:
--

bq. shall we consider to move it inside of rm.registerNode()?
bq. that's nice idea . we can do I think..I'm +1 on that.

I was of the understanding that {{drainEvents}} would also address cases where 
nodes are refreshed or shut down, and basically flush any event out of the 
dispatcher queue. Wouldn't moving it under registerNode alone break that? 
Please let me know what I am missing here. Thanks!

> Keep NM resource updated through dynamic resource config for RM/NM restart
> --
>
> Key: YARN-998
> URL: https://issues.apache.org/jira/browse/YARN-998
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful, nodemanager, scheduler
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 2.8.0
>
> Attachments: YARN-998-sample.patch, YARN-998-v1.patch, 
> YARN-998-v2.1.patch, YARN-998-v2.patch, YARN-998-v3.patch
>
>
> When NM is restarted by plan or from a failure, previous dynamic resource 
> setting should be kept for consistency.





[jira] [Commented] (YARN-4878) Expose scheduling policy and max running apps over JMX for Yarn queues

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222581#comment-15222581
 ] 

Karthik Kambatla commented on YARN-4878:


Thanks for adding these, Yufei. Comments on the patch:
# FSQueue constructor: instead of creating a new AllocationConfiguration, use 
{{scheduler.allocConf}}. You probably don't need another local variable either. 
# Why do we need to set the metrics in both QueueManager and FSQueue? 
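For reference, the usual Hadoop metrics2 pattern for publishing a numeric value 
over JMX looks roughly like the sketch below; whether the patch uses exactly 
this shape, and how the non-numeric scheduling policy is exposed, is not shown 
here:

{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableGaugeInt;

@Metrics(context = "yarn")
public class QueueMetricsSketch {
  // Instantiated by the metrics system when the source is registered.
  @Metric("Maximum number of running applications")
  MutableGaugeInt maxApps;

  static QueueMetricsSketch create() {
    return DefaultMetricsSystem.instance()
        .register("QueueMetricsSketch", "sketch", new QueueMetricsSketch());
  }

  void setMaxApps(int value) {
    maxApps.set(value); // becomes visible over JMX
  }
}
{code}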

> Expose scheduling policy and max running apps over JMX for Yarn queues
> --
>
> Key: YARN-4878
> URL: https://issues.apache.org/jira/browse/YARN-4878
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 2.9.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-4878.001.patch
>
>
> There are two things that are not currently visible over JMX: the current 
> scheduling policy for a queue, and the maximum number of running apps. It 
> would be great if these could be exposed over JMX as well.





[jira] [Commented] (YARN-998) Keep NM resource updated through dynamic resource config for RM/NM restart

2016-04-01 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222577#comment-15222577
 ] 

Kuhu Shukla commented on YARN-998:
--

Verified that adding rm.drainEvents() solves the issue.

> Keep NM resource updated through dynamic resource config for RM/NM restart
> --
>
> Key: YARN-998
> URL: https://issues.apache.org/jira/browse/YARN-998
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: graceful, nodemanager, scheduler
>Reporter: Junping Du
>Assignee: Junping Du
> Fix For: 2.8.0
>
> Attachments: YARN-998-sample.patch, YARN-998-v1.patch, 
> YARN-998-v2.1.patch, YARN-998-v2.patch, YARN-998-v3.patch
>
>
> When NM is restarted by plan or from a failure, previous dynamic resource 
> setting should be kept for consistency.





[jira] [Resolved] (YARN-4912) TestRMAdminService#testResourcePersistentForNMRegistrationWithNewResource fails consistently

2016-04-01 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla resolved YARN-4912.
---
Resolution: Duplicate

Found the JIRA that is tracking this failure. Closing this as a duplicate.

> TestRMAdminService#testResourcePersistentForNMRegistrationWithNewResource 
> fails consistently
> 
>
> Key: YARN-4912
> URL: https://issues.apache.org/jira/browse/YARN-4912
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Kuhu Shukla
>
> {code}
> Running org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
> Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.971 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
> testResourcePersistentForNMRegistrationWithNewResource(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
>   Time elapsed: 0.389 sec  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<> but 
> was:<>
>   at org.junit.Assert.assertEquals(Assert.java:115)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testResourcePersistentForNMRegistrationWithNewResource(TestRMAdminService.java:311)
> {code}
> CC: [~djp].





[jira] [Created] (YARN-4912) TestRMAdminService#testResourcePersistentForNMRegistrationWithNewResource fails consistently

2016-04-01 Thread Kuhu Shukla (JIRA)
Kuhu Shukla created YARN-4912:
-

 Summary: 
TestRMAdminService#testResourcePersistentForNMRegistrationWithNewResource fails 
consistently
 Key: YARN-4912
 URL: https://issues.apache.org/jira/browse/YARN-4912
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Kuhu Shukla


{code}
Running org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 9.971 sec <<< 
FAILURE! - in org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService
testResourcePersistentForNMRegistrationWithNewResource(org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService)
  Time elapsed: 0.389 sec  <<< FAILURE!
org.junit.ComparisonFailure: expected:<> but 
was:<>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.yarn.server.resourcemanager.TestRMAdminService.testResourcePersistentForNMRegistrationWithNewResource(TestRMAdminService.java:311)
{code}

CC: [~djp].





[jira] [Commented] (YARN-4311) Removing nodes from include and exclude lists will not remove them from decommissioned nodes list

2016-04-01 Thread Kuhu Shukla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222566#comment-15222566
 ] 

Kuhu Shukla commented on YARN-4311:
---

The new test failure, 
{{TestRMAdminService#testResourcePersistentForNMRegistrationWithNewResource}}, 
occurs with or without the patch; I opened 
[YARN-4912|https://issues.apache.org/jira/browse/YARN-4912] for it. The other 
test failures are known. Requesting [~jlowe] for comments/review. Thanks a lot!

> Removing nodes from include and exclude lists will not remove them from 
> decommissioned nodes list
> -
>
> Key: YARN-4311
> URL: https://issues.apache.org/jira/browse/YARN-4311
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-4311-v1.patch, YARN-4311-v10.patch, 
> YARN-4311-v11.patch, YARN-4311-v11.patch, YARN-4311-v12.patch, 
> YARN-4311-v13.patch, YARN-4311-v13.patch, YARN-4311-v14.patch, 
> YARN-4311-v2.patch, YARN-4311-v3.patch, YARN-4311-v4.patch, 
> YARN-4311-v5.patch, YARN-4311-v6.patch, YARN-4311-v7.patch, 
> YARN-4311-v8.patch, YARN-4311-v9.patch
>
>
> In order to fully forget about a node, removing it from the include and 
> exclude lists is not sufficient: the RM still lists it under decommissioned 
> nodes. The tricky part that [~jlowe] pointed out is the case when include 
> lists are not used; in that case we don't want the nodes to fall off if they 
> are not active.





[jira] [Updated] (YARN-4850) test-fair-scheduler.xml isn't valid xml

2016-04-01 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-4850:
---
Fix Version/s: (was: 3.0.0)
   2.7.3
   2.8.0

> test-fair-scheduler.xml isn't valid xml
> ---
>
> Key: YARN-4850
> URL: https://issues.apache.org/jira/browse/YARN-4850
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, test
>Affects Versions: 2.7.2
>Reporter: Allen Wittenauer
>Assignee: Yufei Gu
>Priority: Blocker
> Fix For: 2.8.0, 2.7.3
>
> Attachments: YARN-4850.001.patch, YARN-4850.002.patch
>
>
> The ASF license should be in an actual XML-formatted comment inside the XML 
> block.
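The fix amounts to placing the header inside a real XML comment after the 
declaration; a sketch with the header abbreviated:

{code:xml}
<?xml version="1.0"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements. (Remainder of the standard ASF header
  goes here, inside this XML comment.)
-->
<allocations>
  <!-- test scheduler configuration -->
</allocations>
{code}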





[jira] [Commented] (YARN-4850) test-fair-scheduler.xml isn't valid xml

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222564#comment-15222564
 ] 

Karthik Kambatla commented on YARN-4850:


Just pulled it into branch-2, branch-2.8 and branch-2.7. 

> test-fair-scheduler.xml isn't valid xml
> ---
>
> Key: YARN-4850
> URL: https://issues.apache.org/jira/browse/YARN-4850
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, test
>Affects Versions: 2.7.2
>Reporter: Allen Wittenauer
>Assignee: Yufei Gu
>Priority: Blocker
> Fix For: 2.8.0, 2.7.3
>
> Attachments: YARN-4850.001.patch, YARN-4850.002.patch
>
>
> The ASF license should be in an actual XML-formatted comment inside the XML 
> block.





[jira] [Created] (YARN-4911) Bad placement policy in FairScheduler causes the RM to crash

2016-04-01 Thread Ray Chiang (JIRA)
Ray Chiang created YARN-4911:


 Summary: Bad placement policy in FairScheduler causes the RM to 
crash
 Key: YARN-4911
 URL: https://issues.apache.org/jira/browse/YARN-4911
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Ray Chiang
Assignee: Ray Chiang


When you have a fair-scheduler.xml with the rule:

  

  

and the queue okay1 doesn't exist, the following exception occurs in the RM:

2016-04-01 16:56:33,383 FATAL 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error in 
handling event type APP_ADDED to the scheduler
java.lang.IllegalStateException: Should have applied a rule before reaching here
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.QueuePlacementPolicy.assignAppToQueue(QueuePlacementPolicy.java:173)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.assignToQueue(FairScheduler.java:728)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.addApplication(FairScheduler.java:634)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1224)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:112)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher$EventProcessor.run(ResourceManager.java:691)
at java.lang.Thread.run(Thread.java:745)

which causes the RM to crash.






[jira] [Updated] (YARN-4850) test-fair-scheduler.xml isn't valid xml

2016-04-01 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-4850:
---
Affects Version/s: (was: 3.0.0)
   2.7.2

> test-fair-scheduler.xml isn't valid xml
> ---
>
> Key: YARN-4850
> URL: https://issues.apache.org/jira/browse/YARN-4850
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, test
>Affects Versions: 2.7.2
>Reporter: Allen Wittenauer
>Assignee: Yufei Gu
>Priority: Blocker
> Fix For: 2.8.0, 2.7.3
>
> Attachments: YARN-4850.001.patch, YARN-4850.002.patch
>
>
> The ASF license should be in an actual XML-formatted comment inside the XML 
> block.





[jira] [Commented] (YARN-4850) test-fair-scheduler.xml isn't valid xml

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222561#comment-15222561
 ] 

Karthik Kambatla commented on YARN-4850:


I think we should pull this into branch-2 etc. so we meet the licensing 
requirements going forward. /cc [~ajisakaa], [~vinodkv]

> test-fair-scheduler.xml isn't valid xml
> ---
>
> Key: YARN-4850
> URL: https://issues.apache.org/jira/browse/YARN-4850
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Yufei Gu
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: YARN-4850.001.patch, YARN-4850.002.patch
>
>
> The ASF license should be in an actual XML-formatted comment inside the XML 
> block.





[jira] [Commented] (YARN-4390) Consider container request size during CS preemption

2016-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15222552#comment-15222552
 ] 

Hadoop QA commented on YARN-4390:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 30 new + 506 unchanged - 15 fixed = 536 total (was 521) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 20s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 8s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 153m 22s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_74 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestRMAdminService |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-4807) MockAM#waitForState sleep duration is too long

2016-04-01 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222546#comment-15222546
 ] 

Yufei Gu commented on YARN-4807:


Hi [~ka...@cloudera.com], I uploaded the seventh patch. Please have a look. 
Thanks.

As we discussed, I removed the minimum waiting time in the wait-for-attempt-state 
methods. We will address it in the follow-up JIRA if any test case fails because of it. 

> MockAM#waitForState sleep duration is too long
> --
>
> Key: YARN-4807
> URL: https://issues.apache.org/jira/browse/YARN-4807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
>  Labels: newbie
> Attachments: YARN-4807.001.patch, YARN-4807.002.patch, 
> YARN-4807.003.patch, YARN-4807.004.patch, YARN-4807.005.patch, 
> YARN-4807.006.patch, YARN-4807.007.patch
>
>
> MockAM#waitForState sleep duration (500 ms) is too long. Also, there is 
> significant duplication with MockRM#waitForState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4807) MockAM#waitForState sleep duration is too long

2016-04-01 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-4807:
---
Attachment: YARN-4807.007.patch

> MockAM#waitForState sleep duration is too long
> --
>
> Key: YARN-4807
> URL: https://issues.apache.org/jira/browse/YARN-4807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
>  Labels: newbie
> Attachments: YARN-4807.001.patch, YARN-4807.002.patch, 
> YARN-4807.003.patch, YARN-4807.004.patch, YARN-4807.005.patch, 
> YARN-4807.006.patch, YARN-4807.007.patch
>
>
> MockAM#waitForState sleep duration (500 ms) is too long. Also, there is 
> significant duplication with MockRM#waitForState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4657) Javadoc comment is broken for Resources.multiplyByAndAddTo()

2016-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222544#comment-15222544
 ] 

Hudson commented on YARN-4657:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9543 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9543/])
YARN-4657. Javadoc comment is broken for Resources.multiplyByAndAddTo(). 
(kasha: rev 81d04cae41182808ace5d86cdac7e4d71871eb1e)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java


> Javadoc comment is broken for Resources.multiplyByAndAddTo()
> 
>
> Key: YARN-4657
> URL: https://issues.apache.org/jira/browse/YARN-4657
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Trivial
> Fix For: 2.9.0
>
> Attachments: YARN-4657.001.patch
>
>
> The comment is
> {code}
>   /**
>* Multiply @param rhs by @param by, and add the result to @param lhs
>* without creating any new {@link Resource} object
>*/
> {code}
> The {{@param}} tag can't be used that way.  {{\{@code rhs\}}} is the correct 
> thing to do.
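
For illustration, a minimal sketch of the corrected Javadoc (the method signature below is assumed from the description, not copied from the patch):

{code}
// Resource: org.apache.hadoop.yarn.api.records.Resource
/**
 * Multiply {@code rhs} by {@code by}, and add the result to {@code lhs}
 * without creating any new {@link Resource} object.
 */
public static void multiplyByAndAddTo(Resource lhs, Resource rhs, double by) {
  // body unchanged; only the Javadoc wording is at issue here
}
{code}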



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4895) Add subtractFrom method to ResourceUtilization class

2016-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222543#comment-15222543
 ] 

Hudson commented on YARN-4895:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9543 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9543/])
Missing file for YARN-4895. (arun suresh: rev 
5686caa9fcb59759c9286385575f31e407a97c16)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/records/TestResourceUtilization.java


> Add subtractFrom method to ResourceUtilization class
> 
>
> Key: YARN-4895
> URL: https://issues.apache.org/jira/browse/YARN-4895
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4895.001.patch, YARN-4895.002.patch
>
>
> In ResourceUtilization class, there is already an addTo method. 
> For completeness, here we are adding the dual subtractFrom method.
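
As a rough sketch, the dual operation could mirror the existing addTo (the accessor names below are assumptions for illustration, not copied from the patch):

{code}
// Hedged sketch of a ResourceUtilization method: subtract a
// (pmem, vmem, cpu) delta, mirroring the existing addTo.
public void subtractFrom(int pmem, int vmem, float cpu) {
  this.setPhysicalMemory(this.getPhysicalMemory() - pmem);
  this.setVirtualMemory(this.getVirtualMemory() - vmem);
  this.setCPU(this.getCPU() - cpu);
}
{code}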



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4657) Javadoc comment is broken for Resources.multiplyByAndAddTo()

2016-04-01 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-4657:
---
Summary: Javadoc comment is broken for Resources.multiplyByAndAddTo()  
(was: Javadoc comment is broken for 
o.a.h.yarn.util.resource.Resources.multiplyByAndAddTo())

> Javadoc comment is broken for Resources.multiplyByAndAddTo()
> 
>
> Key: YARN-4657
> URL: https://issues.apache.org/jira/browse/YARN-4657
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Trivial
> Attachments: YARN-4657.001.patch
>
>
> The comment is
> {code}
>   /**
>* Multiply @param rhs by @param by, and add the result to @param lhs
>* without creating any new {@link Resource} object
>*/
> {code}
> The {{@param}} tag can't be used that way.  {{\{@code rhs\}}} is the correct 
> thing to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4657) Javadoc comment is broken for o.a.h.yarn.util.resource.Resources.multiplyByAndAddTo()

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222515#comment-15222515
 ] 

Karthik Kambatla commented on YARN-4657:


+1, checking this in. 

> Javadoc comment is broken for 
> o.a.h.yarn.util.resource.Resources.multiplyByAndAddTo()
> -
>
> Key: YARN-4657
> URL: https://issues.apache.org/jira/browse/YARN-4657
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Trivial
> Attachments: YARN-4657.001.patch
>
>
> The comment is
> {code}
>   /**
>* Multiply @param rhs by @param by, and add the result to @param lhs
>* without creating any new {@link Resource} object
>*/
> {code}
> The {{@param}} tag can't be used that way.  {{\{@code rhs\}}} is the correct 
> thing to do.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4784) fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222509#comment-15222509
 ] 

Karthik Kambatla commented on YARN-4784:


Thanks for working on this, Yufei. Can you comment on any testing done? If it 
is not too much trouble, it would be nice to add a small test that loads an 
allocations file with the default policy set to FIFO and verifies that it 
throws an AllocationConfigurationException. 
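
A minimal sketch of such a test, where {{loadAllocationsFile}} is a hypothetical helper standing in for the real allocation-loading path:

{code}
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.AllocationConfigurationException;
import org.junit.Test;

public class TestDefaultPolicyValidation {
  @Test(expected = AllocationConfigurationException.class)
  public void testFifoRejectedAsDefaultQueueSchedulingPolicy() throws Exception {
    String allocations =
        "<?xml version=\"1.0\"?>\n"
        + "<allocations>\n"
        + "  <defaultQueueSchedulingPolicy>fifo</defaultQueueSchedulingPolicy>\n"
        + "</allocations>";
    // Hypothetical helper: the real test would write this XML to a temp file
    // and point the FairScheduler allocation loader at it.
    loadAllocationsFile(allocations);
  }

  private void loadAllocationsFile(String xml)
      throws AllocationConfigurationException {
    // Stub standing in for the real loader, which should reject "fifo"
    // as a defaultQueueSchedulingPolicy since it is invalid for non-leaf queues.
    throw new AllocationConfigurationException(
        "fifo is an invalid policy for non-leaf queues");
  }
}
{code}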

> fair scheduler: defaultQueueSchedulingPolicy should not accept fifo as a value
> --
>
> Key: YARN-4784
> URL: https://issues.apache.org/jira/browse/YARN-4784
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.7.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-4784.001.patch
>
>
> The configure item defaultQueueSchedulingPolicy should not accept fifo as a 
> value since it is an invalid value for non-leaf queues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4457) Cleanup unchecked types for EventHandler

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222457#comment-15222457
 ] 

Karthik Kambatla commented on YARN-4457:


And, I never looked closely at the previous patch. :) 

Will take a closer look at the new one. Would it be too bad to recommend 
separate JIRAs/patches for Yarn and MR? 

> Cleanup unchecked types for EventHandler
> 
>
> Key: YARN-4457
> URL: https://issues.apache.org/jira/browse/YARN-4457
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-4457.001.patch, YARN-4457.002.patch, 
> YARN-4457.003.patch
>
>
> The EventHandler class is often used in an untyped context, resulting in a 
> bunch of warnings about unchecked usage.  The culprit is the 
> {{Dispatcher.getHandler()}} method.  Fixing the typing on the method to 
> return {{EventHandler<Event>}} instead of the raw {{EventHandler}} clears up 
> the errors and doesn't introduce any incompatible changes.  In the case that 
> some code does:
> {code}
> EventHandler h = dispatcher.getHandler();
> {code}
> it will still work and will issue a compiler warning about raw types.  There 
> are, however, no instances of this issue in the current source base.
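
A minimal, self-contained sketch of the typing change (the interface shapes below follow the JIRA text, not the exact Hadoop sources):

{code}
interface Event { }

interface EventHandler<T extends Event> {
  void handle(T event);
}

interface Dispatcher {
  // Before: a raw EventHandler return type forced unchecked warnings at call
  // sites. After: EventHandler<Event> keeps typed callers warning-free, while
  // a raw assignment like "EventHandler h = d.getHandler();" still compiles
  // with only a raw-types warning.
  EventHandler<Event> getHandler();
}
{code}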



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4895) Add subtractFrom method to ResourceUtilization class

2016-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222428#comment-15222428
 ] 

Hudson commented on YARN-4895:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9542 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9542/])
YARN-4895. Add subtractFrom method to ResourceUtilization class. (arun suresh: 
rev 82621e38a0445832998bc00693279e23a98605c1)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceUtilization.java


> Add subtractFrom method to ResourceUtilization class
> 
>
> Key: YARN-4895
> URL: https://issues.apache.org/jira/browse/YARN-4895
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4895.001.patch, YARN-4895.002.patch
>
>
> In ResourceUtilization class, there is already an addTo method. 
> For completeness, here we are adding the dual subtractFrom method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2883) Queuing of container requests in the NM

2016-04-01 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222351#comment-15222351
 ] 

Konstantinos Karanasos commented on YARN-2883:
--

Thinking more about it, it might be a good option to move the running/queued 
containers to the Manager. The Monitor will continue to be the one that knows 
about the resources, but I hope we can get that information via (synchronous) 
calls without the need for events.
By doing so, I want to make sure we don't further complicate YARN-1011.
So, let me go ahead and do the refactoring to see how it looks, and I'll get 
back to you.

> Queuing of container requests in the NM
> ---
>
> Key: YARN-2883
> URL: https://issues.apache.org/jira/browse/YARN-2883
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2883-trunk.004.patch, YARN-2883-trunk.005.patch, 
> YARN-2883-trunk.006.patch, YARN-2883-trunk.007.patch, 
> YARN-2883-trunk.008.patch, YARN-2883-yarn-2877.001.patch, 
> YARN-2883-yarn-2877.002.patch, YARN-2883-yarn-2877.003.patch, 
> YARN-2883-yarn-2877.004.patch
>
>
> We propose to add a queue in each NM, where queueable container requests can 
> be held.
> Based on the available resources in the node and the containers in the queue, 
> the NM will decide when to allow the execution of a queued container.
> In order to ensure the instantaneous start of a guaranteed-start container, 
> the NM may decide to pre-empt/kill running queueable containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4390) Consider container request size during CS preemption

2016-04-01 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4390:
-
Attachment: YARN-4390.3.patch

Attached ver.3 patch.

> Consider container request size during CS preemption
> 
>
> Key: YARN-4390
> URL: https://issues.apache.org/jira/browse/YARN-4390
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Affects Versions: 3.0.0, 2.8.0, 2.7.3
>Reporter: Eric Payne
>Assignee: Wangda Tan
> Attachments: YARN-4390-design.1.pdf, YARN-4390-test-results.pdf, 
> YARN-4390.1.patch, YARN-4390.2.patch, YARN-4390.3.patch
>
>
> There are multiple reasons why preemption could unnecessarily preempt 
> containers. One is that an app could be requesting a large container (say 
> 8-GB), and the preemption monitor could conceivably preempt multiple 
> containers (say 8, 1-GB containers) in order to fill the large container 
> request. These smaller containers would then be rejected by the requesting AM 
> and potentially given right back to the preempted app.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4746) yarn web services should convert parse failures of appId to 400

2016-04-01 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4746:
---
Attachment: 0005-YARN-4746.patch

[~Naganarasimha Garla]
Thank you for reviewing the patch.
Uploading a new patch after addressing [~varun_saxena]'s comments.

> yarn web services should convert parse failures of appId to 400
> ---
>
> Key: YARN-4746
> URL: https://issues.apache.org/jira/browse/YARN-4746
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: 0001-YARN-4746.patch, 0002-YARN-4746.patch, 
> 0003-YARN-4746.patch, 0003-YARN-4746.patch, 0004-YARN-4746.patch, 
> 0005-YARN-4746.patch
>
>
> I'm seeing somewhere in the WS API tests of mine an error with exception 
> conversion of a bad app ID sent in as an argument to a GET. I know it's in 
> ATS, but a scan of the core RM web services implies the same problem.
> {{WebServices.parseApplicationId()}} uses {{ConverterUtils.toApplicationId}} 
> to convert an argument; this throws IllegalArgumentException, which is then 
> handled somewhere by jetty as a 500 error.
> In fact, it's a bad argument, which should be handled by returning a 400. 
> This can be done by catching the raised exception and explicitly converting it
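
A hedged sketch of the suggested conversion (the helper name is illustrative; {{BadRequestException}} is assumed to map to HTTP 400 in the YARN webapp layer):

{code}
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.util.ConverterUtils;
import org.apache.hadoop.yarn.webapp.BadRequestException;

final class AppIdParsing {
  static ApplicationId parseApplicationId(String appIdStr) {
    try {
      return ConverterUtils.toApplicationId(appIdStr);
    } catch (IllegalArgumentException e) {
      // A malformed app ID is a client error: surface it as a 400 rather
      // than letting the IllegalArgumentException bubble up as a 500.
      throw new BadRequestException("Invalid application id: " + appIdStr);
    }
  }
}
{code}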



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4849) [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add licenses.

2016-04-01 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4849:
-
Attachment: YARN-4849-YARN-3368.5.patch

> [YARN-3368] cleanup code base, integrate web UI related build to mvn, and add 
> licenses.
> ---
>
> Key: YARN-4849
> URL: https://issues.apache.org/jira/browse/YARN-4849
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4849-YARN-3368.1.patch, 
> YARN-4849-YARN-3368.2.patch, YARN-4849-YARN-3368.3.patch, 
> YARN-4849-YARN-3368.4.patch, YARN-4849-YARN-3368.5.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4457) Cleanup unchecked types for EventHandler

2016-04-01 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1586#comment-1586
 ] 

Daniel Templeton commented on YARN-4457:


Looks like the latest patch fails to include the previous patch.  I'll go merge 
them and join those two lines.

> Cleanup unchecked types for EventHandler
> 
>
> Key: YARN-4457
> URL: https://issues.apache.org/jira/browse/YARN-4457
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-4457.001.patch, YARN-4457.002.patch, 
> YARN-4457.003.patch
>
>
> The EventHandler class is often used in an untyped context, resulting in a 
> bunch of warnings about unchecked usage.  The culprit is the 
> {{Dispatcher.getHandler()}} method.  Fixing the typing on the method to 
> return {{EventHandler<Event>}} instead of the raw {{EventHandler}} clears up 
> the errors and doesn't introduce any incompatible changes.  In the case that 
> some code does:
> {code}
> EventHandler h = dispatcher.getHandler();
> {code}
> it will still work and will issue a compiler warning about raw types.  There 
> are, however, no instances of this issue in the current source base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4457) Cleanup unchecked types for EventHandler

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1553#comment-1553
 ] 

Karthik Kambatla commented on YARN-4457:


The first of the two changes can fit on one line. 

> Cleanup unchecked types for EventHandler
> 
>
> Key: YARN-4457
> URL: https://issues.apache.org/jira/browse/YARN-4457
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-4457.001.patch, YARN-4457.002.patch, 
> YARN-4457.003.patch
>
>
> The EventHandler class is often used in an untyped context, resulting in a 
> bunch of warnings about unchecked usage.  The culprit is the 
> {{Dispatcher.getHandler()}} method.  Fixing the typing on the method to 
> return {{EventHandler<Event>}} instead of the raw {{EventHandler}} clears up 
> the errors and doesn't introduce any incompatible changes.  In the case that 
> some code does:
> {code}
> EventHandler h = dispatcher.getHandler();
> {code}
> it will still work and will issue a compiler warning about raw types.  There 
> are, however, no instances of this issue in the current source base.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4311) Removing nodes from include and exclude lists will not remove them from decommissioned nodes list

2016-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1526#comment-1526
 ] 

Hadoop QA commented on YARN-4311:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 50s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 52s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_74. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 18s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 51s 
{color} | {color:green} hadoop-sls in the patch passed with JDK v1.8.0_74. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.7.0_95. 
{color} |

[jira] [Commented] (YARN-2883) Queuing of container requests in the NM

2016-04-01 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222196#comment-15222196
 ] 

Arun Suresh commented on YARN-2883:
---

I think we have a tradeoff here:
# On one hand, I agree with [~kkaranasos] that {{ContainersMonitor}} has more 
insight into the available/utilized resources (and, in the future, possibly 
knowledge of the queue length and wait time), not to mention that the 
{{resourceCalculaterPlugin}} etc. are all present in the Monitor; therefore, as 
with the current patch, the monitor should decide when to start/stop a 
container.
# On the other hand, I see [~kasha]'s point. Moving the queues to the manager 
means there is no back and forth, via events, between the manager and the 
monitor. We would of course need to expose methods in the monitor that return 
available/utilized resources, which the Manager can use to decide when to 
queue/start/kill containers. A rough sketch of this option follows below.

Going through the patch again, I feel Option 2 might be easier to implement 
and would have minimal impact on the existing ContainersMonitor (on first 
pass, I see only the need to notify the Manager of a finished container).

Thoughts?
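
A rough sketch of option 2, with the monitor exposing synchronous getters and the manager owning the queue (the interface and method names below are assumptions for discussion, not actual NM code):

{code}
// Hypothetical shapes for discussion only.
interface ContainersMonitor {
  // Synchronous reads the manager can use for queue/start/kill decisions,
  // instead of passing this information back and forth via events.
  long getAvailableMemoryBytes();
  int getAvailableVCores();
}

interface ContainerManager {
  // The single notification the monitor would need to send: a container
  // finished, so the manager may start the next queued one.
  void onContainerFinished(String containerId);
}
{code}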

> Queuing of container requests in the NM
> ---
>
> Key: YARN-2883
> URL: https://issues.apache.org/jira/browse/YARN-2883
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2883-trunk.004.patch, YARN-2883-trunk.005.patch, 
> YARN-2883-trunk.006.patch, YARN-2883-trunk.007.patch, 
> YARN-2883-trunk.008.patch, YARN-2883-yarn-2877.001.patch, 
> YARN-2883-yarn-2877.002.patch, YARN-2883-yarn-2877.003.patch, 
> YARN-2883-yarn-2877.004.patch
>
>
> We propose to add a queue in each NM, where queueable container requests can 
> be held.
> Based on the available resources in the node and the containers in the queue, 
> the NM will decide when to allow the execution of a queued container.
> In order to ensure the instantaneous start of a guaranteed-start container, 
> the NM may decide to pre-empt/kill running queueable containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated YARN-4857:

Fix Version/s: (was: 2.9.0)

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Attachments: YARN-4857.01.patch, YARN-4857.02.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-3773) hadoop-yarn-server-nodemanager needs cross-platform equivalents of Linux /sbin/tc

2016-04-01 Thread Alan Burlison (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Burlison updated YARN-3773:

Summary: hadoop-yarn-server-nodemanager needs cross-platform equivalents of 
Linux /sbin/tc  (was: hadoop-yarn-server-nodemanager's use of Linux /sbin/tc is 
non-portable)

> hadoop-yarn-server-nodemanager needs cross-platform equivalents of Linux 
> /sbin/tc
> -
>
> Key: YARN-3773
> URL: https://issues.apache.org/jira/browse/YARN-3773
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
> Environment: BSD OSX Solaris Windows Linux
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
>  makes use of the Linux-only executable /sbin/tc 
> (http://lartc.org/manpages/tc.txt)  but there is no corresponding 
> functionality for non-Linux platforms. The code in question also seems to try 
> to execute tc even on platforms where it will never exist.
> Other platforms provide similar functionality, e.g. Solaris has an extensive 
> range of network management features 
> (http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-095-s11-app-traffic-525038.html).
>  Work is needed to abstract the network management features of Yarn so that 
> the same facilities for network management can be provided on all platforms 
> that provide the requisite functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4807) MockAM#waitForState sleep duration is too long

2016-04-01 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222045#comment-15222045
 ] 

Yufei Gu commented on YARN-4807:


BTW, I created YARN-4907 as a follow-up JIRA for the inconsistencies. 

> MockAM#waitForState sleep duration is too long
> --
>
> Key: YARN-4807
> URL: https://issues.apache.org/jira/browse/YARN-4807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
>  Labels: newbie
> Attachments: YARN-4807.001.patch, YARN-4807.002.patch, 
> YARN-4807.003.patch, YARN-4807.004.patch, YARN-4807.005.patch, 
> YARN-4807.006.patch
>
>
> MockAM#waitForState sleep duration (500 ms) is too long. Also, there is 
> significant duplication with MockRM#waitForState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4807) MockAM#waitForState sleep duration is too long

2016-04-01 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222023#comment-15222023
 ] 

Yufei Gu commented on YARN-4807:


Thanks [~ka...@cloudera.com] for the detailed review. 
1. Yes. Besides, I also use "private final boolean useNullRMNodeLabelsManager" 
instead of "final private boolean useNullRMNodeLabelsManager". 
2. Modified as suggested.
3. Modified as suggested. For the minimum waiting time, I mentioned it as one 
inconsistency in a previous comment. Thanks for the explanation.
4 and 5. I mentioned them as inconsistencies as well. It's not too much work; 
we just need to decide whether to do it here or in the next JIRA.
6. Nice suggestion. But moving {{int loop = 0}} into the first for loop would 
leave the second for loop with no initial value for {{loop}}. 

Renaming {{loop}} to {{retries}}: yeah, good idea.

> MockAM#waitForState sleep duration is too long
> --
>
> Key: YARN-4807
> URL: https://issues.apache.org/jira/browse/YARN-4807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
>  Labels: newbie
> Attachments: YARN-4807.001.patch, YARN-4807.002.patch, 
> YARN-4807.003.patch, YARN-4807.004.patch, YARN-4807.005.patch, 
> YARN-4807.006.patch
>
>
> MockAM#waitForState sleep duration (500 ms) is too long. Also, there is 
> significant duplication with MockRM#waitForState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222000#comment-15222000
 ] 

Wangda Tan edited comment on YARN-4857 at 4/1/16 5:18 PM:
--

Thanks [~kaisasak],

[~vvasudev],
They're CS-specific configs. Since they were in yarn-site.xml before, and we 
documented their existence in yarn-site.xml in a couple of docs/blogs, we'd 
better move them back to yarn-site to reduce confusion.

After YARN-4822, we support reading preemption configs from both 
yarn-site.xml and capacity-scheduler.xml. But we can document them in 
yarn-site.xml.

+[~sunilg].


was (Author: leftnoteasy):
Thanks [~kaisasak],

[~vvasudev],
They're CS-specific configs. Since they were in yarn-site.xml before, we can 
move them back to yarn-site to reduce confusion.

After YARN-4822, we support reading preemption configs from both 
yarn-site.xml and capacity-scheduler.xml. But we can document them in 
yarn-site.xml.

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch, YARN-4857.02.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15222000#comment-15222000
 ] 

Wangda Tan commented on YARN-4857:
--

Thanks [~kaisasak],

[~vvasudev],
They're CS-specific configs. Since they were in yarn-site.xml before, we can 
move them back to yarn-site to reduce confusion.

After YARN-4822, we support reading preemption configs from both 
yarn-site.xml and capacity-scheduler.xml. But we can document them in 
yarn-site.xml.

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch, YARN-4857.02.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be updated to match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3773) hadoop-yarn-server-nodemanager's use of Linux /sbin/tc is non-portable

2016-04-01 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221973#comment-15221973
 ] 

Sidharta Seethana commented on YARN-3773:
-

Ok, thanks for the clarification. Could you please update the description 
accordingly? Folks running into this JIRA might get an incorrect impression of 
how the container-executor is used. 

> hadoop-yarn-server-nodemanager's use of Linux /sbin/tc is non-portable
> --
>
> Key: YARN-3773
> URL: https://issues.apache.org/jira/browse/YARN-3773
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
> Environment: BSD OSX Solaris Windows Linux
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
>  makes use of the Linux-only executable /sbin/tc 
> (http://lartc.org/manpages/tc.txt)  but there is no corresponding 
> functionality for non-Linux platforms. The code in question also seems to try 
> to execute tc even on platforms where it will never exist.
> Other platforms provide similar functionality, e.g. Solaris has an extensive 
> range of network management features 
> (http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-095-s11-app-traffic-525038.html).
>  Work is needed to abstract the network management features of Yarn so that 
> the same facilities for network management can be provided on all platforms 
> that provide the requisite functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4311) Removing nodes from include and exclude lists will not remove them from decommissioned nodes list

2016-04-01 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-4311:
--
Attachment: YARN-4311-v14.patch

Same patch, trying to get PreCommit to pick it up.

> Removing nodes from include and exclude lists will not remove them from 
> decommissioned nodes list
> -
>
> Key: YARN-4311
> URL: https://issues.apache.org/jira/browse/YARN-4311
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.6.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-4311-v1.patch, YARN-4311-v10.patch, 
> YARN-4311-v11.patch, YARN-4311-v11.patch, YARN-4311-v12.patch, 
> YARN-4311-v13.patch, YARN-4311-v13.patch, YARN-4311-v14.patch, 
> YARN-4311-v2.patch, YARN-4311-v3.patch, YARN-4311-v4.patch, 
> YARN-4311-v5.patch, YARN-4311-v6.patch, YARN-4311-v7.patch, 
> YARN-4311-v8.patch, YARN-4311-v9.patch
>
>
> In order to fully forget about a node, removing the node from the include and 
> exclude lists is not sufficient. The RM lists it under Decomm-ed nodes. The 
> tricky part that [~jlowe] pointed out was the case when include lists are not 
> used; in that case, we don't want the nodes to fall off if they are not active.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4795) ContainerMetrics drops records

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221903#comment-15221903
 ] 

Karthik Kambatla commented on YARN-4795:


Thanks for reporting and working on this, Daniel. 

The TestContainerMetrics part of the patch doesn't apply any more. One comment 
on the patch: is it possible that flushOnPeriod and finished are both true? If 
so, it looks like we would schedule another task unnecessarily. How about 
updating the check from {{if (flushOnPeriod)}} to {{if (flushOnPeriod && 
!finished)}}? That mimics the previous version better too. 
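
A minimal sketch of the suggested guard (the field and method names are taken from the comment above, with the scheduling details elided):

{code}
class FlushGuardSketch {
  private boolean flushOnPeriod;
  private boolean finished;

  void maybeScheduleNextFlush() {
    // Don't re-arm the flush timer once the container has finished, even if
    // the periodic-flush flag happens to be set at the same time.
    if (flushOnPeriod && !finished) {
      scheduleNextFlush();
    }
  }

  private void scheduleNextFlush() { /* timer re-arm elided */ }
}
{code}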

> ContainerMetrics drops records
> --
>
> Key: YARN-4795
> URL: https://issues.apache.org/jira/browse/YARN-4795
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-4795.001.patch
>
>
> The metrics2 system was implemented to deal with persistent sources.  
> {{ContainerMetrics}} is an ephemeral source, and so it causes problems.  
> Specifically, the {{ContainerMetrics}} only reports metrics once after the 
> container has been stopped.  This behavior is a problem because the metrics2 
> system can ask sources for reports that will be quietly dropped by the sinks 
> that care.  (It's a metrics2 feature, not a bug.)  If that final report is 
> silently dropped, it's lost, because the {{ContainerMetrics}} won't report 
> anything else ever anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2883) Queuing of container requests in the NM

2016-04-01 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221902#comment-15221902
 ] 

Konstantinos Karanasos commented on YARN-2883:
--

Let me make some clarifications to help, [~kasha].

In order to decide when to start a container from the queue, we need to know 
the available resources, and this is knowledge that only the 
{{ContainersMonitorImpl}} has. This is the reason I have added the queues 
to the Monitor. 
Moreover, for the NM to estimate and send its expected queue wait time to the 
RM (to eventually help with overcommitment or queuing from the RM to the NMs), 
it is much more convenient to have both running and queued containers in the 
same class.

On the other hand, I do agree that the {{ContainersMonitorImpl}} should have a 
more passive role. However, even today, the Monitor is capable of 
killing containers (when they exceed their allotted resources), so its role is 
not that passive either. I kept the same logic by not allowing the Monitor to 
actually start or stop containers, but rather to inform the 
{{ContainerManagerImpl}} to do so.

That said, if we were to refactor a big part of the NM code, we could make 
things even cleaner. Going further, this is what has been proposed in YARN-4597.

bq. Also, ContainersMonitorImpl will then have state that needs to be persisted 
for a work-preserving NM restart.
If I'm not wrong, that should not be an issue, because I am keeping the queued 
containers in the Context.

> Queuing of container requests in the NM
> ---
>
> Key: YARN-2883
> URL: https://issues.apache.org/jira/browse/YARN-2883
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2883-trunk.004.patch, YARN-2883-trunk.005.patch, 
> YARN-2883-trunk.006.patch, YARN-2883-trunk.007.patch, 
> YARN-2883-trunk.008.patch, YARN-2883-yarn-2877.001.patch, 
> YARN-2883-yarn-2877.002.patch, YARN-2883-yarn-2877.003.patch, 
> YARN-2883-yarn-2877.004.patch
>
>
> We propose to add a queue in each NM, where queueable container requests can 
> be held.
> Based on the available resources in the node and the containers in the queue, 
> the NM will decide when to allow the execution of a queued container.
> In order to ensure the instantaneous start of a guaranteed-start container, 
> the NM may decide to pre-empt/kill running queueable containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4807) MockAM#waitForState sleep duration is too long

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221887#comment-15221887
 ] 

Karthik Kambatla commented on YARN-4807:


One other nit: can we rename the {{loop}} variables to {{retries}} for readability? 

> MockAM#waitForState sleep duration is too long
> --
>
> Key: YARN-4807
> URL: https://issues.apache.org/jira/browse/YARN-4807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
>  Labels: newbie
> Attachments: YARN-4807.001.patch, YARN-4807.002.patch, 
> YARN-4807.003.patch, YARN-4807.004.patch, YARN-4807.005.patch, 
> YARN-4807.006.patch
>
>
> MockAM#waitForState sleep duration (500 ms) is too long. Also, there is 
> significant duplication with MockRM#waitForState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4807) MockAM#waitForState sleep duration is too long

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221876#comment-15221876
 ] 

Karthik Kambatla commented on YARN-4807:


Few comments, mostly nits:

MockRM
# Can we use "private static final" instead of "static final private" for the 
new constants? 
# waitForState(ApplicationId appId, RMAppState finalState): 
## move {{int loop = 0}} to the for loop
## After the loop, just add an assert that verifies the "state" irrespective of 
why we exited the loop. 
# waitForState(RMAppAttempt attempt, RMAppAttemptState finalState, int 
timeoutMsecs)
## move {{int loop = 0}} to the for loop
## After the for loop, why are we sleeping for the remaining time? I would 
think this slows down tests significantly.
## just add an assert that verifies the "state" irrespective of why we exited 
the loop. 
# waitForContainerToComplete - do we not have any timeout on this? I recommend 
adding one. 
# Same goes for waitForNewAMToLaunchAndRegister
# waitForState(Collection<MockNM> nms, ContainerId containerId, 
RMContainerState containerState, int timeoutMsecs)
## move {{int loop = 0}} to the first for loop
## assertNotNull need not be guarded by if (container == null)
## Move the if check for (container == null) to one of the conditions of the 
following for (see the polling sketch after this list).
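
For illustration, a generic polling wait along these lines (the interval, timeout handling, and names are assumptions, not the actual MockRM code):

{code}
import java.util.function.Supplier;
import static org.junit.Assert.assertEquals;

final class WaitSketch {
  static <T> void waitForState(Supplier<T> current, T expected, long timeoutMs)
      throws InterruptedException {
    final long intervalMs = 100;  // short polls instead of long fixed sleeps
    for (long waited = 0;
         waited < timeoutMs && !expected.equals(current.get());
         waited += intervalMs) {
      Thread.sleep(intervalMs);
    }
    // Assert the state once, irrespective of why the loop exited.
    assertEquals(expected, current.get());
  }
}
{code}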

> MockAM#waitForState sleep duration is too long
> --
>
> Key: YARN-4807
> URL: https://issues.apache.org/jira/browse/YARN-4807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
>  Labels: newbie
> Attachments: YARN-4807.001.patch, YARN-4807.002.patch, 
> YARN-4807.003.patch, YARN-4807.004.patch, YARN-4807.005.patch, 
> YARN-4807.006.patch
>
>
> MockAM#waitForState sleep duration (500 ms) is too long. Also, there is 
> significant duplication with MockRM#waitForState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4910) Fix incomplete log info in ResourceLocalizationService

2016-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221874#comment-15221874
 ] 

Hadoop QA commented on YARN-4910:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_74 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 7s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_74. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 45s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 10s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12796541/YARN-4910.01.patch |
| JIRA Issue | YARN-4910 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4122ffa4c004 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-4274) NodeStatusUpdaterImpl should register to RM again after a non-fatal exception happen before

2016-04-01 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221871#comment-15221871
 ] 

Daniel Templeton commented on YARN-4274:


It looks to me like YARN-4288 should also have solved this JIRA.  [~djp], can 
you confirm?

> NodeStatusUpdaterImpl should register to RM again after a non-fatal exception 
> happen before
> ---
>
> Key: YARN-4274
> URL: https://issues.apache.org/jira/browse/YARN-4274
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Junping Du
>Assignee: Junping Du
>
> From YARN-3896, a non-fatal exception like a response ID mismatch between NM 
> and RM (due to a race condition) will cause the NM to stop working. I think we 
> should make it more robust, tolerating a few failures in registering to the RM 
> by retrying a few times.
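
As an illustrative sketch, a bounded retry around registration might look like this (the names and limits are assumptions, not the NodeStatusUpdaterImpl code):

{code}
// Hedged sketch of bounded re-registration on non-fatal failures;
// registerWithRM() is a hypothetical stand-in for the real registration call.
final class RegistrationRetrySketch {
  static void registerWithRetries(int maxRetries) throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        registerWithRM();
        return;  // registered successfully
      } catch (Exception nonFatal) {
        if (attempt >= maxRetries) {
          throw nonFatal;  // a few failures tolerated, then give up
        }
        Thread.sleep(1000L * attempt);  // simple linear back-off
      }
    }
  }

  private static void registerWithRM() throws Exception {
    // hypothetical: the actual RM registration RPC would go here
  }
}
{code}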



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4807) MockAM#waitForState sleep duration is too long

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221855#comment-15221855
 ] 

Karthik Kambatla commented on YARN-4807:


Taking a closer look...

> MockAM#waitForState sleep duration is too long
> --
>
> Key: YARN-4807
> URL: https://issues.apache.org/jira/browse/YARN-4807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Yufei Gu
>  Labels: newbie
> Attachments: YARN-4807.001.patch, YARN-4807.002.patch, 
> YARN-4807.003.patch, YARN-4807.004.patch, YARN-4807.005.patch, 
> YARN-4807.006.patch
>
>
> MockAM#waitForState sleep duration (500 ms) is too long. Also, there is 
> significant duplication with MockRM#waitForState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4895) Add subtractFrom method to ResourceUtilization class

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221851#comment-15221851
 ] 

Karthik Kambatla commented on YARN-4895:


+1

> Add subtractFrom method to ResourceUtilization class
> 
>
> Key: YARN-4895
> URL: https://issues.apache.org/jira/browse/YARN-4895
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4895.001.patch, YARN-4895.002.patch
>
>
> In ResourceUtilization class, there is already an addTo method. 
> For completeness, here we are adding the dual subtractFrom method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (YARN-2883) Queuing of container requests in the NM

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221849#comment-15221849
 ] 

Karthik Kambatla edited comment on YARN-2883 at 4/1/16 3:29 PM:


Discussed this briefly with [~kkaranasos]. Our opinions appear to differ on the 
scope of {{ContainersMonitorImpl}} - my opinion is this class should monitor 
only running containers and shouldn't be aware of containers that are not 
running yet or play a part in deciding when containers should start 
running. It appears [~kkaranasos] believes this class should also monitor when 
the non-running containers should start running. 

IMO, expanding the scope of {{ContainersMonitorImpl}} makes the code hard to 
follow as we have communication going both ways between 
{{ContainersMonitorImpl}} and {{ContainersManagerImpl}}. Also, 
{{ContainersMonitorImpl}} will then have state that needs to be persisted for a 
work-preserving NM restart. Would like to hear others' thoughts. 
[~chris.douglas], [~asuresh] - what do you guys think? /cc [~jlowe]


was (Author: kasha):
Discussed this briefly with [~kkaranasos]. Our opinions appear to differ on the 
scope of {{ContainersMonitorImpl}} - my opinion is this class should monitor 
only running containers and shouldn't be aware of containers that have not 
started running yet or play a part in deciding when containers should start 
running. It appears [~kkaranasos] believes this class should also monitor when 
the non-running containers should start running. 

IMO, expanding the scope of {{ContainersMonitorImpl}} makes the code hard to 
follow as we have communication going both ways between 
{{ContainersMonitorImpl}} and {{ContainersManagerImpl}}. Also, 
{{ContainersMonitorImpl}} will then have state that needs to be persisted for a 
work-preserving NM restart. Would like to hear others' thoughts. 
[~chris.douglas], [~asuresh] - what do you guys think? /cc [~jlowe]

> Queuing of container requests in the NM
> ---
>
> Key: YARN-2883
> URL: https://issues.apache.org/jira/browse/YARN-2883
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2883-trunk.004.patch, YARN-2883-trunk.005.patch, 
> YARN-2883-trunk.006.patch, YARN-2883-trunk.007.patch, 
> YARN-2883-trunk.008.patch, YARN-2883-yarn-2877.001.patch, 
> YARN-2883-yarn-2877.002.patch, YARN-2883-yarn-2877.003.patch, 
> YARN-2883-yarn-2877.004.patch
>
>
> We propose to add a queue in each NM, where queueable container requests can 
> be held.
> Based on the available resources in the node and the containers in the queue, 
> the NM will decide when to allow the execution of a queued container.
> In order to ensure the instantaneous start of a guaranteed-start container, 
> the NM may decide to pre-empt/kill running queueable containers.
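
A rough sketch of the queueing decision the proposal describes (class, field and method names here are hypothetical, not from any attached patch):

{code}
// Hypothetical sketch of the NM-side decision for an incoming container.
void onStartContainerRequest(Container container) {
  if (hasGuaranteedStart(container)) {
    // May kill queueable containers to free resources immediately.
    preemptQueueableContainersIfNeeded(container.getResource());
    startContainer(container);
  } else if (fitsInAvailableResources(container.getResource())) {
    startContainer(container);          // enough headroom: run right away
  } else {
    queuedContainers.add(container);    // otherwise hold it in the NM queue
  }
}
{code}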



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-2883) Queuing of container requests in the NM

2016-04-01 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221849#comment-15221849
 ] 

Karthik Kambatla commented on YARN-2883:


Discussed this briefly with [~kkaranasos]. Our opinions appear to differ on the 
scope of {{ContainersMonitorImpl}} - my opinion is this class should monitor 
only running containers and shouldn't be aware of containers that have not 
started running yet or play a part in deciding when containers should start 
running. It appears [~kkaranasos] believes this class should also monitor when 
the non-running containers should start running. 

IMO, expanding the scope of {{ContainersMonitorImpl}} makes the code hard to 
follow as we have communication going both ways between 
{{ContainersMonitorImpl}} and {{ContainersManagerImpl}}. Also, 
{{ContainersMonitorImpl}} will then have state that needs to be persisted for a 
work-preserving NM restart. Would like to hear others' thoughts. 
[~chris.douglas], [~asuresh] - what do you guys think? /cc [~jlowe]

> Queuing of container requests in the NM
> ---
>
> Key: YARN-2883
> URL: https://issues.apache.org/jira/browse/YARN-2883
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2883-trunk.004.patch, YARN-2883-trunk.005.patch, 
> YARN-2883-trunk.006.patch, YARN-2883-trunk.007.patch, 
> YARN-2883-trunk.008.patch, YARN-2883-yarn-2877.001.patch, 
> YARN-2883-yarn-2877.002.patch, YARN-2883-yarn-2877.003.patch, 
> YARN-2883-yarn-2877.004.patch
>
>
> We propose to add a queue in each NM, where queueable container requests can 
> be held.
> Based on the available resources in the node and the containers in the queue, 
> the NM will decide when to allow the execution of a queued container.
> In order to ensure the instantaneous start of a guaranteed-start container, 
> the NM may decide to pre-empt/kill running queueable containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4821) have a separate NM timeline publishing interval

2016-04-01 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221831#comment-15221831
 ] 

Naganarasimha G R commented on YARN-4821:
-

Thanks [~sjlee0] for detailing your idea. 
bq. More importantly, if one needs to modify the resource monitoring interval, 
he/she should be aware of the implication it would have on the timeline 
publishing, or it's easy to miss out that connection and make a mistake.
IMO the other approach/example you mentioned also requires the user to 
consider the relation between the resource monitoring interval and the 
publishing interval, as events are not published exactly at the publishing 
interval. 
So with the approach I have proposed it would be easy for the user to 
understand and configure, as events will be published at a multiple of the 
resource monitoring interval; per your example, the user just needs to 
configure "3". But I agree that if the user is not aware that timeline 
publishing is tied to the resource monitoring interval, they might miss 
reconfiguring it when changing the latter.

bq. We could also consider different intervals for CPU and memory, although one 
could argue that the YARN resource monitoring does not do that so we probably 
don't need to differentiate them. That's just my 2 cents
Yeah, I agree; to keep it simple we can keep them the same. But if, for 
example, the interval is 10 seconds, we might not get a good picture of the 
actual CPU utilization.


> have a separate NM timeline publishing interval
> ---
>
> Key: YARN-4821
> URL: https://issues.apache.org/jira/browse/YARN-4821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Naganarasimha G R
>  Labels: yarn-2928-1st-milestone
>
> Currently the interval with which NM publishes container CPU and memory 
> metrics is tied to {{yarn.nodemanager.resource-monitor.interval-ms}} whose 
> default is 3 seconds. This is too aggressive.
> There should be a separate configuration that controls how often 
> {{NMTimelinePublisher}} publishes container metrics.
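
A hypothetical sketch of the decoupling being proposed (the second property key below is illustrative, not an existing YARN key; {{publishContainerMetrics()}} and {{lastPublishMs}} are assumed names):

{code}
// Hypothetical sketch: publish on its own interval, decoupled from the
// resource-monitor sampling interval.
long monitorMs = conf.getLong(
    "yarn.nodemanager.resource-monitor.interval-ms", 3000);
long publishMs = conf.getLong(
    "yarn.nodemanager.container-metrics.publish-interval-ms", // illustrative key
    10 * monitorMs);
long now = Time.monotonicNow();
if (now - lastPublishMs >= publishMs) {
  publishContainerMetrics();   // assumed publish hook
  lastPublishMs = now;
}
{code}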



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4910) Fix incomplete log info in ResourceLocalizationService

2016-04-01 Thread Jun Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Gong updated YARN-4910:
---
Attachment: YARN-4910.01.patch

Attaching a simple patch to fix it.

> Fix incomplete log info in ResourceLocalizationService
> --
>
> Key: YARN-4910
> URL: https://issues.apache.org/jira/browse/YARN-4910
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jun Gong
>Assignee: Jun Gong
>Priority: Trivial
> Attachments: YARN-4910.01.patch
>
>
> While debugging, I found a lot of incomplete log messages from 
> ResourceLocalizationService, which is a little confusing.
> {quote}
> 2016-03-30 22:47:29,703 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
>  Writing credentials to the nmPrivate file 
> /data6/yarnenv/local/nmPrivate/container_1456839788316_4159_01_04_37.tokens.
>  Credentials list:
> {quote}
> The content of the credentials list is only printed at the DEBUG log level.
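
A sketch of the kind of fix involved (illustrative, not necessarily the attached patch; {{nmPrivateCTokensPath}} is an assumed variable name): keep the INFO line self-contained and emit the credentials-list header only when its content will actually be printed.

{code}
// Illustrative sketch: move the "Credentials list" header under the same
// DEBUG guard as its content so the INFO line no longer dangles.
LOG.info("Writing credentials to the nmPrivate file " + nmPrivateCTokensPath);
if (LOG.isDebugEnabled()) {
  LOG.debug("Credentials list: ");
  for (Token<? extends TokenIdentifier> tk : credentials.getAllTokens()) {
    LOG.debug(tk.getService() + " : " + tk);
  }
}
{code}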



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4910) Fix incomplete log info in ResourceLocalizationService

2016-04-01 Thread Jun Gong (JIRA)
Jun Gong created YARN-4910:
--

 Summary: Fix incomplete log info in ResourceLocalizationService
 Key: YARN-4910
 URL: https://issues.apache.org/jira/browse/YARN-4910
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Jun Gong
Assignee: Jun Gong
Priority: Trivial


While debugging, I found a lot of incomplete log messages from 
ResourceLocalizationService, which is a little confusing.
{quote}
2016-03-30 22:47:29,703 INFO 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService:
 Writing credentials to the nmPrivate file 
/data6/yarnenv/local/nmPrivate/container_1456839788316_4159_01_04_37.tokens.
 Credentials list:
{quote}
The content of the credentials list is only printed at the DEBUG log level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4746) yarn web services should convert parse failures of appId to 400

2016-04-01 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221777#comment-15221777
 ] 

Naganarasimha G R commented on YARN-4746:
-

Hi [~bibinchundatt],
   The overall approach seems fine, but it looks like [~varun_saxena]'s 
comment has not been addressed. Can you take a look at it?

> yarn web services should convert parse failures of appId to 400
> ---
>
> Key: YARN-4746
> URL: https://issues.apache.org/jira/browse/YARN-4746
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: 0001-YARN-4746.patch, 0002-YARN-4746.patch, 
> 0003-YARN-4746.patch, 0003-YARN-4746.patch, 0004-YARN-4746.patch
>
>
> I'm seeing somewhere in the WS API tests of mine an error with exception 
> conversion of a bad app ID sent in as an argument to a GET. I know it's in 
> ATS, but a scan of the core RM web services implies the same problem.
> {{WebServices.parseApplicationId()}} uses {{ConverterUtils.toApplicationId}} 
> to convert an argument; this throws IllegalArgumentException, which is then 
> handled somewhere by jetty as a 500 error.
> In fact, it's a bad argument, which should be handled by returning a 400. 
> This can be done by catching the raised exception and explicitly converting it.
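
A minimal sketch of the conversion described above (assuming the {{BadRequestException}}/{{NotFoundException}} helpers in org.apache.hadoop.yarn.webapp; not necessarily the attached patch):

{code}
// Sketch: map a malformed appId to HTTP 400 instead of letting the
// IllegalArgumentException surface as a 500.
private static ApplicationId parseApplicationId(String appId) {
  if (appId == null || appId.isEmpty()) {
    throw new NotFoundException("appId, " + appId + ", is empty or null");
  }
  try {
    return ConverterUtils.toApplicationId(appId);
  } catch (IllegalArgumentException e) {
    throw new BadRequestException("Invalid application id: " + appId);
  }
}
{code}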



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221761#comment-15221761
 ] 

Varun Vasudev edited comment on YARN-4857 at 4/1/16 2:31 PM:
-

[~lewuathe] - can you fix the checkstyle issues please?

[~leftnoteasy] - do you see any issue with moving the pre-emption configs into 
YarnConfiguration? Are they CapacityScheduler specific?


was (Author: vvasudev):
[~lewuathe] - can you fix the checkstyle issues please?

[~leftnoteasy] - do you see any issue with moving the pre-emption configs into 
YarnConfiguration?

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch, YARN-4857.02.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221761#comment-15221761
 ] 

Varun Vasudev commented on YARN-4857:
-

[~lewuathe] - can you fix the checkstyle issues please?

[~leftnoteasy] - do you see any issue with moving the pre-emption configs into 
YarnConfiguration?

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch, YARN-4857.02.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4906) Capture container start/finish time in container metrics

2016-04-01 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221758#comment-15221758
 ] 

Varun Vasudev commented on YARN-4906:
-

Thanks for the patch [~jianhe]. Couple of things -
# Do you think we should add a duration metric? For longer running containers, 
it would be a useful metric. You'll have to use the MonotonicClock for that 
though (a rough sketch follows this list).
# {code}
long unrgisterDelay = conf.getLong(
+  YarnConfiguration.NM_CONTAINER_METRICS_UNREGISTER_DELAY_MS,
+  YarnConfiguration.DEFAULT_NM_CONTAINER_METRICS_UNREGISTER_DELAY_MS);
{code}
Typo in the spelling of unregisterDelay
# The start time won't be preserved during a work preserving NM restart - this 
is also true for the other statistical metrics like the histograms, and the 
advanced actual resource usage metrics. I'm open to fixing this in a follow up 
JIRA.
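
Regarding the duration metric in point 1, a rough sketch using a monotonic clock (field and method names here are illustrative assumptions, not the patch):

{code}
// Illustrative sketch: measure container duration with a monotonic clock so
// wall-clock adjustments on the node cannot skew the metric.
private final MonotonicClock clock = new MonotonicClock();
private long launchTimeMs;

public void onContainerLaunch() {
  launchTimeMs = clock.getTime();
}

public void onContainerFinish() {
  long durationMs = clock.getTime() - launchTimeMs;
  recordDuration(durationMs);   // assumed hook into the container metrics
}
{code}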



> Capture container start/finish time in container metrics
> 
>
> Key: YARN-4906
> URL: https://issues.apache.org/jira/browse/YARN-4906
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-4906.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (YARN-4906) Capture container start/finish time in container metrics

2016-04-01 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221758#comment-15221758
 ] 

Varun Vasudev edited comment on YARN-4906 at 4/1/16 2:27 PM:
-

Thanks for the patch [~jianhe]. Few things -
# Do you think we should add a duration metric? For longer running containers, 
it would be a useful metric. You'll have to use the MonotonicClock for that 
though.
# {code}
long unrgisterDelay = conf.getLong(
+  YarnConfiguration.NM_CONTAINER_METRICS_UNREGISTER_DELAY_MS,
+  YarnConfiguration.DEFAULT_NM_CONTAINER_METRICS_UNREGISTER_DELAY_MS);
{code}
Typo in the spelling of unregisterDelay
# The start time won't be preserved during a work preserving NM restart - this 
is also true for the other statistical metrics like the histograms, and the 
advanced actual resource usage metrics. I'm open to fixing this in a follow up 
JIRA.




was (Author: vvasudev):
Thanks for the patch [~jianhe]. Couple of things -
# Do you think we should add a duration metric? For longer running containers, 
it would be a useful metric. You'll have to use the MonotonicClock for that 
though.
# {code}
long unrgisterDelay = conf.getLong(
+  YarnConfiguration.NM_CONTAINER_METRICS_UNREGISTER_DELAY_MS,
+  YarnConfiguration.DEFAULT_NM_CONTAINER_METRICS_UNREGISTER_DELAY_MS);
{code}
Typo in the spelling of unregisterDelay
# The start time won't be preserved during a work preserving NM restart - this 
is also true for the other statistical metrics like the histograms, and the 
advanced actual resource usage metrics. I'm open to fixing this in a follow up 
JIRA.



> Capture container start/finish time in container metrics
> 
>
> Key: YARN-4906
> URL: https://issues.apache.org/jira/browse/YARN-4906
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-4906.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4909) Fix intermittent failures of TestRMWebServices And TestRMWithCSRFFilter

2016-04-01 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created YARN-4909:
--

 Summary: Fix intermittent failures of TestRMWebServices And 
TestRMWithCSRFFilter
 Key: YARN-4909
 URL: https://issues.apache.org/jira/browse/YARN-4909
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Brahma Reddy Battula



 *Precommit link* 

https://builds.apache.org/job/PreCommit-YARN-Build/10908/testReport/
*Trace* 
{noformat}
com.sun.jersey.test.framework.spi.container.TestContainerException: 
java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:463)
at sun.nio.ch.Net.bind(Net.java:455)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.glassfish.grizzly.nio.transport.TCPNIOTransport.bind(TCPNIOTransport.java:413)
at 
org.glassfish.grizzly.nio.transport.TCPNIOTransport.bind(TCPNIOTransport.java:384)
at 
org.glassfish.grizzly.nio.transport.TCPNIOTransport.bind(TCPNIOTransport.java:375)
at 
org.glassfish.grizzly.http.server.NetworkListener.start(NetworkListener.java:549)
at 
org.glassfish.grizzly.http.server.HttpServer.start(HttpServer.java:255)
at 
com.sun.jersey.api.container.grizzly2.GrizzlyServerFactory.createHttpServer(GrizzlyServerFactory.java:326)
at 
com.sun.jersey.api.container.grizzly2.GrizzlyServerFactory.createHttpServer(GrizzlyServerFactory.java:343)
at 
com.sun.jersey.test.framework.spi.container.grizzly2.web.GrizzlyWebTestContainerFactory$GrizzlyWebTestContainer.instantiateGrizzlyWebServer(GrizzlyWebTestContainerFactory.java:219)
at 
com.sun.jersey.test.framework.spi.container.grizzly2.web.GrizzlyWebTestContainerFactory$GrizzlyWebTestContainer.<init>(GrizzlyWebTestContainerFactory.java:129)
at 
com.sun.jersey.test.framework.spi.container.grizzly2.web.GrizzlyWebTestContainerFactory$GrizzlyWebTestContainer.<init>(GrizzlyWebTestContainerFactory.java:86)
at 
com.sun.jersey.test.framework.spi.container.grizzly2.web.GrizzlyWebTestContainerFactory.create(GrizzlyWebTestContainerFactory.java:79)
at 
com.sun.jersey.test.framework.JerseyTest.getContainer(JerseyTest.java:342)
at com.sun.jersey.test.framework.JerseyTest.<init>(JerseyTest.java:217)
at 
org.apache.hadoop.yarn.webapp.JerseyTestBase.<init>(JerseyTestBase.java:30)
at 
org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.<init>(TestRMWebServices.java:125)
{noformat}
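
One common remedy for this class of failure, sketched here for reference (a general pattern, not necessarily the fix this JIRA will take): let the OS pick a free ephemeral port instead of hard-coding one.

{code}
import java.io.IOException;
import java.net.ServerSocket;

// Illustrative sketch: ask the OS for a free ephemeral port so concurrent
// test runs cannot collide on a hard-coded one.
static int findFreePort() throws IOException {
  try (ServerSocket socket = new ServerSocket(0)) {
    socket.setReuseAddress(true);
    return socket.getLocalPort();
  }
}
{code}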



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4893) Fix some intermittent test failures in TestRMAdminService

2016-04-01 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221727#comment-15221727
 ] 

Brahma Reddy Battula commented on YARN-4893:


OK. Raised YARN-4909.

> Fix some intermittent test failures in TestRMAdminService
> -
>
> Key: YARN-4893
> URL: https://issues.apache.org/jira/browse/YARN-4893
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: YARN-4893-002.patch, YARN-4893-003.patch, YARN-4893.patch
>
>
> As discussed in YARN-998, we need to add rm.drainEvents() after 
> rm.registerNode(), or some of the tests could fail intermittently. Also, we 
> can consider adding rm.drainEvents() within rm.registerNode(), which could 
> be more convenient.
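
A minimal sketch of the suggested pattern, using the MockRM calls named in the description (the node address and memory are arbitrary):

{code}
// Sketch: drain the async dispatcher after node registration so subsequent
// assertions do not race with in-flight events.
MockRM rm = new MockRM(conf);
rm.start();
rm.registerNode("127.0.0.1:1234", 8192);
rm.drainEvents();   // flush pending events before the test asserts state
{code}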



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-3773) hadoop-yarn-server-nodemanager's use of Linux /sbin/tc is non-portable

2016-04-01 Thread Alan Burlison (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221697#comment-15221697
 ] 

Alan Burlison commented on YARN-3773:
-

I hadn't drilled down so I wasn't sure if it did/didn't get executed elsewhere. 
In any case, equivalent functionality is needed on other platforms.

> hadoop-yarn-server-nodemanager's use of Linux /sbin/tc is non-portable
> --
>
> Key: YARN-3773
> URL: https://issues.apache.org/jira/browse/YARN-3773
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
> Environment: BSD OSX Solaris Windows Linux
>Reporter: Alan Burlison
>Assignee: Alan Burlison
>
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
>  makes use of the Linux-only executable /sbin/tc 
> (http://lartc.org/manpages/tc.txt)  but there is no corresponding 
> functionality for non-Linux platforms. The code in question also seems to try 
> to execute tc even on platforms where it will never exist.
> Other platforms provide similar functionality, e.g. Solaris has an extensive 
> range of network management features 
> (http://www.oracle.com/technetwork/articles/servers-storage-admin/o11-095-s11-app-traffic-525038.html).
>  Work is needed to abstract the network management features of YARN so that 
> the same facilities for network management can be provided on all platforms 
> that provide the requisite functionality.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4893) Fix some intermittent test failures in TestRMAdminService

2016-04-01 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221693#comment-15221693
 ] 

Junping Du commented on YARN-4893:
--

bq. May be it's fine to kick Jenkins and a clean run again. Junping Du, is that 
fine?
Sure. I just kicked off the Jenkins test again on this.

bq. If you want to me raise seperate issue for TestRMWebServices and 
TestRMWithCSRFFilter, will raise.
That would be nice. We need to track these intermittent failures as well; 
otherwise, we will never get a chance to fix them.

> Fix some intermittent test failures in TestRMAdminService
> -
>
> Key: YARN-4893
> URL: https://issues.apache.org/jira/browse/YARN-4893
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Brahma Reddy Battula
>Priority: Blocker
> Attachments: YARN-4893-002.patch, YARN-4893-003.patch, YARN-4893.patch
>
>
> As discussed in YARN-998, we need to add rm.drainEvents() after 
> rm.registerNode(), or some of the tests could fail intermittently. Also, we 
> can consider adding rm.drainEvents() within rm.registerNode(), which could 
> be more convenient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221646#comment-15221646
 ] 

Hadoop QA commented on YARN-4857:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 52s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 0s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 34s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 7 new + 
263 unchanged - 8 fixed = 270 total (was 271) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 50s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 42s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s 
{color} | 

[jira] [Commented] (YARN-4895) Add subtractFrom method to ResourceUtilization class

2016-04-01 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221550#comment-15221550
 ] 

Varun Vasudev commented on YARN-4895:
-

The latest patch looks fine to me. Please go ahead and commit it.

> Add subtractFrom method to ResourceUtilization class
> 
>
> Key: YARN-4895
> URL: https://issues.apache.org/jira/browse/YARN-4895
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4895.001.patch, YARN-4895.002.patch
>
>
> In ResourceUtilization class, there is already an addTo method. 
> For completeness, here we are adding the dual subtractFrom method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (YARN-4908) Move preemption configurations of CapacityScheduler to YarnConfiguration

2016-04-01 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki resolved YARN-4908.
--
  Resolution: Duplicate
Release Note: Since YARN-4857 is reopened.

> Move preemption configurations of CapacityScheduler to YarnConfiguration
> 
>
> Key: YARN-4908
> URL: https://issues.apache.org/jira/browse/YARN-4908
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>
> In YARN-4857, preemption configurations of CapacitySchedulers are written in 
> yarn-default.xml. Since TestYarnConfigurationFields checks the fields in 
> yarn-default.xml and YarnConfiguration, we need to move them to 
> YarnConfiguration as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated YARN-4857:
-
Attachment: YARN-4857.02.patch

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch, YARN-4857.02.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221312#comment-15221312
 ] 

Hudson commented on YARN-4857:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9540 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9540/])
Revert "YARN-4857. Add missing default configuration regarding (vvasudev: rev 
3488c4f2c9767684eb1007bb00250f474c06d5d8)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml


> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221282#comment-15221282
 ] 

Varun Vasudev commented on YARN-4857:
-

Pushed revert to trunk and branch-2. Confirmed that TestYarnConfiguration 
passes after revert.

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4908) Move preemption configurations of CapacityScheduler to YarnConfiguration

2016-04-01 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4908?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221277#comment-15221277
 ] 

Sunil G commented on YARN-4908:
---

I think it's better suited to the CS xml, so the test case can be properly 
changed. /cc [~leftnoteasy] 

> Move preemption configurations of CapacityScheduler to YarnConfiguration
> 
>
> Key: YARN-4908
> URL: https://issues.apache.org/jira/browse/YARN-4908
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>
> In YARN-4857, preemption configurations of CapacitySchedulers are written in 
> yarn-default.xml. Since TestYarnConfigurationFields checks the fields in 
> yarn-default.xml and YarnConfiguration, we need to move them to 
> YarnConfiguration as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev reopened YARN-4857:
-

Re-opening issue due to failing unit tests.

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221273#comment-15221273
 ] 

Kai Sasaki commented on YARN-4857:
--

Sure, I'll do that accordingly.

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221271#comment-15221271
 ] 

Varun Vasudev commented on YARN-4857:
-

I'm fine with that. [~lewuathe] - can you upload a new patch to this ticket 
with the yarn-default.xml changes and the refactoring to move the variables 
into YarnConfiguration?

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221265#comment-15221265
 ] 

Brahma Reddy Battula commented on YARN-4857:


IMO, can we revert this and update the patch here?

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (YARN-4908) Move preemption configurations of CapacityScheduler to YarnConfiguration

2016-04-01 Thread Kai Sasaki (JIRA)
Kai Sasaki created YARN-4908:


 Summary: Move preemption configurations of CapacityScheduler to 
YarnConfiguration
 Key: YARN-4908
 URL: https://issues.apache.org/jira/browse/YARN-4908
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Kai Sasaki
Assignee: Kai Sasaki
Priority: Minor


In YARN-4857, preemption configurations of CapacitySchedulers are written in 
yarn-default.xml. Since TestYarnConfigurationFields checks the fields in 
yarn-default.xml and YarnConfiguration, we need to move them to 
YarnConfiguration as well.
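
For illustration, the kind of constant pair that would move into YarnConfiguration (the names and default below are examples based on the existing yarn.resourcemanager.monitor.capacity.preemption.* keys; the actual patch may differ):

{code}
// Illustrative sketch: a preemption key and its default defined as constants.
public static final String PREEMPTION_MONITORING_INTERVAL =
    "yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval";
public static final long DEFAULT_PREEMPTION_MONITORING_INTERVAL = 3000L;
{code}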



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221263#comment-15221263
 ] 

Brahma Reddy Battula commented on YARN-4857:


Yes

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221256#comment-15221256
 ] 

Kai Sasaki commented on YARN-4857:
--

Sure, I'll do that. Thanks [~brahmareddy] and [~vvasudev].

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221249#comment-15221249
 ] 

Varun Vasudev edited comment on YARN-4857 at 4/1/16 6:39 AM:
-

Thanks for pointing out the failure [~brahmareddy]. My apologies for not 
catching it.

[~lewuathe] - let's move them to YarnConfiguration - no need for them to be 
embedded in the class. Can you please file a new JIRA for that? Thanks!


was (Author: vvasudev):
[~lewuathe] - let's move them to YarnConfiguration - no need for them to be 
embedded in the class. Can you please file a new JIRA for that? Thanks!

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221249#comment-15221249
 ] 

Varun Vasudev commented on YARN-4857:
-

[~lewuathe] - let's move them to YarnConfiguration - no need for them to be 
embedded in the class. Can you please file a new JIRA for that? Thanks!

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4857) Add missing default configuration regarding preemption of CapacityScheduler

2016-04-01 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221230#comment-15221230
 ] 

Kai Sasaki commented on YARN-4857:
--

[~brahmareddy] Thanks for letting me know. Should we move these configurations 
to {{YarnConfiguration}}?

> Add missing default configuration regarding preemption of CapacityScheduler
> ---
>
> Key: YARN-4857
> URL: https://issues.apache.org/jira/browse/YARN-4857
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacity scheduler, documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: documentaion
> Fix For: 2.9.0
>
> Attachments: YARN-4857.01.patch
>
>
> {{yarn.resourcemanager.monitor.*}} configurations are missing in 
> yarn-default.xml. Since they were documented explicitly by YARN-4492, 
> yarn-default.xml can be modified in the same way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)