[jira] [Commented] (YARN-5063) Fail to launch AM continuously on a lost NM

2016-05-09 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277664#comment-15277664
 ] 

Rohith Sharma K S commented on YARN-5063:
-

Thanks for clarifying my doubts. I think this is a very good scenario for AM 
node blacklisting to consider. cc: [~sunilg] [~vvasudev] [~vinodkv] so they 
are also aware of this scenario. As I said, there are design-level issues in 
YARN-2005, so we need to wait for a proper solution. For example: a node is 
unreachable, so that node is blacklisted. Since the other nodes are busy, the 
same node comes back (re-registers), but the re-registered node is not 
considered for allocation. See YARN-4685.

> Fail to launch AM continuously on a lost NM
> ---
>
> Key: YARN-5063
> URL: https://issues.apache.org/jira/browse/YARN-5063
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Jun Gong
>Assignee: Jun Gong
>
> If a NM node shuts down, the RM will not mark it as LOST until the liveness 
> monitor finds it has timed out. Before that, however, the RM might 
> continuously allocate the AM on that NM.
> We found this case in our cluster: the RM continuously allocated the same AM 
> on a lost NM before the RM noticed it was lost, and AMLauncher always failed 
> because it could not connect to the lost NM. To solve the problem, we could 
> add the NM to the AM blacklist if the RM fails to launch the AM on it.






[jira] [Commented] (YARN-5063) Fail to launch AM continuously on a lost NM

2016-05-09 Thread Jun Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277648#comment-15277648
 ] 

Jun Gong commented on YARN-5063:


Thanks [~rohithsharma] for looking into the issue.

{quote}
Is the scheduler enabled with async scheduling mode? Normally, allocation 
happens when a node heartbeat is received. If a node is shut down, it does not 
send heartbeats. I am wondering how the RM can allocate a container to the 
same node when the NM is shut down, given that async scheduling mode is not 
enabled. Am I missing any critical point here?
{quote}
Yes, we use FairScheduler with continuousScheduling enabled.
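
(For reference, continuous scheduling is the mode that lets the RM assign 
containers from a scheduling thread rather than only on node heartbeats. A 
minimal sketch of enabling it, assuming the 2.x-era property name:)
{noformat}
// Minimal sketch: turn on FairScheduler continuous scheduling, which allocates
// containers from a scheduling thread instead of only on node heartbeats.
// The property name is assumed from 2.x-era FairScheduler configuration.
import org.apache.hadoop.conf.Configuration;

public class EnableContinuousScheduling {
  public static Configuration conf() {
    Configuration conf = new Configuration();
    conf.setBoolean("yarn.scheduler.fair.continuous-scheduling-enabled", true);
    return conf;
  }
}
{noformat}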

{quote}
What is the reason for the launch failure? YARN-2005 provides support for 
blacklisting nodes when scheduling AMs, but it has a design-level issue which 
could cause problems like YARN-4685.
{quote}
AMLauncher failed to launch the AM because the NM had already shut down and 
it could not connect to it; the RMAppAttempt's state then transitioned from 
*RMAppAttemptState.ALLOCATED* to *RMAppAttemptState.FINAL_SAVING* upon 
receiving the event *RMAppAttemptEventType.LAUNCH_FAILED*. However, we are 
currently considering adding the NM to the blacklist only for cases where the 
RMAppAttempt's state transitions from *RMAppAttemptState.RUNNING* to 
*RMAppAttemptState.FINAL_SAVING*.
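
A minimal sketch of the proposed direction, with illustrative names only (the 
blacklist interface and the hook are assumptions, not the eventual patch):
{noformat}
// Illustrative sketch of the proposal: on LAUNCH_FAILED, record the AM
// container's node in a per-app blacklist so the next attempt avoids it.
interface AmBlacklist {                       // hypothetical per-app blacklist
  void addNode(String host);
}

class LaunchFailedHandler {
  private final AmBlacklist blacklist;

  LaunchFailedHandler(AmBlacklist blacklist) {
    this.blacklist = blacklist;
  }

  /** Called when AMLauncher cannot connect to the NM hosting the AM. */
  void onLaunchFailed(String amNodeHost) {
    blacklist.addNode(amNodeHost);  // skip this NM for the retried attempt
  }
}
{noformat}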

> Fail to launch AM continuously on a lost NM
> ---
>
> Key: YARN-5063
> URL: https://issues.apache.org/jira/browse/YARN-5063
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Jun Gong
>Assignee: Jun Gong
>
> If a NM node shuts down, the RM will not mark it as LOST until the liveness 
> monitor finds it has timed out. Before that, however, the RM might 
> continuously allocate the AM on that NM.
> We found this case in our cluster: the RM continuously allocated the same AM 
> on a lost NM before the RM noticed it was lost, and AMLauncher always failed 
> because it could not connect to the lost NM. To solve the problem, we could 
> add the NM to the AM blacklist if the RM fails to launch the AM on it.






[jira] [Commented] (YARN-4842) "yarn logs" command should not require the appOwner argument

2016-05-09 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277644#comment-15277644
 ] 

Vinod Kumar Vavilapalli commented on YARN-4842:
---

Same as before, the findbugs and unit test issues are unrelated - I will make 
sure JIRAs exist for these; they have become too regular now. The checkstyle 
warnings are spurious.

Checking this in.

> "yarn logs" command should not require the appOwner argument
> 
>
> Key: YARN-4842
> URL: https://issues.apache.org/jira/browse/YARN-4842
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ram Venkatesh
>Assignee: Xuan Gong
> Attachments: YARN-4842.1.patch, YARN-4842.2.patch, YARN-4842.3.patch, 
> YARN-4842.4.patch, YARN-4842.5.patch, YARN-4842.6.patch
>
>
> The yarn logs command is among the most common ways to troubleshoot YARN 
> app failures, especially for admins.
> Currently, if you run the command as a user different from the job owner, 
> the command fails with a subtle message saying it could not find the app 
> under the running user's name. This can be confusing, especially to new 
> admins.
> We can figure out the job owner from the app report returned by the RM or 
> the AHS, or by looking for the app directory using a glob pattern, so in 
> most cases this error can be avoided.
> Question - are there scenarios where users will still need to specify the 
> -appOwner option?
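
A rough sketch of the glob-based lookup idea, assuming the conventional 
{remoteRoot}/{user}/logs/{appId} remote log layout (the helper and the layout 
here are illustrative, not the patch):
{noformat}
// Sketch: infer the app owner by globbing across users' remote log dirs,
// assuming a {remoteRoot}/{user}/logs/{appId} layout. Illustrative only.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppOwnerLookup {
  /** Returns the owner whose log dir contains appId, or null if none found. */
  public static String findAppOwner(Configuration conf, String remoteRoot,
      String appId) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    FileStatus[] matches =
        fs.globStatus(new Path(remoteRoot + "/*/logs/" + appId));
    if (matches == null || matches.length == 0) {
      return null;  // fall back to requiring -appOwner
    }
    // The user is the path component two levels above the app directory.
    return matches[0].getPath().getParent().getParent().getName();
  }
}
{noformat}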






[jira] [Commented] (YARN-5047) Refactor nodeUpdate() from FairScheduler and CapacityScheduler

2016-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277643#comment-15277643
 ] 

Hadoop QA commented on YARN-5047:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
31s {color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
39s {color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 23s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 50s 
{color} | {color:red} root: patch generated 1 new + 321 unchanged - 15 fixed = 
322 total (was 336) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 46s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 51s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 9s 
{color} | {color:green} hadoop-sls in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 34s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 4s 
{color} | {color:green} hadoop-sls in the patch passed with JDK v1.7.0_95. 
{color} |
| 

[jira] [Commented] (YARN-5049) Extend NMStateStore to save queued container information

2016-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277628#comment-15277628
 ] 

Hadoop QA commented on YARN-5049:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 patch generated 1 new + 143 unchanged - 0 fixed = 144 total (was 143) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 48s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 42s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803143/YARN-5049.001.patch |
| JIRA Issue | YARN-5049 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 48bc4d47f798 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| 

[jira] [Commented] (YARN-5063) Fail to launch AM continuously on a lost NM

2016-05-09 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277602#comment-15277602
 ] 

Rohith Sharma K S commented on YARN-5063:
-

bq. If a NM node shuts down, the RM will not mark it as LOST until the 
liveness monitor finds it has timed out. Before that, however, the RM might 
continuously allocate the AM on that NM.
Is the scheduler enabled with async scheduling mode? Normally, allocation 
happens when a node heartbeat is received. If a node is shut down, it does not 
send heartbeats. I am wondering how the RM can allocate a container to the 
same node when the NM is shut down, given that async scheduling mode is not 
enabled. Am I missing any critical point here?

bq. we could add the NM to the AM blacklist if the RM fails to launch the AM on it.
What is the reason for the launch failure? YARN-2005 provides support for 
blacklisting nodes when scheduling AMs, but it has a design-level issue which 
could cause problems like YARN-4685.

> Fail to launch AM continuously on a lost NM
> ---
>
> Key: YARN-5063
> URL: https://issues.apache.org/jira/browse/YARN-5063
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Jun Gong
>Assignee: Jun Gong
>
> If a NM node shuts down, the RM will not mark it as LOST until the liveness 
> monitor finds it has timed out. Before that, however, the RM might 
> continuously allocate the AM on that NM.
> We found this case in our cluster: the RM continuously allocated the same AM 
> on a lost NM before the RM noticed it was lost, and AMLauncher always failed 
> because it could not connect to the lost NM. To solve the problem, we could 
> add the NM to the AM blacklist if the RM fails to launch the AM on it.






[jira] [Commented] (YARN-4842) "yarn logs" command should not require the appOwner argument

2016-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277586#comment-15277586
 ] 

Hadoop QA commented on YARN-4842:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 30s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 51s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 3 new + 
77 unchanged - 17 fixed = 80 total (was 94) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 40s {color} 
| {color:red} hadoop-yarn-common in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 29s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 4s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 15s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | 

[jira] [Updated] (YARN-4577) Enable aux services to have their own custom classpath/jar file

2016-05-09 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-4577:

Attachment: YARN-4577.20160509.patch

Attached a new patch with a test case.

> Enable aux services to have their own custom classpath/jar file
> ---
>
> Key: YARN-4577
> URL: https://issues.apache.org/jira/browse/YARN-4577
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4577.1.patch, YARN-4577.2.patch, 
> YARN-4577.20160119.1.patch, YARN-4577.20160204.patch, 
> YARN-4577.20160428.patch, YARN-4577.20160509.patch, YARN-4577.3.patch, 
> YARN-4577.3.rebase.patch, YARN-4577.4.patch, YARN-4577.5.patch, 
> YARN-4577.poc.patch
>
>
> Right now, users have to add their jars to the NM classpath directly, thus 
> putting them on the system classloader. If multiple versions of the plugin 
> are present on the classpath, there is no control over which version 
> actually gets loaded. And if there are any conflicts between the 
> dependencies introduced by the auxiliary service and the NM itself, they can 
> break the NM, the auxiliary service, or both.
> A solution could be to instantiate aux services using a classloader that is 
> different from the system classloader.
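
A minimal sketch of the classloader isolation idea (not the patch itself; note 
that full isolation would likely need child-first delegation along the lines 
of Hadoop's ApplicationClassLoader):
{noformat}
// Sketch: load an aux service from its own jar via a dedicated classloader so
// its dependencies do not land on the NM's system classpath. A plain
// URLClassLoader still delegates to the parent first, so this only isolates
// classes that are absent from the NM classpath.
import java.net.URL;
import java.net.URLClassLoader;

public class AuxServiceLoader {
  /** Instantiate className from auxJar using a separate classloader. */
  public static Object load(URL auxJar, String className) throws Exception {
    ClassLoader auxLoader = new URLClassLoader(
        new URL[] { auxJar }, AuxServiceLoader.class.getClassLoader());
    Class<?> clazz = Class.forName(className, true, auxLoader);
    return clazz.getDeclaredConstructor().newInstance();
  }
}
{noformat}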






[jira] [Commented] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277555#comment-15277555
 ] 

Hadoop QA commented on YARN-4994:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 45s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 50s {color} 
| {color:red} root-jdk1.8.0_91 with JDK v1.8.0_91 generated 3 new + 660 
unchanged - 3 fixed = 663 total (was 663) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 4s {color} 
| {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 3 new + 669 
unchanged - 3 fixed = 672 total (was 672) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 27s 
{color} | {color:red} root: patch generated 5 new + 208 unchanged - 4 fixed = 
213 total (was 212) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 26s {color} 
| {color:red} hadoop-yarn-server-tests in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 15s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 9s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed with 
JDK v1.8.0_91. {color} |
| {color:green}+1{color} 

[jira] [Updated] (YARN-4738) Notify the RM about the status of OPPORTUNISTIC containers

2016-05-09 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-4738:
-
Attachment: YARN-4738-yarn-2877.002.patch

New version of the patch.
Test cases to be added.

> Notify the RM about the status of OPPORTUNISTIC containers
> --
>
> Key: YARN-4738
> URL: https://issues.apache.org/jira/browse/YARN-4738
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-4738-yarn-2877.001.patch, 
> YARN-4738-yarn-2877.002.patch
>
>
> When an OPPORTUNISTIC container finishes its execution (either successfully 
> or because it failed/got killed), the RM needs to be notified.
> This way the AM also gets notified in turn about the successfully completed 
> tasks, as well as for rescheduling failed/killed tasks.






[jira] [Updated] (YARN-5049) Extend NMStateStore to save queued container information

2016-05-09 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5049:
-
Attachment: YARN-5049.001.patch

Adding patch for this JIRA. 
Will add test cases shortly.

> Extend NMStateStore to save queued container information
> 
>
> Key: YARN-5049
> URL: https://issues.apache.org/jira/browse/YARN-5049
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5049.001.patch
>
>
> This JIRA is about extending the NMStateStore to save queued container 
> information whenever a new container is added to the NM queue. 
> It also removes the information from the state store when the queued 
> container starts its execution.
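
A hypothetical shape for the extension (method names are assumptions, with an 
in-memory map standing in for the NM's leveldb-backed store):
{noformat}
// Hypothetical sketch: persist a queued container's start request when it is
// added to the NM queue, and delete the record when it starts executing. The
// map stands in for the leveldb-backed NM state store.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class QueuedContainerStore {
  private final Map<String, byte[]> store = new ConcurrentHashMap<>();

  /** Save the serialized start request when the container is queued. */
  void storeQueuedContainer(String containerId, byte[] startRequest) {
    store.put(containerId, startRequest);
  }

  /** Remove the record once the queued container begins execution. */
  void removeQueuedContainer(String containerId) {
    store.remove(containerId);
  }

  /** On NM recovery, the surviving entries are the still-queued containers. */
  Map<String, byte[]> recoverQueuedContainers() {
    return new ConcurrentHashMap<>(store);
  }
}
{noformat}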






[jira] [Commented] (YARN-5053) More informative diagnostics when applications killed by a user

2016-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277486#comment-15277486
 ] 

Hadoop QA commented on YARN-5053:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 19s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 1 new + 33 unchanged - 0 fixed = 34 total (was 33) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 29m 26s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 30m 33s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 25s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestContainerResourceUsage |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
| JDK v1.7.0_95 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | 

[jira] [Commented] (YARN-5005) TestRMWebServices#testDumpingSchedulerLogs fails randomly

2016-05-09 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277456#comment-15277456
 ] 

Bibin A Chundatt commented on YARN-5005:


[~rohithsharma]
Could you help review this?

> TestRMWebServices#testDumpingSchedulerLogs fails randomly
> -
>
> Key: YARN-5005
> URL: https://issues.apache.org/jira/browse/YARN-5005
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-5005.patch
>
>
> {noformat}
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Appender is already 
> dumping logs
>   at 
> org.apache.hadoop.yarn.util.AdHocLogDumper.dumpLogs(AdHocLogDumper.java:65)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices.dumpSchedulerLogs(RMWebServices.java:321)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServices.testDumpingSchedulerLogs(TestRMWebServices.java:674)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>   at java.lang.reflect.Method.invoke(Unknown Source)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
>   at 
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:675)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
>   at 
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
> {noformat}
> First, dumpSchedulerLogs is invoked to dump logs for 1 second:
> {noformat}
> webSvc.dumpSchedulerLogs("1", mockHsr);
> Thread.sleep(1000);
> {noformat}
> sleep(1000) is used to wait for completion, but occasionally during the test 
> run the log dump is invoked again within that 1 second, which triggers the 
> "Appender is already dumping logs" exception above.
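
One way to make this deterministic is to poll for completion with a bounded 
wait instead of racing a fixed sleep. A self-contained sketch; 
{{isDumpComplete()}} below is a hypothetical hook on the dumper, so the real 
fix may look different:
{noformat}
// Sketch: replace the fixed Thread.sleep(1000) with a bounded poll so the
// second dumpSchedulerLogs() call only runs after the first dump finishes.
// The completion check is a hypothetical hook; the actual fix may differ.
final class WaitUtil {
  interface Check {
    boolean ready();
  }

  static void waitFor(Check check, long intervalMs, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!check.ready()) {
      if (System.currentTimeMillis() > deadline) {
        throw new IllegalStateException("timed out waiting for log dump");
      }
      Thread.sleep(intervalMs);
    }
  }
}
{noformat}
In the test, the sleep would then become something like 
{{WaitUtil.waitFor(() -> dumper.isDumpComplete(), 100, 5000)}}.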






[jira] [Commented] (YARN-5024) TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers random failure

2016-05-09 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5024?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277455#comment-15277455
 ] 

Bibin A Chundatt commented on YARN-5024:


IIUC, in the case of completed containers MockRM will always wait until the 
timeout, because {{getResourceScheduler().getRMContainer(containerId)}} is 
{{null}} in
{noformat}
waitForState(MockNM nm, ContainerId containerId,
RMContainerState containerState)
{noformat}
Please correct me if I am wrong.
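
If that reading is correct, one illustrative fix (not MockRM itself) is to 
treat a null RMContainer as success when the expected state is terminal, 
rather than spinning until the timeout:
{noformat}
// Illustrative sketch: when the expected state is terminal (e.g. COMPLETED),
// a null RMContainer means the scheduler already released the container, so
// the wait should succeed instead of looping until the timeout.
final class ContainerWait {
  interface Scheduler {
    /** Returns the container's state, or null if the scheduler dropped it. */
    String getContainerState(String containerId);
  }

  static boolean waitForState(Scheduler sched, String containerId,
      String expected, boolean expectedIsTerminal, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      String state = sched.getContainerState(containerId);
      if (state == null) {
        return expectedIsTerminal;  // already cleaned up: ok if terminal
      }
      if (expected.equals(state)) {
        return true;
      }
      Thread.sleep(100);
    }
    return false;
  }
}
{noformat}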


> TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers 
> random failure
> ---
>
> Key: YARN-5024
> URL: https://issues.apache.org/jira/browse/YARN-5024
> Project: Hadoop YARN
>  Issue Type: Test
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: 0001-YARN-5024.patch, 0002-YARN-5024.patch, 
> 0003-YARN-5024.patch
>
>
> Random Testcase failure for 
> {{TestContainerResourceUsage#testUsageAfterAMRestartWithMultipleContainers}}
> {noformat}
> java.lang.AssertionError: Unexcpected MemorySeconds value 
> expected:<-1497214794931> but was:<1913>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.amRestartTests(TestContainerResourceUsage.java:395)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.TestContainerResourceUsage.testUsageAfterAMRestartWithMultipleContainers(TestContainerResourceUsage.java:252)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
> {noformat}






[jira] [Commented] (YARN-4899) Queue metrics of SLS capacity scheduler only activated after app submit to the queue

2016-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277452#comment-15277452
 ] 

Hadoop QA commented on YARN-4899:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-tools/hadoop-sls: patch generated 5 new + 98 
unchanged - 0 fixed = 103 total (was 98) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 9s 
{color} | {color:green} hadoop-sls in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s 
{color} | {color:green} hadoop-sls in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:cf2ee45 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803092/YARN-4899.1.patch |
| JIRA Issue | YARN-4899 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4a0e70eaf064 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-5064) move the shell code out of hadoop-yarn

2016-05-09 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277440#comment-15277440
 ] 

Vinod Kumar Vavilapalli commented on YARN-5064:
---

I see, hadoop-yarn-common seems like the better solution then. But either way is fine.

> move the shell code out of hadoop-yarn
> --
>
> Key: YARN-5064
> URL: https://issues.apache.org/jira/browse/YARN-5064
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: scripts, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> We need to move the shell code out of hadoop-yarn so that we can properly 
> build test infrastructure for it. 






[jira] [Commented] (YARN-4900) SLS MRAMSimulator should include scheduledMappers/Reducers when re-request failed tasks

2016-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277437#comment-15277437
 ] 

Hadoop QA commented on YARN-4900:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s {color} 
| {color:red} YARN-4900 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12796173/YARN-4900.1.patch |
| JIRA Issue | YARN-4900 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/11387/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> SLS MRAMSimulator should include scheduledMappers/Reducers when re-request 
> failed tasks
> ---
>
> Key: YARN-4900
> URL: https://issues.apache.org/jira/browse/YARN-4900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4900.1.patch
>
>







[jira] [Updated] (YARN-5047) Refactor nodeUpdate() from FairScheduler and CapacityScheduler

2016-05-09 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-5047:
-
Attachment: YARN-5047.001.patch

- First attempt at refactoring

> Refactor nodeUpdate() from FairScheduler and CapacityScheduler
> --
>
> Key: YARN-5047
> URL: https://issues.apache.org/jira/browse/YARN-5047
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, fairscheduler, scheduler
>Affects Versions: 3.0.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5047.001.patch
>
>
> FairScheduler#nodeUpdate() and CapacityScheduler#nodeUpdate() have a lot of 
> code in common. Consider refactoring the common parts into 
> AbstractYarnScheduler.
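
A hedged sketch of the direction (method names illustrative, not the attached 
patch): hoist the shared heartbeat handling into the base class and leave 
placement to a scheduler-specific hook.
{noformat}
// Illustrative sketch of the refactoring: the shared nodeUpdate() flow lives
// in the abstract base; each scheduler overrides only the allocation step.
abstract class AbstractSchedulerSketch {
  /** Common heartbeat handling shared by Fair and Capacity schedulers. */
  final void nodeUpdate(NodeHeartbeat hb) {
    updateNodeResource(hb);          // shared: refresh tracked node resources
    releaseCompletedContainers(hb);  // shared: clean up finished containers
    attemptScheduling(hb);           // scheduler-specific placement policy
  }

  void updateNodeResource(NodeHeartbeat hb) { /* shared bookkeeping */ }

  void releaseCompletedContainers(NodeHeartbeat hb) { /* shared bookkeeping */ }

  /** Each concrete scheduler plugs in its own allocation logic. */
  abstract void attemptScheduling(NodeHeartbeat hb);
}

/** Minimal stand-in for the node heartbeat payload. */
class NodeHeartbeat { }
{noformat}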






[jira] [Commented] (YARN-4844) Add getMemoryLong/getVirtualCoreLong to o.a.h.y.api.records.Resource

2016-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277325#comment-15277325
 ] 

Hadoop QA commented on YARN-4844:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 55 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 52s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
46s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 12s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 43s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common in 
trunk has 3 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 2s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 2m 20s 
{color} | {color:red} root: patch generated 192 new + 4352 unchanged - 117 
fixed = 4544 total (was 4469) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
45s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 30s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 4s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | 

[jira] [Commented] (YARN-4900) SLS MRAMSimulator should include scheduledMappers/Reducers when re-request failed tasks

2016-05-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277289#comment-15277289
 ] 

Jian He commented on YARN-4900:
---

Looks good; waiting on YARN-4779 to commit.

> SLS MRAMSimulator should include scheduledMappers/Reducers when re-request 
> failed tasks
> ---
>
> Key: YARN-4900
> URL: https://issues.apache.org/jira/browse/YARN-4900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4900.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4778) Support specifying resources for task containers in SLS

2016-05-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277268#comment-15277268
 ] 

Hudson commented on YARN-4778:
--

FAILURE: Integrated in Hadoop-trunk-Commit #9737 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9737/])
YARN-4778. Support specifying resources for task containers in SLS. (jianhe: 
rev 996a210ab0131606639ba87fd5daab14bf05b35f)
* 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java


> Support specifying resources for task containers in SLS
> ---
>
> Key: YARN-4778
> URL: https://issues.apache.org/jira/browse/YARN-4778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 2.9.0
>
> Attachments: YARN-4778.1.patch
>
>
> Currently, SLS doesn't support specifying resources for task containers; it uses a global default value for all containers.
> Instead, we should be able to specify different resources for task containers in sls-job.conf.
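For illustration only, a hypothetical per-task resource entry in the SLS job trace (the resource keys shown here are made up for this sketch; see the committed SLSRunner change for the actual key names):

{code}
{
  "am.type" : "mapreduce",
  "job.start.ms" : 0,
  "job.tasks" : [ {
    "container.type" : "map",
    "container.host" : "/default-rack/node1",
    "container.start.ms" : 6664,
    "container.end.ms" : 23707,
    "container.memory.mb" : 2048,
    "container.vcores" : 2
  } ]
}
{code}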



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4899) Queue metrics of SLS capacity scheduler only activated after app submit to the queue

2016-05-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277239#comment-15277239
 ] 

Jian He commented on YARN-4899:
---

lgtm, kick jenkins

> Queue metrics of SLS capacity scheduler only activated after app submit to 
> the queue
> 
>
> Key: YARN-4899
> URL: https://issues.apache.org/jira/browse/YARN-4899
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4899.1.patch
>
>
> We should start recording queue metrics since cluster start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4779) Fix AM container allocation logic in SLS

2016-05-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277209#comment-15277209
 ] 

Jian He commented on YARN-4779:
---

- we can convert the appId to a jobId and do a map lookup?
{code}
for (AMSimulator ams : amMap.values()) {
{code}

- {{added}} variable not updated? (a corrected sketch follows this list)
{code}
int added = 0;
for (ContainerSimulator cs : allMaps) {
  if (added >= mapTotal - mapFinished) {
break;
  }
  pendingMaps.add(cs);
}

// And same, only add totalReduces - finishedReduces
added = 0;
for (ContainerSimulator cs : allReduces) {
  if (added >= reduceTotal - reduceFinished) {
break;
  }
  pendingReduces.add(cs);
}
{code}
- revert MRAMSimulator#sendContainerRequest method changes
- remove AMSimulator#getApplicationAttemptId 
- why call restart immediately after launch?
{code}
super.notifyAMContainerLaunched(masterContainer);
restart();
{code}
- why 1L? The underlying code uses -1 for the AM container.
{code}
se.getNmMap().get(amContainer.getNodeId())
.addNewContainer(amContainer, 1L);
{code}
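For reference, a minimal sketch of what the two loops above presumably intend, with the missing increment added ({{allMaps}}, {{pendingMaps}}, {{mapTotal}}, {{mapFinished}} and the reduce-side names are taken from the snippet above):

{code}
// Only re-queue the still-outstanding maps: mapTotal - mapFinished of them.
int added = 0;
for (ContainerSimulator cs : allMaps) {
  if (added >= mapTotal - mapFinished) {
    break;
  }
  pendingMaps.add(cs);
  added++; // count each re-queued task so the bound above takes effect
}

// And the same for reduces: only re-queue reduceTotal - reduceFinished.
added = 0;
for (ContainerSimulator cs : allReduces) {
  if (added >= reduceTotal - reduceFinished) {
    break;
  }
  pendingReduces.add(cs);
  added++;
}
{code}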

> Fix AM container allocation logic in SLS
> 
>
> Key: YARN-4779
> URL: https://issues.apache.org/jira/browse/YARN-4779
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4779.1.patch, YARN-4779.2.patch, YARN-4779.3.patch
>
>
> Currently, SLS uses an unmanaged AM for simulated map-reduce applications, and the first allocated container for each app is considered to be the master container.
> This could be problematic when preemption happens: CapacityScheduler preempts AM containers at the lowest priority, but the simulated AM container isn't recognized by the scheduler -- it is a normal container from the scheduler's perspective.
> This JIRA tries to fix this logic: do a real AM allocation instead of using an unmanaged AM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5053) More informative diagnostics when applications killed by a user

2016-05-09 Thread Eric Badger (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-5053:
--
Attachment: YARN-5053.001.patch

Attaching patch. 

> More informative diagnostics when applications killed by a user
> ---
>
> Key: YARN-5053
> URL: https://issues.apache.org/jira/browse/YARN-5053
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Jason Lowe
>Assignee: Eric Badger
> Attachments: YARN-5053.001.patch
>
>
> When an application kill request is processed by the ClientRMService, it sets the diagnostics to "Application killed by user". It would be nice to report
> the user and host that issued the kill request in the app diagnostics so it 
> is clear where the kill originated.
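As a rough illustration, the diagnostic string could be assembled in {{ClientRMService}} along these lines (a sketch only, not the attached patch; it assumes the caller's UGI and the IPC remote address are in scope at that point):

{code}
// Sketch: record who issued the kill and from where.
String diagnostics = "Application " + applicationId + " killed by user "
    + callerUGI.getShortUserName() + " at " + Server.getRemoteAddress();
{code}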



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5066) Support specifying resources for AM containers in SLS

2016-05-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan resolved YARN-5066.
--
Resolution: Duplicate

Created same JIRA twice by mistake.

> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5066
> URL: https://issues.apache.org/jira/browse/YARN-5066
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> Now the resource of application masters in SLS is hardcoded to mem=1024, vcores=1.
> We should be able to specify AM resources from the trace input file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5067) Support specifying resources for AM containers in SLS

2016-05-09 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5067:


 Summary: Support specifying resources for AM containers in SLS
 Key: YARN-5067
 URL: https://issues.apache.org/jira/browse/YARN-5067
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan


Now the resource of application masters in SLS is hardcoded to mem=1024, vcores=1.

We should be able to specify AM resources from the trace input file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5066) Support specifying resources for AM containers in SLS

2016-05-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5066:
-
Issue Type: Bug  (was: Sub-task)
Parent: (was: YARN-5065)

> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5066
> URL: https://issues.apache.org/jira/browse/YARN-5066
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> Now the resource of application masters in SLS is hardcoded to mem=1024, vcores=1.
> We should be able to specify AM resources from the trace input file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5066) Support specifying resources for AM containers in SLS

2016-05-09 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5066:


 Summary: Support specifying resources for AM containers in SLS
 Key: YARN-5066
 URL: https://issues.apache.org/jira/browse/YARN-5066
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan


Now the resource of application masters in SLS is hardcoded to mem=1024, vcores=1.

We should be able to specify AM resources from the trace input file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5065) Umbrella JIRA of SLS fixes / improvements

2016-05-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5065:
-
Description: Umbrella JIRA to track SLS (scheduler load simulator) fixes 
and improvements.

> Umbrella JIRA of SLS fixes / improvements
> -
>
> Key: YARN-5065
> URL: https://issues.apache.org/jira/browse/YARN-5065
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>
> Umbrella JIRA to track SLS (scheduler load simulator) fixes and improvements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4899) Queue metrics of SLS capacity scheduler only activated after app submit to the queue

2016-05-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4899:
-
Attachment: YARN-4899.1.patch

> Queue metrics of SLS capacity scheduler only activated after app submit to 
> the queue
> 
>
> Key: YARN-4899
> URL: https://issues.apache.org/jira/browse/YARN-4899
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4899.1.patch
>
>
> We should start recording queue metrics since cluster start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4778) Support specifying resources for task containers in SLS

2016-05-09 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277102#comment-15277102
 ] 

Jian He commented on YARN-4778:
---

lgtm, committing

> Support specifying resources for task containers in SLS
> ---
>
> Key: YARN-4778
> URL: https://issues.apache.org/jira/browse/YARN-4778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4778.1.patch
>
>
> Currently, SLS doesn't support specifying resources for task containers; it uses a global default value for all containers.
> Instead, we should be able to specify different resources for task containers in sls-job.conf.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4900) SLS MRAMSimulator should include scheduledMappers/Reducers when re-request failed tasks

2016-05-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4900:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-5065

> SLS MRAMSimulator should include scheduledMappers/Reducers when re-request 
> failed tasks
> ---
>
> Key: YARN-4900
> URL: https://issues.apache.org/jira/browse/YARN-4900
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4900.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4778) Support specifying resources for task containers in SLS

2016-05-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4778:
-
Issue Type: Sub-task  (was: Improvement)
Parent: YARN-5065

> Support specifying resources for task containers in SLS
> ---
>
> Key: YARN-4778
> URL: https://issues.apache.org/jira/browse/YARN-4778
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4778.1.patch
>
>
> Currently, SLS doesn't support specifying resources for task containers; it uses a global default value for all containers.
> Instead, we should be able to specify different resources for task containers in sls-job.conf.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4779) Fix AM container allocation logic in SLS

2016-05-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4779:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-5065

> Fix AM container allocation logic in SLS
> 
>
> Key: YARN-4779
> URL: https://issues.apache.org/jira/browse/YARN-4779
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4779.1.patch, YARN-4779.2.patch, YARN-4779.3.patch
>
>
> Currently, SLS uses an unmanaged AM for simulated map-reduce applications, and the first allocated container for each app is considered to be the master container.
> This could be problematic when preemption happens: CapacityScheduler preempts AM containers at the lowest priority, but the simulated AM container isn't recognized by the scheduler -- it is a normal container from the scheduler's perspective.
> This JIRA tries to fix this logic: do a real AM allocation instead of using an unmanaged AM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4899) Queue metrics of SLS capacity scheduler only activated after app submit to the queue

2016-05-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4899:
-
Issue Type: Sub-task  (was: Bug)
Parent: YARN-5065

> Queue metrics of SLS capacity scheduler only activated after app submit to 
> the queue
> 
>
> Key: YARN-4899
> URL: https://issues.apache.org/jira/browse/YARN-4899
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>
> We should start recording queue metrics since cluster start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5065) Umbrella JIRA of SLS fixes / improvements

2016-05-09 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5065:


 Summary: Umbrella JIRA of SLS fixes / improvements
 Key: YARN-5065
 URL: https://issues.apache.org/jira/browse/YARN-5065
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Wangda Tan






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-05-09 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated YARN-4994:
---
Attachment: YARN-4994.06.patch

With [YARN-4994.06.patch] I am fixing the whitespace issues that were reported by Hadoop QA (BTW, I do not understand why they were not reported at the previous build).

> Use MiniYARNCluster with try-with-resources in tests
> 
>
> Key: YARN-4994
> URL: https://issues.apache.org/jira/browse/YARN-4994
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch, 
> HDFS-10287.03.patch, YARN-4994.04.patch, YARN-4994.05.patch, 
> YARN-4994.06.patch
>
>
> In tests, MiniYARNCluster is used with the following pattern: create a MiniYARNCluster instance in a try block and close it in the finally block.
> [Try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html]
>  is preferred since Java7 instead of the pattern above.
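A minimal sketch of the change (assuming {{MiniYARNCluster}} is {{AutoCloseable}}, which it is because Hadoop's {{Service}} interface extends {{Closeable}}):

{code}
// Before: explicit try/finally
MiniYARNCluster cluster = null;
try {
  cluster = new MiniYARNCluster("test", 1, 1, 1);
  cluster.init(new YarnConfiguration());
  cluster.start();
  // ... assertions against the cluster ...
} finally {
  if (cluster != null) {
    cluster.stop();
  }
}

// After: try-with-resources closes the cluster automatically
try (MiniYARNCluster cluster = new MiniYARNCluster("test", 1, 1, 1)) {
  cluster.init(new YarnConfiguration());
  cluster.start();
  // ... assertions against the cluster ...
}
{code}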



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4842) "yarn logs" command should not require the appOwner argument

2016-05-09 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-4842:
--
Attachment: YARN-4842.6.patch

Same patch, but with the error message improved.

Will commit it if Jenkins says okay.

> "yarn logs" command should not require the appOwner argument
> 
>
> Key: YARN-4842
> URL: https://issues.apache.org/jira/browse/YARN-4842
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ram Venkatesh
>Assignee: Xuan Gong
> Attachments: YARN-4842.1.patch, YARN-4842.2.patch, YARN-4842.3.patch, 
> YARN-4842.4.patch, YARN-4842.5.patch, YARN-4842.6.patch
>
>
> The yarn logs command is among the most common ways to troubleshoot yarn app 
> failures, especially by an admin.
> Currently if you run the command as a user different from the job owner, the 
> command will fail with a subtle message that it could not find the app under 
> the running user's name. This can be confusing especially to new admins.
> We can figure out the job owner from the app report returned by the RM or the 
> AHS, or, by looking for the app directory using a glob pattern, so in most 
> cases this error can be avoided.
> Question - are there scenarios where users will still need to specify the 
> -appOwner option?
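To make the glob idea concrete, a hedged sketch of guessing the owner from the aggregated-log directory layout (this assumes the default layout {{<remote-app-log-dir>/<user>/<suffix>/<appId>}}; error handling omitted):

{code}
// Look for any user's aggregated-log directory that contains this application.
Path pattern = new Path(remoteRootLogDir, "*/" + suffix + "/" + appId);
FileStatus[] matches = fs.globStatus(pattern);
if (matches != null && matches.length > 0) {
  // The path is <root>/<user>/<suffix>/<appId>, so the owner is two levels up.
  String guessedOwner = matches[0].getPath().getParent().getParent().getName();
}
{code}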



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4842) "yarn logs" command should not require the appOwner argument

2016-05-09 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15277065#comment-15277065
 ] 

Vinod Kumar Vavilapalli commented on YARN-4842:
---

The patch looks good to me. One minor change I'd like included:
- {{"The user: " + priorityUser + " does not have permission to access"}} -> {{"Guessed logs' owner is " + priorityUser + " and current user " + UserGroupInformation.getCurrentUser().getUserName() + " does not have permission to access"}}

Will do this change myself and commit it if Jenkins says okay.

> "yarn logs" command should not require the appOwner argument
> 
>
> Key: YARN-4842
> URL: https://issues.apache.org/jira/browse/YARN-4842
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Ram Venkatesh
>Assignee: Xuan Gong
> Attachments: YARN-4842.1.patch, YARN-4842.2.patch, YARN-4842.3.patch, 
> YARN-4842.4.patch, YARN-4842.5.patch
>
>
> The yarn logs command is among the most common ways to troubleshoot yarn app 
> failures, especially by an admin.
> Currently if you run the command as a user different from the job owner, the 
> command will fail with a subtle message that it could not find the app under 
> the running user's name. This can be confusing especially to new admins.
> We can figure out the job owner from the app report returned by the RM or the 
> AHS, or, by looking for the app directory using a glob pattern, so in most 
> cases this error can be avoided.
> Question - are there scenarios where users will still need to specify the 
> -appOwner option?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-3699) Decide if flow version should be part of row key or column

2016-05-09 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C resolved YARN-3699.
--
  Resolution: Information Provided
Release Note: 

I think we have enough information in the jira comments above, and we have made significant progress on the data model, to conclude this jira (the 1st milestone is complete).

To summarize: we do not need the flow version as part of the row key.


> Decide if flow version should be part of row key or column
> ---
>
> Key: YARN-3699
> URL: https://issues.apache.org/jira/browse/YARN-3699
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>
> Based on discussions in YARN-3411 with [~djp], filing jira for continuing 
> discussion on putting the flow version in rowkey or column. 
> Whichever approach is taken (phoenix or hbase), the jira will be updated with the conclusions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5064) move the shell code out of hadoop-yarn

2016-05-09 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5064:
---
Summary: move the shell code out of hadoop-yarn  (was: Move the yarn shell 
scripts to their own module)

> move the shell code out of hadoop-yarn
> --
>
> Key: YARN-5064
> URL: https://issues.apache.org/jira/browse/YARN-5064
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: scripts, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> We need to move the shell code out of hadoop-yarn so that we can properly 
> build test infrastructure for it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5064) Move the yarn shell scripts to their own module

2016-05-09 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5064:
---
Summary: Move the yarn shell scripts to their own module  (was: Move the 
shell scripts to their own module)

> Move the yarn shell scripts to their own module
> ---
>
> Key: YARN-5064
> URL: https://issues.apache.org/jira/browse/YARN-5064
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: scripts, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> We need to move the shell code out of hadoop-yarn so that we can properly 
> build test infrastructure for it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5064) Move the shell scripts to their own module

2016-05-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276978#comment-15276978
 ] 

Allen Wittenauer commented on YARN-5064:


bq. To some central place? 

The yarn shell code is *already* centralized.  The problem is that the 
hadoop-yarn maven module isn't really built for running mvn test and getting a 
target directory with content anyone cares about.  

I want to either:

a) create hadoop-yarn-scripts or hadoop-yarn-misc or whatever
b) move this content to hadoop-yarn-common

Given that hadoop-yarn-site exists, it makes more sense to go with option (a) to match the rest of yarn's module layout.

> Move the shell scripts to their own module
> --
>
> Key: YARN-5064
> URL: https://issues.apache.org/jira/browse/YARN-5064
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: scripts, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> We need to move the shell code out of hadoop-yarn so that we can properly 
> build test infrastructure for it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5064) Move the shell scripts to their own module

2016-05-09 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276967#comment-15276967
 ] 

Vinod Kumar Vavilapalli commented on YARN-5064:
---

To where? To some central place? Why can't we build a common shell-code testing 
infrastructure and use it to test the shell code in each of the individual 
modules instead of moving everything to a central place? It's better to keep 
the scripts in the same modules, closer to the rest of the source.

> Move the shell scripts to their own module
> --
>
> Key: YARN-5064
> URL: https://issues.apache.org/jira/browse/YARN-5064
> Project: Hadoop YARN
>  Issue Type: Test
>  Components: scripts, test
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> We need to move the shell code out of hadoop-yarn so that we can properly 
> build test infrastructure for it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5064) Move the shell scripts to their own module

2016-05-09 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created YARN-5064:
--

 Summary: Move the shell scripts to their own module
 Key: YARN-5064
 URL: https://issues.apache.org/jira/browse/YARN-5064
 Project: Hadoop YARN
  Issue Type: Test
  Components: scripts, test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


We need to move the shell code out of hadoop-yarn so that we can properly build 
test infrastructure for it. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276868#comment-15276868
 ] 

Hadoop QA commented on YARN-4994:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 43s {color} 
| {color:red} root-jdk1.8.0_91 with JDK v1.8.0_91 generated 3 new + 660 
unchanged - 3 fixed = 663 total (was 663) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 39s 
{color} | {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 3 new + 669 
unchanged - 3 fixed = 672 total (was 672) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 56s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 29s 
{color} | {color:red} root: patch generated 5 new + 208 unchanged - 4 fixed = 
213 total (was 212) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 31s {color} 
| {color:red} hadoop-yarn-server-tests in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 18s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 40s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed with 
JDK 

[jira] [Commented] (YARN-4768) getAvailablePhysicalMemorySize can be inaccurate on linux

2016-05-09 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276849#comment-15276849
 ] 

Eric Payne commented on YARN-4768:
--

bq. "Inactive(file)" would seem more accurate but it's not available in all 
kernel versions. To keep things simple, maybe just use "Inactive(file)" if 
available, otherwise fallback to "Inactive".

Sounds reasonable. I'll take a look at the patch.

> getAvailablePhysicalMemorySize can be inaccurate on linux
> -
>
> Key: YARN-4768
> URL: https://issues.apache.org/jira/browse/YARN-4768
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0, 2.7.2
> Environment: Linux
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Attachments: YARN-4768.patch
>
>
> Algorithm currently uses "MemFree" + "Inactive" from /proc/meminfo
> "Inactive" may not be a very good indication of how much memory can be 
> readily freed because it contains both:
> - Pages mapped with MAP_SHARED|MAP_ANONYMOUS (regardless of whether they're 
> being actively accessed or not. Unclear to me why this is the case...)
> - Pages mapped MAP_PRIVATE|MAP_ANONYMOUS that have not been accessed recently
> Both of these types of pages probably shouldn't be considered "Available".
> "Inactive(file)" would seem more accurate but it's not available in all 
> kernel versions. To keep things simple, maybe just use "Inactive(file)" if 
> available, otherwise fallback to "Inactive".
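A small sketch of the proposed fallback while parsing {{/proc/meminfo}} (illustrative only; {{meminfo}} here is a hypothetical already-parsed {{Map<String, Long>}} of field name to kB value):

{code}
// Prefer Inactive(file) when the kernel exposes it; otherwise fall back to Inactive.
long memFree = meminfo.get("MemFree");
Long inactiveFile = meminfo.get("Inactive(file)");
long reclaimable = (inactiveFile != null)
    ? inactiveFile : meminfo.get("Inactive");
long availableKb = memFree + reclaimable;
{code}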



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5038) [YARN-3368] Application and Container pages shows wrong values when RM is stopped

2016-05-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276789#comment-15276789
 ] 

Sunil G commented on YARN-5038:
---

Sure, it will be better for review and testing. I will close this ticket as a duplicate.

> [YARN-3368] Application and Container pages shows wrong values when RM is 
> stopped
> -
>
> Key: YARN-5038
> URL: https://issues.apache.org/jira/browse/YARN-5038
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
>
> A few minor issues to fix:
> - In the Applications page, "Running Container" is shown as -1 when the app is finished.
> - In the container page, "Finished Time" shows 1970 as the date by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5038) [YARN-3368] Application and Container pages shows wrong values when RM is stopped

2016-05-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276784#comment-15276784
 ] 

Wangda Tan commented on YARN-5038:
--

[~sunilg], could we merge this fix to YARN-5000?

> [YARN-3368] Application and Container pages shows wrong values when RM is 
> stopped
> -
>
> Key: YARN-5038
> URL: https://issues.apache.org/jira/browse/YARN-5038
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
>
> A few minor issues to fix:
> - In the Applications page, "Running Container" is shown as -1 when the app is finished.
> - In the container page, "Finished Time" shows 1970 as the date by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5019) [YARN-3368] Change urls in new YARN ui from camel casing to hyphens

2016-05-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276773#comment-15276773
 ] 

Wangda Tan commented on YARN-5019:
--

Thanks [~sunilg], tried this on my local cluster, works correctly for me.

+1 to the patch, will commit it soon.

> [YARN-3368] Change urls in new YARN ui from camel casing to hyphens
> ---
>
> Key: YARN-5019
> URL: https://issues.apache.org/jira/browse/YARN-5019
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Varun Vasudev
>Assignee: Sunil G
> Attachments: YARN-5019-YARN-3368.1.patch
>
>
> There are a couple of reasons we should recommend avoiding camel casing in 
> urls -
> 1. Some web servers are case insensitive
> 2. Google suggests using hyphens - 
> https://support.google.com/webmasters/answer/76329



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5000) [YARN-3368] App attempt page is not loading when timeline server is not started

2016-05-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276763#comment-15276763
 ] 

Sunil G commented on YARN-5000:
---

Thank you [~leftnoteasy] for the comments. I'll update a patch here with these comments fixed. Now I am making some changes in pom.xml to copy the UI output folder under yarn/webapps. Will share a patch soon.

> [YARN-3368] App attempt page is not loading when timeline server is not 
> started
> ---
>
> Key: YARN-5000
> URL: https://issues.apache.org/jira/browse/YARN-5000
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-5000.patch, 
> AppFinishedAndNoTimelineServer.png, AppRunningAndNoTimelineServer.png, 
> AppRunningAndNoTimelineServer_v2.png, YARN-5000-YARN-3368.1.patch, 
> YARN-5000-YARN-3368.2.patch, YARN-5000-YARN-3368.3.patch, 
> YARN-5000-YARN-3368.4.patch, YARN-5000-YARN-3368.5.patch, screenshot-1.png
>
>
> If the timeline server is not started, the app attempt page does not load.
> In the new web-ui, the yarnContainer route is tightly coupled with both the RM and the Timeline server, and if one of the servers is not up, the page will not load. If the timeline server is not up, container information from the RM should be displayed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5000) [YARN-3368] App attempt page is not loading when timeline server is not started

2016-05-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276756#comment-15276756
 ] 

Wangda Tan commented on YARN-5000:
--

A couple of comments about implementation: 

1) Instead of the following logic:
{code}
75  if (url == undefined) {
76url = "Not Available";
77  }
78  return url != "Not Available";
{code}

Can we just return {{url != undefined}}?

2) yarn-app.js is still using a dummy value; could we change it to an empty array?

3) A couple of tabs were found in your patch; could you double-check?

Will try this patch again after YARN-4515 gets in.

> [YARN-3368] App attempt page is not loading when timeline server is not 
> started
> ---
>
> Key: YARN-5000
> URL: https://issues.apache.org/jira/browse/YARN-5000
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-5000.patch, 
> AppFinishedAndNoTimelineServer.png, AppRunningAndNoTimelineServer.png, 
> AppRunningAndNoTimelineServer_v2.png, YARN-5000-YARN-3368.1.patch, 
> YARN-5000-YARN-3368.2.patch, YARN-5000-YARN-3368.3.patch, 
> YARN-5000-YARN-3368.4.patch, YARN-5000-YARN-3368.5.patch, screenshot-1.png
>
>
> If the timeline server is not started, the app attempt page does not load.
> In the new web-ui, the yarnContainer route is tightly coupled with both the RM and the Timeline server, and if one of the servers is not up, the page will not load. If the timeline server is not up, container information from the RM should be displayed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3362) Add node label usage in RM CapacityScheduler web UI

2016-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276736#comment-15276736
 ] 

Hadoop QA commented on YARN-3362:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
54s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
27s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} branch-2.7 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 2s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in branch-2.7 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 23s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 patch generated 50 new + 687 unchanged - 43 fixed = 737 total (was 730) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3610 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 1m 22s 
{color} | {color:red} The patch has 497 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 42s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_91. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 14s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 131m 19s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_91 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | 

[jira] [Commented] (YARN-5000) [YARN-3368] App attempt page is not loading when timeline server is not started

2016-05-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276734#comment-15276734
 ] 

Sunil G commented on YARN-5000:
---

Sorry, yes. I meant YARN-4515.

> [YARN-3368] App attempt page is not loading when timeline server is not 
> started
> ---
>
> Key: YARN-5000
> URL: https://issues.apache.org/jira/browse/YARN-5000
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-5000.patch, 
> AppFinishedAndNoTimelineServer.png, AppRunningAndNoTimelineServer.png, 
> AppRunningAndNoTimelineServer_v2.png, YARN-5000-YARN-3368.1.patch, 
> YARN-5000-YARN-3368.2.patch, YARN-5000-YARN-3368.3.patch, 
> YARN-5000-YARN-3368.4.patch, YARN-5000-YARN-3368.5.patch, screenshot-1.png
>
>
> If the timeline server is not started, the app attempt page does not load.
> In the new web-ui, the yarnContainer route is tightly coupled with both the RM and the Timeline server, and if one of the servers is not up, the page will not load. If the timeline server is not up, container information from the RM should be displayed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5000) [YARN-3368] App attempt page is not loading when timeline server is not started

2016-05-09 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276732#comment-15276732
 ] 

Wangda Tan commented on YARN-5000:
--

Thanks,

bq.  With YARN-4514's locationType change (to hash), this issue will not happen.
Is this a typo? Did you mean YARN-4515?  

> [YARN-3368] App attempt page is not loading when timeline server is not 
> started
> ---
>
> Key: YARN-5000
> URL: https://issues.apache.org/jira/browse/YARN-5000
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: 0001-YARN-5000.patch, 
> AppFinishedAndNoTimelineServer.png, AppRunningAndNoTimelineServer.png, 
> AppRunningAndNoTimelineServer_v2.png, YARN-5000-YARN-3368.1.patch, 
> YARN-5000-YARN-3368.2.patch, YARN-5000-YARN-3368.3.patch, 
> YARN-5000-YARN-3368.4.patch, YARN-5000-YARN-3368.5.patch, screenshot-1.png
>
>
> If the timeline server is not started, the app attempt page does not load.
> In the new web-ui, the yarnContainer route is tightly coupled with both the RM and the Timeline server, and if one of the servers is not up, the page will not load. If the timeline server is not up, container information from the RM should be displayed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4844) Add getMemoryLong/getVirtualCoreLong to o.a.h.y.api.records.Resource

2016-05-09 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4844:
-
Attachment: YARN-4844.7.patch

> Add getMemoryLong/getVirtualCoreLong to o.a.h.y.api.records.Resource
> 
>
> Key: YARN-4844
> URL: https://issues.apache.org/jira/browse/YARN-4844
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-4844.1.patch, YARN-4844.2.patch, YARN-4844.3.patch, 
> YARN-4844.4.patch, YARN-4844.5.patch, YARN-4844.6.patch, YARN-4844.7.patch
>
>
> We use int32 for memory now; if a cluster has 10k nodes, each with 210G of memory, we will get a negative total cluster memory.
> Another case that overflows int32 even more easily: we add all pending resources of running apps to the cluster's total pending resources. If a problematic app requires too many resources (let's say 1M+ containers of 3G each), int32 will not be enough.
> Even if we can cap each app's pending request, we cannot handle the case where there are many running apps, each with capped but still significant pending resources.
> So we may possibly need to add getMemoryLong/getVirtualCoreLong to 
> o.a.h.y.api.records.Resource.
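The first overflow is easy to verify: 10,000 nodes x 210 GB is 2,150,400,000 MB, just past {{Integer.MAX_VALUE}} (2,147,483,647):

{code}
int totalMemoryMb = 10_000 * 210 * 1024;       // overflows, wraps to -2144567296
long totalMemoryMbLong = 10_000L * 210 * 1024; // 2150400000, as expected
{code}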



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5045) hbase unit tests fail due to dependency issues

2016-05-09 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276610#comment-15276610
 ] 

Sangjin Lee commented on YARN-5045:
---

Yes, I believe CHANGES.txt has been removed altogether, so we cannot update it 
even if we wanted to.

I'm a little unsure what it means for branch commits. I'm not sure if there was a
discussion on the implication for the branch commits. When we do the actual 
merge to trunk, I don't think that the commit message of that commit will carry 
any info on the branch commits. Are you aware of any discussion? Is the idea 
that the branch will be preserved so that people can see the commit activities 
there? [~vinodkv]?

> hbase unit tests fail due to dependency issues
> --
>
> Key: YARN-5045
> URL: https://issues.apache.org/jira/browse/YARN-5045
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Blocker
> Fix For: YARN-2928
>
> Attachments: YARN-5045-YARN-2928.01.patch, 
> YARN-5045-YARN-2928.02.patch, YARN-5045-YARN-2928.03.patch, 
> YARN-5045-YARN-2928.poc.patch
>
>
> After the 5/4 rebase, the hbase unit tests in the timeline service project 
> are failing:
> {noformat}
> org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
>   Time elapsed: 5.103 sec  <<< ERROR!
> java.io.IOException: Shutting down
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>   at 
> org.apache.hadoop.hbase.http.HttpServer.addDefaultServlets(HttpServer.java:677)
>   at 
> org.apache.hadoop.hbase.http.HttpServer.initializeWebServer(HttpServer.java:546)
>   at org.apache.hadoop.hbase.http.HttpServer.(HttpServer.java:500)
>   at org.apache.hadoop.hbase.http.HttpServer.(HttpServer.java:104)
>   at 
> org.apache.hadoop.hbase.http.HttpServer$Builder.build(HttpServer.java:345)
>   at org.apache.hadoop.hbase.http.InfoServer.(InfoServer.java:77)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:1697)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.(HRegionServer.java:550)
>   at org.apache.hadoop.hbase.master.HMaster.(HMaster.java:333)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
>   at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.createMasterThread(JVMClusterUtil.java:139)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.addMaster(LocalHBaseCluster.java:217)
>   at 
> org.apache.hadoop.hbase.LocalHBaseCluster.(LocalHBaseCluster.java:153)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:213)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster.(MiniHBaseCluster.java:93)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:978)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:938)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:812)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:806)
>   at 
> org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:750)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage.setup(TestTimelineReaderWebServicesHBaseStorage.java:87)
> {noformat}
> The root cause is that the hbase mini server depends on hadoop common's 
> {{MetricsServlet}} which has been removed in the trunk (HADOOP-12504):
> {noformat}
> Caused by: java.lang.NoClassDefFoundError: 
> org/apache/hadoop/metrics/MetricsServlet
> at 
> org.apache.hadoop.hbase.http.HttpServer.addDefaultServlets(HttpServer.java:677)
> at 
> org.apache.hadoop.hbase.http.HttpServer.initializeWebServer(HttpServer.java:546)
> at org.apache.hadoop.hbase.http.HttpServer.<init>(HttpServer.java:500)
> at 

[jira] [Commented] (YARN-4766) NM should not aggregate logs older than the retention policy

2016-05-09 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276598#comment-15276598
 ] 

Haibo Chen commented on YARN-4766:
--

The findbugs issue is unrelated to my patch.

> NM should not aggregate logs older than the retention policy
> 
>
> Key: YARN-4766
> URL: https://issues.apache.org/jira/browse/YARN-4766
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation, nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: yarn4766.001.patch, yarn4766.002.patch, 
> yarn4766.003.patch, yarn4766.004.patch, yarn4766.004.patch
>
>
> When log aggregation fails on the NM, the information for the attempt is 
> kept in the recovery DB. Log aggregation can fail for multiple reasons, which 
> are often related to HDFS space or permissions.
> On restart the recovery DB is read, and if an application attempt needs its 
> logs aggregated, the files are scheduled for aggregation without any checks. 
> The log files could be older than the retention limit, in which case we should 
> not aggregate them but immediately mark them for deletion from the local file 
> system.
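
For illustration, the proposed check could look roughly like the sketch below 
(hypothetical helper names, not the actual patch; the real limit would be read 
from yarn.log-aggregation.retain-seconds):

{noformat}
import java.io.File;
import java.util.concurrent.TimeUnit;

public class RetentionCheckSketch {

  // Hypothetical stand-in for the configured retention limit; in YARN it
  // would come from yarn.log-aggregation.retain-seconds.
  static final long RETAIN_MILLIS = TimeUnit.DAYS.toMillis(7);

  // A recovered log file is worth aggregating only if it is still within
  // the retention window.
  static boolean shouldAggregate(File logFile) {
    long ageMillis = System.currentTimeMillis() - logFile.lastModified();
    return ageMillis <= RETAIN_MILLIS;
  }

  // On NM restart, split the recovered attempt's files into "aggregate"
  // and "delete from the local file system".
  static void handleRecoveredAttempt(File[] logFiles) {
    for (File logFile : logFiles) {
      if (shouldAggregate(logFile)) {
        System.out.println("schedule for aggregation: " + logFile);
      } else {
        System.out.println("mark for local deletion: " + logFile);
      }
    }
  }

  public static void main(String[] args) {
    File[] files = new File("/tmp/nm-logs").listFiles();
    if (files != null) {
      handleRecoveredAttempt(files);
    }
  }
}
{noformat}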



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4913) Yarn logs should take a -out option to write to a directory

2016-05-09 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-4913:

Attachment: YARN-4913.4.patch

> Yarn logs should take a -out option to write to a directory
> ---
>
> Key: YARN-4913
> URL: https://issues.apache.org/jira/browse/YARN-4913
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-4913.1.patch, YARN-4913.2.patch, YARN-4913.3.patch, 
> YARN-4913.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-05-09 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated YARN-4994:
---
Attachment: YARN-4994.05.patch

[^YARN-4994.05.patch] eliminates the checkstyle warning related to the patch.

> Use MiniYARNCluster with try-with-resources in tests
> 
>
> Key: YARN-4994
> URL: https://issues.apache.org/jira/browse/YARN-4994
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch, 
> HDFS-10287.03.patch, YARN-4994.04.patch, YARN-4994.05.patch
>
>
> In tests, MiniYARNCluster is used with the following pattern:
> create a MiniYARNCluster instance in a try block and close it in a finally 
> block.
> [Try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html]
>  has been preferred over the pattern above since Java 7.
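
For reference, a minimal sketch of the target pattern (assuming the standard 
four-argument MiniYARNCluster constructor, and relying on Service extending 
Closeable so that close() stops the cluster):

{noformat}
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;

public class MiniYarnClusterTryWithResources {
  public static void main(String[] args) throws Exception {
    // try-with-resources replaces the try/finally pattern described above
    try (MiniYARNCluster cluster =
        new MiniYARNCluster("example-cluster", 1, 1, 1)) {
      cluster.init(new YarnConfiguration());
      cluster.start();
      // ... exercise the cluster in the test body ...
    } // cluster.close() runs here and stops all services
  }
}
{noformat}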



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3362) Add node label usage in RM CapacityScheduler web UI

2016-05-09 Thread Naganarasimha Garla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276449#comment-15276449
 ] 

Naganarasimha Garla commented on YARN-3362:
---

Sure, I will review it, and as it is also close I can commit it too...



> Add node label usage in RM CapacityScheduler web UI
> ---
>
> Key: YARN-3362
> URL: https://issues.apache.org/jira/browse/YARN-3362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, resourcemanager, webapp
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
> Fix For: 2.8.0
>
> Attachments: 2015.05.06 Folded Queues.png, 2015.05.06 Queue 
> Expanded.png, 2015.05.07_3362_Queue_Hierarchy.png, 
> 2015.05.10_3362_Queue_Hierarchy.png, 2015.05.12_3362_Queue_Hierarchy.png, 
> AppInLabelXnoStatsInSchedPage.png, CSWithLabelsView.png, 
> No-space-between-Active_user_info-and-next-queues.png, Screen Shot 2015-04-29 
> at 11.42.17 AM.png, YARN-3362-branch-2.7.002.patch, 
> YARN-3362-branch-2.7.003.patch, YARN-3362-branch-2.7.004.patch, 
> YARN-3362.20150428-3-modified.patch, YARN-3362.20150428-3.patch, 
> YARN-3362.20150506-1.patch, YARN-3362.20150507-1.patch, 
> YARN-3362.20150510-1.patch, YARN-3362.20150511-1.patch, 
> YARN-3362.20150512-1.patch, capacity-scheduler.xml
>
>
> We don't have node label usage in the RM CapacityScheduler web UI now; without 
> this, it is hard for users to understand what happened to nodes that have 
> labels assigned to them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3362) Add node label usage in RM CapacityScheduler web UI

2016-05-09 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-3362:
-
Attachment: YARN-3362-branch-2.7.004.patch

[~Naganarasimha], attaching YARN-3362-branch-2.7.004.patch with another 
checkstyle change correcting the order of {{final}} and {{protected}}.

Once the pre-commit build comes back, can you please review?
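
For anyone skimming: checkstyle's ModifierOrder rule follows the 
JLS-recommended modifier order, so the access modifier has to come before 
{{final}}, e.g.:

{noformat}
class ModifierOrderExample {
  // final protected int limit = 10;  // flagged: 'protected' must come first
  protected final int limit = 10;     // accepted
}
{noformat}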

> Add node label usage in RM CapacityScheduler web UI
> ---
>
> Key: YARN-3362
> URL: https://issues.apache.org/jira/browse/YARN-3362
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler, resourcemanager, webapp
>Reporter: Wangda Tan
>Assignee: Naganarasimha G R
> Fix For: 2.8.0
>
> Attachments: 2015.05.06 Folded Queues.png, 2015.05.06 Queue 
> Expanded.png, 2015.05.07_3362_Queue_Hierarchy.png, 
> 2015.05.10_3362_Queue_Hierarchy.png, 2015.05.12_3362_Queue_Hierarchy.png, 
> AppInLabelXnoStatsInSchedPage.png, CSWithLabelsView.png, 
> No-space-between-Active_user_info-and-next-queues.png, Screen Shot 2015-04-29 
> at 11.42.17 AM.png, YARN-3362-branch-2.7.002.patch, 
> YARN-3362-branch-2.7.003.patch, YARN-3362-branch-2.7.004.patch, 
> YARN-3362.20150428-3-modified.patch, YARN-3362.20150428-3.patch, 
> YARN-3362.20150506-1.patch, YARN-3362.20150507-1.patch, 
> YARN-3362.20150510-1.patch, YARN-3362.20150511-1.patch, 
> YARN-3362.20150512-1.patch, capacity-scheduler.xml
>
>
> We don't have node label usage in the RM CapacityScheduler web UI now; without 
> this, it is hard for users to understand what happened to nodes that have 
> labels assigned to them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276410#comment-15276410
 ] 

Hadoop QA commented on YARN-4994:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 40s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 43s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 24s {color} 
| {color:red} root-jdk1.8.0_91 with JDK v1.8.0_91 generated 3 new + 660 
unchanged - 3 fixed = 663 total (was 663) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 11s 
{color} | {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 3 new + 669 
unchanged - 3 fixed = 672 total (was 672) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 25s 
{color} | {color:red} root: patch generated 12 new + 207 unchanged - 4 fixed = 
219 total (was 211) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 26s {color} 
| {color:red} hadoop-yarn-server-tests in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 9s {color} 
| {color:red} hadoop-yarn-client in the patch failed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 46s 
{color} | {color:green} hadoop-mapreduce-client-app in the patch passed with 
JDK v1.8.0_91. {color} |
| 

[jira] [Commented] (YARN-4747) AHS error 500 due to NPE when container start event is missing

2016-05-09 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276393#comment-15276393
 ] 

Varun Saxena commented on YARN-4747:


Thanks [~jlowe] for the review and commit.

> AHS error 500 due to NPE when container start event is missing
> --
>
> Key: YARN-4747
> URL: https://issues.apache.org/jira/browse/YARN-4747
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Varun Saxena
> Fix For: 2.8.0, 2.7.3
>
> Attachments: YARN-4747.01.patch
>
>
> Saw an error 500 due to a NullPointerException caused by a missing host for 
> an AM container.  Stacktrace to follow.
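
A generic sketch (hypothetical names, not the actual AHS code) of the kind of 
guard such a fix needs when a container's start event, and hence its host, is 
missing:

{noformat}
public class MissingHostGuardSketch {

  // Minimal stand-in for whatever view object the web layer renders.
  interface AmContainerView {
    String getHost();
  }

  // Return a placeholder instead of letting an NPE bubble up as error 500.
  static String renderHost(AmContainerView view) {
    if (view == null || view.getHost() == null) {
      return "N/A";
    }
    return view.getHost();
  }
}
{noformat}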



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5062) the command "yarn rmadmin -replaceLabelsOnNode " doesn't check whether the host is valid

2016-05-09 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15276386#comment-15276386
 ] 

Sunil G commented on YARN-5062:
---

Hi [~YongYongLiam],
It looks like YARN-4855 already tries to address a similar issue. There was also 
some discussion regarding the same topic. Marking this as a duplicate; please 
reopen if YARN-4855 is not the intended JIRA to track.

> the command "yarn rmadmin -replaceLabelsOnNode " dosn't check whether the 
> host is valid
> ---
>
> Key: YARN-5062
> URL: https://issues.apache.org/jira/browse/YARN-5062
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
> Environment: suse11sp3   hadoop 2.7.2
>Reporter: Liam
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5062) the command "yarn rmadmin -replaceLabelsOnNode " doesn't check whether the host is valid

2016-05-09 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-5062.
---
Resolution: Duplicate

> the command "yarn rmadmin -replaceLabelsOnNode " dosn't check whether the 
> host is valid
> ---
>
> Key: YARN-5062
> URL: https://issues.apache.org/jira/browse/YARN-5062
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
> Environment: suse11sp3   hadoop 2.7.2
>Reporter: Liam
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5063) Fail to launch AM continuously on a lost NM

2016-05-09 Thread Jun Gong (JIRA)
Jun Gong created YARN-5063:
--

 Summary: Fail to launch AM continuously on a lost NM
 Key: YARN-5063
 URL: https://issues.apache.org/jira/browse/YARN-5063
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Reporter: Jun Gong
Assignee: Jun Gong


If an NM node shuts down, the RM will not mark it as LOST until the liveness 
monitor finds it timed out. Before that, however, the RM might continuously 
allocate the AM on that NM.

We found this case in our cluster: the RM continuously allocated the same AM on 
a lost NM before marking it LOST, and the AMLauncher always failed because it 
could not connect to the lost NM. To solve the problem, we could add the NM to 
the AM blacklist if the RM failed to launch on it.
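
To make the proposal concrete, a toy sketch of the idea (hypothetical names, 
nothing from the actual RM code base):

{noformat}
import java.util.HashSet;
import java.util.Set;

// Remember nodes where the AM launch failed and skip them when placing
// subsequent AM attempts for the same application.
public class AmLaunchBlacklistSketch {

  private final Set<String> amBlacklist = new HashSet<>();

  // Called when AMLauncher cannot connect to the NM.
  void onAmLaunchFailed(String nodeId) {
    amBlacklist.add(nodeId);
  }

  // Consulted before placing the next AM attempt.
  boolean isUsableForAm(String nodeId) {
    return !amBlacklist.contains(nodeId);
  }

  // If the node re-registers with the RM, it can host AMs again.
  void onNodeRegistered(String nodeId) {
    amBlacklist.remove(nodeId);
  }
}
{noformat}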



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4994) Use MiniYARNCluster with try-with-resources in tests

2016-05-09 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated YARN-4994:
---
Attachment: YARN-4994.04.patch

Hi [~jzhuge],

I have made the recommended changes except for TestHadoopArchiveLogsRunner. 
There, fs and yarnCluster use the same config object at different times; in 
addition, the fs object is reused later, so it is not an obvious change.
bq. Could you rename the patches from HDFS-10287.* to YARN-4994.*?
The existing Hadoop QA comments already point to HDFS-10287*. I uploaded the 
new patch with the correct name. Do you want me to rename the old ones?

> Use MiniYARNCluster with try-with-resources in tests
> 
>
> Key: YARN-4994
> URL: https://issues.apache.org/jira/browse/YARN-4994
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Trivial
> Fix For: 2.7.0
>
> Attachments: HDFS-10287.01.patch, HDFS-10287.02.patch, 
> HDFS-10287.03.patch, YARN-4994.04.patch
>
>
> In tests, MiniYARNCluster is used with the following pattern:
> create a MiniYARNCluster instance in a try block and close it in a finally 
> block.
> [Try-with-resources|https://docs.oracle.com/javase/tutorial/essential/exceptions/tryResourceClose.html]
>  has been preferred over the pattern above since Java 7.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5015) Unify restart policies across AM and container restarts

2016-05-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15275996#comment-15275996
 ] 

Hadoop QA commented on YARN-5015:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 5s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
51s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 6s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 34s 
{color} | {color:green} trunk passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 41s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: patch generated 2 new + 
404 unchanged - 1 fixed = 406 total (was 405) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 4s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s 
{color} | {color:green} hadoop-yarn-api in the patch passed with JDK v1.8.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 3s 
{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.8.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 1s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch