[jira] [Updated] (YARN-8122) Component health threshold monitor

2018-04-17 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-8122:

Attachment: YARN-8122.003.patch

> Component health threshold monitor
> --
>
> Key: YARN-8122
> URL: https://issues.apache.org/jira/browse/YARN-8122
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-8122.001.patch, YARN-8122.002.patch, 
> YARN-8122.003.patch, YARN-8122.draft.patch
>
>
> Slider supported component health threshold monitoring with SLIDER-1246. It 
> would be good to have this feature for YARN Service too.






[jira] [Commented] (YARN-8122) Component health threshold monitor

2018-04-17 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441934#comment-16441934
 ] 

Gour Saha commented on YARN-8122:
-

Thank you [~billie.rinaldi] for your review and comments. I have incorporated 
all your comments in patch 003.

> Component health threshold monitor
> --
>
> Key: YARN-8122
> URL: https://issues.apache.org/jira/browse/YARN-8122
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-8122.001.patch, YARN-8122.002.patch, 
> YARN-8122.draft.patch
>
>
> Slider supported component health threshold monitoring with SLIDER-1246. It 
> would be good to have this feature for YARN Service too.






[jira] [Updated] (YARN-7966) Remove method AllocationConfiguration#getQueueAcl and related unit tests

2018-04-17 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7966:
---
Summary: Remove method AllocationConfiguration#getQueueAcl and related unit 
tests  (was: Remove AllocationConfiguration#getQueueAcl and related unit test)

> Remove method AllocationConfiguration#getQueueAcl and related unit tests
> 
>
> Key: YARN-7966
> URL: https://issues.apache.org/jira/browse/YARN-7966
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>Priority: Minor
>  Labels: newbie++
> Attachments: YARN-7966.001.patch, YARN-7966.002.patch
>
>
> AllocationConfiguration#getQueueAcl isn't needed anymore after YARN-4997; we 
> should remove it and its related unit test. All of its logic is reimplemented in 
> AllocationFileLoaderService#getDefaultPermissions. The class 
> AllocationConfiguration doesn't have any API annotation, so it is considered 
> private, and it is therefore OK to remove its public method.
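
As a side note on the compatibility argument above, a minimal sketch of Hadoop's audience-annotation convention (the class names below are illustrative; only InterfaceAudience is Hadoop's real marker):

{code:java}
import org.apache.hadoop.classification.InterfaceAudience;

// Classes meant for downstream consumers carry an explicit audience marker:
@InterfaceAudience.Public
class PublicFacingApi {
}

// AllocationConfiguration carries no such annotation, so by Hadoop
// convention it is treated as private API, and removing one of its
// public methods is not an incompatible change.
class UnannotatedInternalClass {
}
{code}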






[jira] [Updated] (YARN-8148) Update decimal values for queue capacities shown on queue status cli

2018-04-17 Thread Prabhu Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-8148:

Attachment: YARN-8148.1.patch

> Update decimal values for queue capacities shown on queue status cli
> 
>
> Key: YARN-8148
> URL: https://issues.apache.org/jira/browse/YARN-8148
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 3.0.0
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
> Attachments: YARN-8148.1.patch
>
>
> Capacities are shown with two decimal values in the RM UI as part of YARN-6182. 
> The queue status CLI is still showing only one decimal value.
> {code}
> [root@bigdata3 yarn]# yarn queue -status default
> Queue Information : 
> Queue Name : default
>   State : RUNNING
>   Capacity : 69.9%
>   Current Capacity : .0%
>   Maximum Capacity : 70.0%
>   Default Node Label expression : 
>   Accessible Node Labels : *
>   Preemption : enabled
> {code}
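
A minimal sketch of the formatting change implied here (format strings and variable names are illustrative, not taken from the actual patch):

{code:java}
public class CapacityFormatSketch {
  public static void main(String[] args) {
    float capacity = 0.699f; // fraction as reported by the scheduler

    // Current CLI behavior, one decimal place: prints "69.9%"
    System.out.println(String.format("%.1f%%", capacity * 100));

    // Aligned with the RM UI behavior from YARN-6182, two decimal places:
    // prints "69.90%"
    System.out.println(String.format("%.2f%%", capacity * 100));
  }
}
{code}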






[jira] [Commented] (YARN-8145) yarn rmadmin -getGroups doesn't return updated groups for user

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441924#comment-16441924
 ] 

genericqa commented on YARN-8145:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 67 unchanged - 1 fixed = 69 total (was 68) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m  5s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 43s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMEmbeddedElector 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8145 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919527/YARN-8145.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1e587a0cdfd0 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e4313e7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20382/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-7966) Remove AllocationConfiguration#getQueueAcl and related unit test

2018-04-17 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441914#comment-16441914
 ] 

Yufei Gu commented on YARN-7966:


+1 for the last patch. Will check in later.

> Remove AllocationConfiguration#getQueueAcl and related unit test
> 
>
> Key: YARN-7966
> URL: https://issues.apache.org/jira/browse/YARN-7966
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>Priority: Minor
>  Labels: newbie++
> Attachments: YARN-7966.001.patch, YARN-7966.002.patch
>
>
> AllocationConfiguration#getQueueAcl isn't needed anymore after YARN-4997; we 
> should remove it and its related unit test. All of its logic is reimplemented in 
> AllocationFileLoaderService#getDefaultPermissions. The class 
> AllocationConfiguration doesn't have any API annotation, so it is considered 
> private, and it is therefore OK to remove its public method.






[jira] [Commented] (YARN-7974) Allow updating application tracking url after registration

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441891#comment-16441891
 ] 

genericqa commented on YARN-7974:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
54s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
44s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  7m 
30s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 26s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 11 new + 385 unchanged - 0 fixed = 396 total (was 385) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
45s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
13s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 53s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 28m 
42s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 12s{color} 
| {color:red} hadoop-yarn-applications-distributedshell in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}206m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Commented] (YARN-7966) Remove AllocationConfiguration#getQueueAcl and related unit test

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441880#comment-16441880
 ] 

genericqa commented on YARN-7966:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 65m 
48s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7966 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919521/YARN-7966.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0f8ff1ef8917 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e4313e7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20381/testReport/ |
| Max. process+thread count | 871 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20381/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove AllocationConfiguration#getQueueAcl and 

[jira] [Commented] (YARN-8111) Simplify PlacementConstraints API by removing allocationTagToIntraApp

2018-04-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441856#comment-16441856
 ] 

Weiwei Yang commented on YARN-8111:
---

Hi [~kkaranasos]

As YARN-7142 is resolved, this one is no longer blocked for commit. I tried it: 
the v2 patch applies cleanly to both trunk and branch-3.1. Do you think we are 
ready to get this in? Thanks

> Simplify PlacementConstraints API by removing allocationTagToIntraApp
> -
>
> Key: YARN-8111
> URL: https://issues.apache.org/jira/browse/YARN-8111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Minor
> Attachments: YARN-8111.001.patch, YARN-8111.002.patch
>
>
> # Per discussion in YARN-8013, we agreed to disallow a null value for namespace 
> and default it to SELF.
>  # Remove PlacementConstraints#allocationTagToIntraApp






[jira] [Commented] (YARN-8145) yarn rmadmin -getGroups doesn't return updated groups for user

2018-04-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441836#comment-16441836
 ] 

Sunil G commented on YARN-8145:
---

Fixing test issue

> yarn rmadmin -getGroups doesn't return updated groups for user
> --
>
> Key: YARN-8145
> URL: https://issues.apache.org/jira/browse/YARN-8145
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8145.001.patch, YARN-8145.002.patch
>
>
> Post YARN-8062, we still see a cache-clearing issue for yarn rmadmin 
> -getGroups, which causes getGroups to return stale data for some time. 
>  






[jira] [Updated] (YARN-8145) yarn rmadmin -getGroups doesn't return updated groups for user

2018-04-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8145:
--
Attachment: YARN-8145.002.patch

> yarn rmadmin -getGroups doesn't return updated groups for user
> --
>
> Key: YARN-8145
> URL: https://issues.apache.org/jira/browse/YARN-8145
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8145.001.patch, YARN-8145.002.patch
>
>
> Post YARN-8062, we still see a cache-clearing issue for yarn rmadmin 
> -getGroups, which causes getGroups to return stale data for some time. 
>  






[jira] [Commented] (YARN-8161) ServiceState FLEX should be removed

2018-04-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441821#comment-16441821
 ] 

Eric Yang commented on YARN-8161:
-

State is required; otherwise, it is hard to figure out what operation will be 
performed.  Users may submit JSON that looks like this:

{code}
{
  "number_of_containers" : 3,
  "state" : "STOP"
}
{code}

This may become ambiguous and non-deterministic when the code is modified by 
developers who were not initially involved in this project.

> ServiceState FLEX should be removed
> ---
>
> Key: YARN-8161
> URL: https://issues.apache.org/jira/browse/YARN-8161
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Priority: Major
>
> ServiceState FLEX is not required to trigger flex up/down of containers and 
> should be removed






[jira] [Comment Edited] (YARN-8161) ServiceState FLEX should be removed

2018-04-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439811#comment-16439811
 ] 

Eric Yang edited comment on YARN-8161 at 4/18/18 3:04 AM:
--

[~gsaha] Thank you for filing this issue.  The current REST API performs 
operations such as:

{code}
curl -i --negotiate -u : -X PUT -H "Content-Type: application/json" 
-d@/tmp/flex.json 
http://eyang-2.openstacklocal:8088/app/v1/services/q1/components/ping
{code}

The payload of JSON shows:
{code}
{
"number_of_containers" : 3,
"state" : "FLEXING"
}
{code}

The state is in the progressive tense. When querying the GET status API, there 
is no clear indicator of whether the AM is actually performing the operation or 
the state was merely set by the user who issued the operation request.

It would be nice to make the API more responsive by separating the 
user-requested operation from the state change. For example, the payload could 
be:
{code}
{
"number_of_containers" : 3,
"state" : "FLEX"
}
{code}

The Application Master would switch the state to FLEXING while the operation is 
being worked on, so when the GET status API is invoked:
{code}
...
{
"number_of_containers" : 3,
"state" : "FLEXING"
}
...
{code}

When the operation is finished, the component reaches the STABLE state:
{code}
...
{
"number_of_containers" : 3,
"state" : "STABLE"
}
...
{code}

This state transition gives the user better feedback on whether an operation 
has merely been requested or the server is actively working on it.

The original code was written such that the -flex operation lives in the CLI 
code, which switches the state to FLEXING when the request reaches the REST 
API. For third-party developers who do not rely on the CLI code base, this can 
be confusing. I think more feedback from the community can help decide whether 
to simplify the code base by removing the in-progress state transition, or to 
add it consistently for better user feedback.
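
A sketch of the state split being proposed (a Java enum for illustration only; the real ServiceState/ComponentState enums contain more values than shown here):

{code:java}
// Separating the user-requested operation from the in-progress and
// terminal states, per the proposal above.
public enum ComponentStateSketch {
  FLEX,    // user asked to flex; request accepted but not yet acted upon
  FLEXING, // AM is actively adjusting the number of containers
  STABLE   // requested container count has been reached
}
{code}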


was (Author: eyang):
[~gsaha] Thank you for filing this issue.  The current REST API performs 
operations such as:

{code}
curl -i --negotiate -u : -X PUT -H "Content-Type: application/json" 
-d@/tmp/flex.json 
http://eyang-2.openstacklocal:8088/app/v1/services/q1/components/ping
{code}

The payload of JSON shows:
{code}
{
"name" : "ping",
"number_of_containers" : 3,
"run_privileged_container" : false,
"state" : "FLEXING"
}
{code}

The state is in the progressive tense. When querying the GET status API, there 
is no clear indicator of whether the AM is actually performing the operation or 
the state was merely set by the user who issued the operation request.

It would be nice to make the API more responsive by separating the 
user-requested operation from the state change. For example, the payload could 
be:
{code}
{
"name" : "ping",
"number_of_containers" : 3,
"run_privileged_container" : false,
"state" : "FLEX"
}
{code}

The Application Master would switch the state to FLEXING while the operation is 
being worked on, so when the GET status API is invoked:
{code}
...
{
"name" : "ping",
"number_of_containers" : 3,
"run_privileged_container" : false,
"state" : "FLEXING"
}
...
{code}

When the operation is finished, the component reaches the STABLE state:
{code}
...
{
"name" : "ping",
"number_of_containers" : 3,
"run_privileged_container" : false,
"state" : "STABLE"
}
...
{code}

This state transition gives the user better feedback on whether an operation 
has merely been requested or the server is actively working on it.

The original code was written such that the -flex operation lives in the CLI 
code, which switches the state to FLEXING when the request reaches the REST 
API. For third-party developers who do not rely on the CLI code base, this can 
be confusing. I think more feedback from the community can help decide whether 
to simplify the code base by removing the in-progress state transition, or to 
add it consistently for better user feedback.

> ServiceState FLEX should be removed
> ---
>
> Key: YARN-8161
> URL: https://issues.apache.org/jira/browse/YARN-8161
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Priority: Major
>
> ServiceState FLEX is not required to trigger flex up/down of containers and 
> should be removed






[jira] [Commented] (YARN-7966) Remove AllocationConfiguration#getQueueAcl and related unit test

2018-04-17 Thread Sen Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441783#comment-16441783
 ] 

Sen Zhao commented on YARN-7966:


Submitted patch 002 to remove unused imports.

> Remove AllocationConfiguration#getQueueAcl and related unit test
> 
>
> Key: YARN-7966
> URL: https://issues.apache.org/jira/browse/YARN-7966
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>Priority: Minor
>  Labels: newbie++
> Attachments: YARN-7966.001.patch, YARN-7966.002.patch
>
>
> AllocationConfiguration#getQueueAcl isn't needed anymore after YARN-4997; we 
> should remove it and its related unit test. All of its logic is reimplemented in 
> AllocationFileLoaderService#getDefaultPermissions. The class 
> AllocationConfiguration doesn't have any API annotation, so it is considered 
> private, and it is therefore OK to remove its public method.






[jira] [Updated] (YARN-7966) Remove AllocationConfiguration#getQueueAcl and related unit test

2018-04-17 Thread Sen Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sen Zhao updated YARN-7966:
---
Attachment: YARN-7966.002.patch

> Remove AllocationConfiguration#getQueueAcl and related unit test
> 
>
> Key: YARN-7966
> URL: https://issues.apache.org/jira/browse/YARN-7966
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>Priority: Minor
>  Labels: newbie++
> Attachments: YARN-7966.001.patch, YARN-7966.002.patch
>
>
> AllocationConfiguration#getQueueAcl isn't needed anymore after YARN-4997; we 
> should remove it and its related unit test. All of its logic is reimplemented in 
> AllocationFileLoaderService#getDefaultPermissions. The class 
> AllocationConfiguration doesn't have any API annotation, so it is considered 
> private, and it is therefore OK to remove its public method.






[jira] [Commented] (YARN-8108) RM metrics rest API throws GSSException in kerberized environment

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441780#comment-16441780
 ] 

genericqa commented on YARN-8108:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
53s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
17s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8108 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919508/YARN-8108.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux cdcaff5e65e5 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e4313e7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20380/testReport/ |
| Max. process+thread count | 334 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20380/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |



[jira] [Commented] (YARN-7654) Support ENTRY_POINT for docker container

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441738#comment-16441738
 ] 

genericqa commented on YARN-7654:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 11s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 110 unchanged - 0 fixed = 113 total (was 110) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 53s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
32s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
1s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7654 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919503/YARN-7654.015.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 8e1a4f794b0d 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build 

[jira] [Updated] (YARN-7974) Allow updating application tracking url after registration

2018-04-17 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-7974:

Description: 
Normally an application's tracking url is set on AM registration. We have a use 
case for updating the tracking url after registration (e.g. the UI is hosted on 
one of the containers).

The approach is for the AM to update the tracking url on its heartbeat to the 
RM, and to add a related API in AMRMClient.

  was:
Normally an application's tracking url is set on AM registration. We have a use 
case for updating the tracking url after registration (e.g. the UI is hosted on 
one of the containers).

Currently we added a {{updateTrackingUrl}} API to ApplicationClientProtocol.

We'll post the patch soon, assuming there are no issues with this.


> Allow updating application tracking url after registration
> --
>
> Key: YARN-7974
> URL: https://issues.apache.org/jira/browse/YARN-7974
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-7974.001.patch, YARN-7974.002.patch, 
> YARN-7974.003.patch
>
>
> Normally an application's tracking url is set on AM registration. We have a 
> use case for updating the tracking url after registration (e.g. the UI is 
> hosted on one of the containers).
> The approach is for the AM to update the tracking url on its heartbeat to the 
> RM, and to add a related API in AMRMClient.






[jira] [Commented] (YARN-7974) Allow updating application tracking url after registration

2018-04-17 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441731#comment-16441731
 ] 

Jonathan Hung commented on YARN-7974:
-

Attaching 003 patch, which:
 * Uses the AM heartbeat to update the tracking url
 * Stores the updated tracking url in the state store
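
A minimal sketch of what the AM-side usage might look like (updateTrackingUrl is the API this patch proposes; its exact signature, plus the host names and URLs below, are assumptions for illustration):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class TrackingUrlUpdateSketch {
  public static void main(String[] args) throws Exception {
    AMRMClient<ContainerRequest> rmClient = AMRMClient.createAMRMClient();
    rmClient.init(new Configuration());
    rmClient.start();

    // The tracking URL is normally fixed here, at registration time.
    rmClient.registerApplicationMaster("am-host", 0, "http://am-host:8080");

    // With the proposed API, the AM can repoint the URL later, e.g. once
    // the container hosting the UI is up; the new value rides on the next
    // allocate() heartbeat and, per this patch, is persisted to the RM
    // state store.
    rmClient.updateTrackingUrl("http://ui-container-host:8080");
    rmClient.allocate(0.0f);

    rmClient.stop();
  }
}
{code}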

> Allow updating application tracking url after registration
> --
>
> Key: YARN-7974
> URL: https://issues.apache.org/jira/browse/YARN-7974
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-7974.001.patch, YARN-7974.002.patch, 
> YARN-7974.003.patch
>
>
> Normally an application's tracking url is set on AM registration. We have a 
> use case for updating the tracking url after registration (e.g. the UI is 
> hosted on one of the containers).
> Currently we added a {{updateTrackingUrl}} API to ApplicationClientProtocol.
> We'll post the patch soon, assuming there are no issues with this.






[jira] [Updated] (YARN-7974) Allow updating application tracking url after registration

2018-04-17 Thread Jonathan Hung (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-7974:

Attachment: YARN-7974.003.patch

> Allow updating application tracking url after registration
> --
>
> Key: YARN-7974
> URL: https://issues.apache.org/jira/browse/YARN-7974
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-7974.001.patch, YARN-7974.002.patch, 
> YARN-7974.003.patch
>
>
> Normally an application's tracking url is set on AM registration. We have a 
> use case for updating the tracking url after registration (e.g. the UI is 
> hosted on one of the containers).
> Currently we added a {{updateTrackingUrl}} API to ApplicationClientProtocol.
> We'll post the patch soon, assuming there are no issues with this.






[jira] [Commented] (YARN-8108) RM metrics rest API throws GSSException in kerberized environment

2018-04-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441729#comment-16441729
 ] 

Eric Yang commented on YARN-8108:
-

Patch 001 is a brute-force approach that blankets the entire RM web server with 
RMAuthenticationFilter.  This seems to work and does not trigger the code path 
registered by the proxy server.
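
For illustration, a blanket registration could look roughly like this via Hadoop's FilterInitializer hook (a sketch of the idea only; the actual patch may wire RMAuthenticationFilter differently):

{code:java}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.http.FilterContainer;
import org.apache.hadoop.http.FilterInitializer;

public class BlanketAuthFilterInitializerSketch extends FilterInitializer {
  @Override
  public void initFilter(FilterContainer container, Configuration conf) {
    Map<String, String> params = new HashMap<>();
    // A global filter applies to every context path served by the RM web
    // server, including the proxy paths that previously bypassed it.
    container.addGlobalFilter("RMAuthenticationFilter",
        "org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter",
        params);
  }
}
{code}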


> RM metrics rest API throws GSSException in kerberized environment
> -
>
> Key: YARN-8108
> URL: https://issues.apache.org/jira/browse/YARN-8108
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kshitij Badani
>Priority: Major
> Attachments: YARN-8108.001.patch
>
>
> The test is trying to pull up metrics data from SHS after kinit'ing as 
> 'test_user'. It throws a GSSException as follows:
> {code:java}
> b2b460b80713|RUNNING: curl --silent -k -X GET -D 
> /hwqe/hadoopqe/artifacts/tmp-94845 --negotiate -u : 
> http://rm_host:8088/proxy/application_1518674952153_0070/metrics/json2018-02-15
>  07:15:48,757|INFO|MainThread|machine.py:194 - 
> run()||GUID=fc5a3266-28f8-4eed-bae2-b2b460b80713|Exit Code: 0
> 2018-02-15 07:15:48,758|INFO|MainThread|spark.py:1757 - 
> getMetricsJsonData()|metrics:
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> HTTP ERROR 403
> Problem accessing /proxy/application_1518674952153_0070/metrics/json. 
> Reason:
>  GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> {code}
> Root cause: the proxy server on the RM can't be supported in a Kerberos-enabled 
> cluster because AuthenticationFilter is applied twice in the Hadoop code (once 
> in HttpServer2 for the RM, and another instance from AmFilterInitializer for 
> the proxy server). This will require code changes to the 
> hadoop-yarn-server-web-proxy project.






[jira] [Commented] (YARN-6675) Add NM support to launch opportunistic containers based on overallocation

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441726#comment-16441726
 ] 

genericqa commented on YARN-6675:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
47s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 6 new + 232 unchanged - 1 fixed = 238 total (was 233) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 27s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerWithOverAllocation
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-6675 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919504/YARN-6675-YARN-1011.00.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5074ef3df833 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-1011 / 72534c3 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20377/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 

[jira] [Updated] (YARN-8108) RM metrics rest API throws GSSException in kerberized environment

2018-04-17 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8108:

Attachment: YARN-8108.001.patch

> RM metrics rest API throws GSSException in kerberized environment
> -
>
> Key: YARN-8108
> URL: https://issues.apache.org/jira/browse/YARN-8108
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kshitij Badani
>Priority: Major
> Attachments: YARN-8108.001.patch
>
>
> The test is trying to pull up metrics data from SHS after kinit'ing as 
> 'test_user'. It throws a GSSException as follows:
> {code:java}
> b2b460b80713|RUNNING: curl --silent -k -X GET -D 
> /hwqe/hadoopqe/artifacts/tmp-94845 --negotiate -u : 
> http://rm_host:8088/proxy/application_1518674952153_0070/metrics/json2018-02-15
>  07:15:48,757|INFO|MainThread|machine.py:194 - 
> run()||GUID=fc5a3266-28f8-4eed-bae2-b2b460b80713|Exit Code: 0
> 2018-02-15 07:15:48,758|INFO|MainThread|spark.py:1757 - 
> getMetricsJsonData()|metrics:
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> HTTP ERROR 403
> Problem accessing /proxy/application_1518674952153_0070/metrics/json. 
> Reason:
>  GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> {code}
> Root cause: the proxy server on the RM can't be supported in a Kerberos-enabled 
> cluster because AuthenticationFilter is applied twice in the Hadoop code (once 
> in HttpServer2 for the RM, and another instance from AmFilterInitializer for 
> the proxy server). This will require code changes to the 
> hadoop-yarn-server-web-proxy project.






[jira] [Commented] (YARN-8151) Yarn RM Epoch should wrap around

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441717#comment-16441717
 ] 

genericqa commented on YARN-8151:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 13s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 377 unchanged - 0 fixed = 378 total (was 377) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 58s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 44s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 24s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy
 |
|   | 
hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerNodeLabelUpdate
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8151 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919494/YARN-8151.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  

[jira] [Commented] (YARN-7654) Support ENTRY_POINT for docker container

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441711#comment-16441711
 ] 

genericqa commented on YARN-7654:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 21s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 111 unchanged - 0 fixed = 114 total (was 111) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
24s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
53s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-7654 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919498/YARN-7654.014.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 2b44c5ccfe84 3.13.0-139-generic 

[jira] [Created] (YARN-8172) Unit test coverage for C code of winutils

2018-04-17 Thread Xiao Liang (JIRA)
Xiao Liang created YARN-8172:


 Summary: Unit test coverage for C code of winutils
 Key: YARN-8172
 URL: https://issues.apache.org/jira/browse/YARN-8172
 Project: Hadoop YARN
  Issue Type: Test
Reporter: Xiao Liang


Currently there's no UT coverage for the C code of winutils, which should be 
added.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7654) Support ENTRY_POINT for docker container

2018-04-17 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7654:

Attachment: YARN-7654.015.patch

> Support ENTRY_POINT for docker container
> 
>
> Key: YARN-7654
> URL: https://issues.apache.org/jira/browse/YARN-7654
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-7654.001.patch, YARN-7654.002.patch, 
> YARN-7654.003.patch, YARN-7654.004.patch, YARN-7654.005.patch, 
> YARN-7654.006.patch, YARN-7654.007.patch, YARN-7654.008.patch, 
> YARN-7654.009.patch, YARN-7654.010.patch, YARN-7654.011.patch, 
> YARN-7654.012.patch, YARN-7654.013.patch, YARN-7654.014.patch, 
> YARN-7654.015.patch
>
>
> A Docker image may have an ENTRY_POINT predefined, but this is not supported 
> in the current implementation.  It would be nice if we could detect the 
> existence of {{launch_command}} and, based on this variable, launch the 
> docker container in different ways:
> h3. Launch command exists
> {code}
> docker run [image]:[version]
> docker exec [container_id] [launch_command]
> {code}
> h3. Use ENTRY_POINT
> {code}
> docker run [image]:[version]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6675) Add NM support to launch opportunistic containers based on overallocation

2018-04-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6675:
-
Attachment: YARN-6675-YARN-1011.00.patch

> Add NM support to launch opportunistic containers based on overallocation
> -
>
> Key: YARN-6675
> URL: https://issues.apache.org/jira/browse/YARN-6675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-6675-YARN-1011.00.patch, 
> YARN-6675-YARN-1011.prelim0.patch, YARN-6675-YARN-1011.prelim1.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-04-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441664#comment-16441664
 ] 

Eric Yang commented on YARN-8160:
-

On second thought, the IP address is managed by Docker.  Docker run may not 
give us the same IP address as the previous instance.  The hostname is likely 
preserved because we set the hostname ourselves.  This means launching a new 
container for upgrade is most likely the only implementation that we can 
support at this time.

> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-04-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441660#comment-16441660
 ] 

Eric Yang commented on YARN-8160:
-

[~csingh]  When I triggered an upgrade on a docker container for YARN-7939 
testing, the existing container shut down correctly, but relaunch did not bring 
up any docker container.  Shane's concern over docker start retaining all 
existing parameters is a real problem.  This could prevent upgrade from using 
relaunch to start the container, because some parameters might need to be 
changed.  On the other hand, we may have problems if the end user application 
encodes the hostname or IP address into the application.  Launching new 
containers might break existing user applications.  It is hard to say which 
approach is safer, but I am leaning toward launching new containers while 
trying to preserve the hostname / IP as much as possible.  This means we need a 
code path similar to relaunch, but one that does not depend on "docker start" 
and instead passes in a new docker run command.  
[~shaneku...@gmail.com] is this something that can be done by refactoring the 
relaunch code to accommodate the upgrade use case?
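
For illustration only, here is a minimal sketch (not the actual 
container-executor code path) of the difference between the two approaches, 
assuming we shell out to the docker CLI; the container name, hostname and 
image tag below are made-up placeholders:

{code:java}
import java.io.IOException;

public class DockerUpgradeSketch {

  // Relaunch path: "docker start" reuses every parameter recorded when the
  // container was first created, so an updated image or env cannot take effect.
  static void relaunch(String containerName)
      throws IOException, InterruptedException {
    new ProcessBuilder("docker", "start", containerName)
        .inheritIO().start().waitFor();
  }

  // Upgrade path: remove the old container and issue a fresh "docker run",
  // passing the same hostname so applications that cached it keep working.
  // The IP address is still assigned by Docker and may change.
  static void runNew(String containerName, String hostname, String newImage)
      throws IOException, InterruptedException {
    new ProcessBuilder("docker", "rm", "-f", containerName)
        .inheritIO().start().waitFor();
    new ProcessBuilder("docker", "run", "-d",
        "--name", containerName,
        "--hostname", hostname,
        newImage)
        .inheritIO().start().waitFor();
  }

  public static void main(String[] args) throws Exception {
    runNew("ctr-e01-000001", "comp-0.service.user.domain", "myapp:2.0");
  }
}
{code}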

> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7654) Support ENTRY_POINT for docker container

2018-04-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441610#comment-16441610
 ] 

Eric Yang commented on YARN-7654:
-

Patch 14 fixed the whitespace error.  Patch 13 seems to have been caught 
between daily snapshot builds and precommit builds, which made the pre-commit 
report inaccurate.

> Support ENTRY_POINT for docker container
> 
>
> Key: YARN-7654
> URL: https://issues.apache.org/jira/browse/YARN-7654
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-7654.001.patch, YARN-7654.002.patch, 
> YARN-7654.003.patch, YARN-7654.004.patch, YARN-7654.005.patch, 
> YARN-7654.006.patch, YARN-7654.007.patch, YARN-7654.008.patch, 
> YARN-7654.009.patch, YARN-7654.010.patch, YARN-7654.011.patch, 
> YARN-7654.012.patch, YARN-7654.013.patch, YARN-7654.014.patch
>
>
> A Docker image may have an ENTRY_POINT predefined, but this is not supported 
> in the current implementation.  It would be nice if we could detect the 
> existence of {{launch_command}} and, based on this variable, launch the 
> docker container in different ways:
> h3. Launch command exists
> {code}
> docker run [image]:[version]
> docker exec [container_id] [launch_command]
> {code}
> h3. Use ENTRY_POINT
> {code}
> docker run [image]:[version]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7654) Support ENTRY_POINT for docker container

2018-04-17 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7654:

Attachment: YARN-7654.014.patch

> Support ENTRY_POINT for docker container
> 
>
> Key: YARN-7654
> URL: https://issues.apache.org/jira/browse/YARN-7654
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-7654.001.patch, YARN-7654.002.patch, 
> YARN-7654.003.patch, YARN-7654.004.patch, YARN-7654.005.patch, 
> YARN-7654.006.patch, YARN-7654.007.patch, YARN-7654.008.patch, 
> YARN-7654.009.patch, YARN-7654.010.patch, YARN-7654.011.patch, 
> YARN-7654.012.patch, YARN-7654.013.patch, YARN-7654.014.patch
>
>
> A Docker image may have an ENTRY_POINT predefined, but this is not supported 
> in the current implementation.  It would be nice if we could detect the 
> existence of {{launch_command}} and, based on this variable, launch the 
> docker container in different ways:
> h3. Launch command exists
> {code}
> docker run [image]:[version]
> docker exec [container_id] [launch_command]
> {code}
> h3. Use ENTRY_POINT
> {code}
> docker run [image]:[version]
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8108) RM metrics rest API throws GSSException in kerberized environment

2018-04-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441597#comment-16441597
 ] 

Eric Yang commented on YARN-8108:
-

[~daryn] Hadoop 2.7.5 is unaffected by this bug, but that could be a result of 
the issue being masked by a Jetty difference.  In HttpServer2, there is a 
section of code that does universal initialization of AuthenticationFilter if 
Kerberos is enabled:  
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java#L1123

RMAuthenticationFilter extends DelegationTokenAuthenticationFilter, which in 
turn extends AuthenticationFilter.  Multiple copies of AuthenticationFilter 
conflict with each other when they have different initialization settings.  One 
possible solution is to ensure that DelegationTokenAuthenticationFilter is 
chained after SpnegoFilter, rather than extending AuthenticationFilter.  This 
would allow issuing a delegation token to an authorized user without rechecking 
the Kerberos ticket in the same request.
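
To illustrate the chaining idea, here is a minimal sketch using the plain 
servlet Filter API; the class name and the {{handleDelegationToken}} helper 
are hypothetical, and real delegation-token handling is omitted:

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

// Hypothetical filter illustrating the proposed chaining: if an upstream
// SPNEGO filter has already authenticated the request, skip re-authentication
// and go straight to delegation-token handling instead of replaying Kerberos.
public class DelegationTokenAfterSpnegoFilter implements Filter {

  @Override
  public void init(FilterConfig conf) {
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse resp,
      FilterChain chain) throws IOException, ServletException {
    HttpServletRequest httpReq = (HttpServletRequest) req;
    if (httpReq.getRemoteUser() != null) {
      // Already authenticated by the SPNEGO filter earlier in the chain;
      // a second Kerberos handshake here would trigger "Request is a replay".
      handleDelegationToken(httpReq);
    }
    chain.doFilter(req, resp);
  }

  private void handleDelegationToken(HttpServletRequest req) {
    // Token issuance/verification would go here; omitted in this sketch.
  }

  @Override
  public void destroy() {
  }
}
{code}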

> RM metrics rest API throws GSSException in kerberized environment
> -
>
> Key: YARN-8108
> URL: https://issues.apache.org/jira/browse/YARN-8108
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kshitij Badani
>Priority: Major
>
> The test is trying to pull metrics data from the SHS after kinit'ing as 
> 'test_user'. It throws a GSSException as follows:
> {code:java}
> b2b460b80713|RUNNING: curl --silent -k -X GET -D 
> /hwqe/hadoopqe/artifacts/tmp-94845 --negotiate -u : 
> http://rm_host:8088/proxy/application_1518674952153_0070/metrics/json2018-02-15
>  07:15:48,757|INFO|MainThread|machine.py:194 - 
> run()||GUID=fc5a3266-28f8-4eed-bae2-b2b460b80713|Exit Code: 0
> 2018-02-15 07:15:48,758|INFO|MainThread|spark.py:1757 - 
> getMetricsJsonData()|metrics:
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /proxy/application_1518674952153_0070/metrics/json. 
> Reason:
>  GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {code}
> Root cause: the proxy server on the RM can't be supported for a Kerberos 
> enabled cluster because AuthenticationFilter is applied twice in the Hadoop 
> code (once in HttpServer2 for the RM, and another instance from 
> AmFilterInitializer for the proxy server). This will require code changes to 
> the hadoop-yarn-server-web-proxy project.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8151) Yarn RM Epoch should wrap around

2018-04-17 Thread Young Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441583#comment-16441583
 ] 

Young Chen commented on YARN-8151:
--

Thanks for the feedback [~giovanni.fumarola]! New patch attached with tests 
added and the suggested fixes applied.
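
For context, wrap-around here just means the epoch counter stays within a 
bounded range instead of growing forever. A minimal sketch under that 
assumption; the {{EPOCH_RANGE}} constant is made up, since the real bound is 
configuration-driven:

{code:java}
public class EpochSketch {

  // Made-up bound for illustration; the real range comes from configuration.
  private static final long EPOCH_RANGE = 1_000_000L;

  private long epoch = 0;

  // Each RM restart bumps the epoch; wrapping with a modulo keeps it inside
  // the range instead of letting it grow without bound.
  public synchronized long incrementEpoch() {
    epoch = (epoch + 1) % EPOCH_RANGE;
    return epoch;
  }

  public static void main(String[] args) {
    EpochSketch s = new EpochSketch();
    for (int i = 0; i < 3; i++) {
      System.out.println(s.incrementEpoch());
    }
  }
}
{code}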

> Yarn RM Epoch should wrap around
> 
>
> Key: YARN-8151
> URL: https://issues.apache.org/jira/browse/YARN-8151
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Major
> Attachments: YARN-8151.01.patch, YARN-8151.01.patch, 
> YARN-8151.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8151) Yarn RM Epoch should wrap around

2018-04-17 Thread Young Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Young Chen updated YARN-8151:
-
Attachment: YARN-8151.02.patch

> Yarn RM Epoch should wrap around
> 
>
> Key: YARN-8151
> URL: https://issues.apache.org/jira/browse/YARN-8151
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Young Chen
>Assignee: Young Chen
>Priority: Major
> Attachments: YARN-8151.01.patch, YARN-8151.01.patch, 
> YARN-8151.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2674) Distributed shell AM may re-launch containers if RM work preserving restart happens

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441522#comment-16441522
 ] 

genericqa commented on YARN-2674:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m  6s{color} 
| {color:red} hadoop-yarn-applications-distributedshell in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-2674 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919398/YARN-2674.6.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ec49aeda246d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 7886037 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/20374/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20374/testReport/ |
| Max. process+thread count | 632 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 |
| Console output | 

[jira] [Commented] (YARN-8096) Wrong condition in AmIpFilter#getProxyAddresses() to update the proxy IP list

2018-04-17 Thread Oleksandr Shevchenko (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441478#comment-16441478
 ] 

Oleksandr Shevchenko commented on YARN-8096:


Thanks a lot [~elgoiri] and [~giovanni.fumarola] for the review and help in 
contributing.

> Wrong condition in AmIpFilter#getProxyAddresses() to update the proxy IP list
> -
>
> Key: YARN-8096
> URL: https://issues.apache.org/jira/browse/YARN-8096
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Oleksandr Shevchenko
>Assignee: Oleksandr Shevchenko
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8096.001.patch, YARN-8096.002.patch, 
> YARN-8096.003.patch
>
>
> In AmIpFilter#getProxyAddresses() we have the following condition:
> {code:java}
>  if (proxyAddresses == null || (lastUpdate + UPDATE_INTERVAL) >= now) { 
>//update RM address 
>  }
> {code}
> By design, the address should be updated if the last update was more than 5 
> minutes ago. But as we can see, this condition is inverted.
> Currently, the RM address is updated on every call. But once 5 minutes have 
> passed since the last update, the RM address will never be updated again. As 
> a result, we are always redirected to the fail page that was added by 
> YARN-4767, even if the network issue has since been resolved.
> So, we should change this condition to:
> {code:java}
>  if (proxyAddresses == null || (lastUpdate + UPDATE_INTERVAL) <= now) { 
>//update RM address 
>  }
> {code}
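
For illustration, a minimal self-contained sketch of the corrected time-gated 
refresh; the class and the {{resolveFromConf}} helper are made up, and only 
the condition mirrors the snippet above:

{code:java}
public class ProxyAddressCache {

  private static final long UPDATE_INTERVAL = 5 * 60 * 1000; // 5 minutes

  private String proxyAddresses; // simplified to a String for the sketch
  private long lastUpdate;

  public synchronized String getProxyAddresses() {
    long now = System.currentTimeMillis();
    // Refresh only when we have no value yet or the cached value is stale.
    if (proxyAddresses == null || lastUpdate + UPDATE_INTERVAL <= now) {
      proxyAddresses = resolveFromConf(); // placeholder for the real lookup
      lastUpdate = now;
    }
    return proxyAddresses;
  }

  private String resolveFromConf() {
    return "rm-host:8088"; // made-up value for illustration
  }
}
{code}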



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7494) Add muti node lookup support for better placement

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441469#comment-16441469
 ] 

genericqa commented on YARN-7494:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
49s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 34s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 24 new + 268 unchanged - 1 fixed = 292 total (was 269) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
19s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m 45s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}148m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Load of known null value in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getMultiNodePlacementPolicies()
  At CapacitySchedulerConfiguration.java:in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacitySchedulerConfiguration.getMultiNodePlacementPolicies()
  At CapacitySchedulerConfiguration.java:[line 2164] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueState |
|   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueParsing |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRedirectionErrorPage |
|   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueMappings |
|   | hadoop.yarn.server.resourcemanager.TestApplicationMasterService |

[jira] [Commented] (YARN-8161) ServiceState FLEX should be removed

2018-04-17 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441451#comment-16441451
 ] 

Gour Saha commented on YARN-8161:
-

A simple operation like flexing up/down cannot be this complex. That's why I am 
simplifying it to the extent that the state is not even required.

A service owner wants to change the number of containers running for one or 
more components of her service. The new number of containers is all that she 
needs to provide.

So for the API call of -
{code}
curl -i --negotiate -u : -X PUT -H "Content-Type: application/json" 
-d@/tmp/flex.json 
http://eyang-2.openstacklocal:8088/app/v1/services/q1/components/ping
{code}

the JSON payload required is as simple as -
{code}
{
"number_of_containers" : 3
}
{code}


> ServiceState FLEX should be removed
> ---
>
> Key: YARN-8161
> URL: https://issues.apache.org/jira/browse/YARN-8161
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Gour Saha
>Priority: Major
>
> ServiceState FLEX is not required to trigger flex up/down of containers and 
> should be removed



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8134) Support specifying node resources in SLS

2018-04-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441431#comment-16441431
 ] 

Íñigo Goiri commented on YARN-8134:
---

[^YARN-8134.003.patch] looks good.
The unit test testParseNodesFromNodeFile ran successfully.
+1
Committing to trunk.

> Support specifying node resources in SLS
> 
>
> Key: YARN-8134
> URL: https://issues.apache.org/jira/browse/YARN-8134
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8134.002.patch, YARN-8134.003.patch, YARN-8134.patch
>
>
> At present, all nodes have same resources in SLS. We need to add capability 
> to add different resources to different nodes in SLS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8096) Wrong condition in AmIpFilter#getProxyAddresses() to update the proxy IP list

2018-04-17 Thread JIRA

[ 
https://issues.apache.org/jira/browse/YARN-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441428#comment-16441428
 ] 

Íñigo Goiri commented on YARN-8096:
---

Thanks [~oshevchenko] for the patch and [~giovanni.fumarola] for the review.
Committed to trunk.

> Wrong condition in AmIpFilter#getProxyAddresses() to update the proxy IP list
> -
>
> Key: YARN-8096
> URL: https://issues.apache.org/jira/browse/YARN-8096
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Oleksandr Shevchenko
>Assignee: Oleksandr Shevchenko
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8096.001.patch, YARN-8096.002.patch, 
> YARN-8096.003.patch
>
>
> In AmIpFilter#getProxyAddresses() we have the following condition:
> {code:java}
>  if (proxyAddresses == null || (lastUpdate + UPDATE_INTERVAL) >= now) { 
>//update RM address 
>  }
> {code}
> By design, the address should be updated if the last update was more than 5 
> minutes ago. But as we can see, this condition is inverted.
> Currently, the RM address is updated on every call. But once 5 minutes have 
> passed since the last update, the RM address will never be updated again. As 
> a result, we are always redirected to the fail page that was added by 
> YARN-4767, even if the network issue has since been resolved.
> So, we should change this condition to:
> {code:java}
>  if (proxyAddresses == null || (lastUpdate + UPDATE_INTERVAL) <= now) { 
>//update RM address 
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-04-17 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441414#comment-16441414
 ] 

Chandni Singh commented on YARN-8160:
-

I looked at the {{ContainerImpl}} code and I think re-init already uses the 
re-launch logic.
As [~shaneku...@gmail.com] pointed out, re-init will first deactivate the 
existing container and then do a relaunch with a new {{ContainerLaunchContext}}. 

We do need to build a new CLC because upgrade lets the user modify the 
artifacts/env/configs of their service. A likely scenario is that the version 
of a (docker-based) component changed. I haven't tested with a docker-based 
app, but I think this re-init should work seamlessly in the docker case. If it 
doesn't, then that would be a bug.

I don't clearly understand [~eyang]'s comments on the improvement that is being 
proposed. 
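
For reference, a minimal sketch of building a fresh 
{{ContainerLaunchContext}} for an upgraded docker-based component via the 
public YARN records API; the env keys are the standard docker-runtime ones, 
while the image tag and launch command are made-up placeholders:

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;

public class UpgradeClcSketch {

  // Builds a new launch context carrying the upgraded docker image.
  static ContainerLaunchContext newContextForUpgrade() {
    Map<String, String> env = new HashMap<>();
    env.put("YARN_CONTAINER_RUNTIME_TYPE", "docker");
    env.put("YARN_CONTAINER_RUNTIME_DOCKER_IMAGE", "myapp:2.0");
    return ContainerLaunchContext.newInstance(
        Collections.emptyMap(),                 // local resources
        env,                                    // updated environment
        Collections.singletonList("sleep 90"),  // launch command (placeholder)
        null,                                   // service data
        null,                                   // tokens
        null);                                  // ACLs
  }
}
{code}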


> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8134) Support specifying node resources in SLS

2018-04-17 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441407#comment-16441407
 ] 

Giovanni Matteo Fumarola commented on YARN-8134:


Perfect. [~elgoiri] can you please commit?

> Support specifying node resources in SLS
> 
>
> Key: YARN-8134
> URL: https://issues.apache.org/jira/browse/YARN-8134
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8134.002.patch, YARN-8134.003.patch, YARN-8134.patch
>
>
> At present, all nodes have same resources in SLS. We need to add capability 
> to add different resources to different nodes in SLS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8096) Wrong condition in AmIpFilter#getProxyAddresses() to update the proxy IP list

2018-04-17 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441398#comment-16441398
 ] 

Giovanni Matteo Fumarola commented on YARN-8096:


Thanks [~oshevchenko] for the patch.

LGTM +1.

> Wrong condition in AmIpFilter#getProxyAddresses() to update the proxy IP list
> -
>
> Key: YARN-8096
> URL: https://issues.apache.org/jira/browse/YARN-8096
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Oleksandr Shevchenko
>Assignee: Oleksandr Shevchenko
>Priority: Major
> Attachments: YARN-8096.001.patch, YARN-8096.002.patch, 
> YARN-8096.003.patch
>
>
> In AmIpFilter#getProxyAddresses() we have the following condition:
> {code:java}
>  if (proxyAddresses == null || (lastUpdate + UPDATE_INTERVAL) >= now) { 
>//update RM address 
>  }
> {code}
> By design, the address should be updated if the last update was more than 5 
> minutes ago. But as we can see, this condition is inverted.
> Currently, the RM address is updated on every call. But once 5 minutes have 
> passed since the last update, the RM address will never be updated again. As 
> a result, we are always redirected to the fail page that was added by 
> YARN-4767, even if the network issue has since been resolved.
> So, we should change this condition to:
> {code:java}
>  if (proxyAddresses == null || (lastUpdate + UPDATE_INTERVAL) <= now) { 
>//update RM address 
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8145) yarn rmadmin -getGroups doesn't return updated groups for user

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441350#comment-16441350
 ] 

genericqa commented on YARN-8145:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 67 unchanged - 1 fixed = 69 total (was 68) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 52s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 56s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMAdminService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8145 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919449/YARN-8145.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 179245399c92 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1d6e43d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20371/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-8171) [UI2] AM Node link from attempt page should not redirect to new tab

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441340#comment-16441340
 ] 

genericqa commented on YARN-8171:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
37m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8171 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919459/YARN-8171.001.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux d48ab1f479b2 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1d6e43d |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20372/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [UI2] AM Node link from attempt page should not redirect to new tab
> ---
>
> Key: YARN-8171
> URL: https://issues.apache.org/jira/browse/YARN-8171
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8171.001.patch
>
>
> Avoid node link redirection to old UI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-7088.
--
Resolution: Fixed

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its launch, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7088:
-
Description: Currently, the start time in the old and new UI actually shows 
the app submission time. There should actually be two different fields; one for 
the app's submission and one for its launch, as well as the elapsed pending 
time between the two.  (was: Currently, the start time in the old and new UI 
actually shows the app submission time. There should actually be two different 
fields; one for the app's submission and one for its start, as well as the 
elapsed pending time between the two.)

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its launch, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441292#comment-16441292
 ] 

Haibo Chen commented on YARN-7088:
--

Oops, I did not refresh often enough.   I'll commit the Jira shortly. Thanks 
[~sunilg] for the review!

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441288#comment-16441288
 ] 

Haibo Chen commented on YARN-7088:
--

That's a good point, [~sunilg]. The launchTime per attempt should be added. 
[~kanwaljeets] can you file a follow-up Jira for that?

 

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7494) Add multi node lookup support for better placement

2018-04-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441269#comment-16441269
 ] 

Sunil G commented on YARN-7494:
---

Thanks [~cheersyang] for the comments. I'll address your comment in the next 
patch today. This patch is to run jenkins first and see if there are any issues. 
Thank you.

> Add multi node lookup support for better placement
> -
>
> Key: YARN-7494
> URL: https://issues.apache.org/jira/browse/YARN-7494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7494.001.patch, YARN-7494.002.patch, 
> YARN-7494.003.patch, YARN-7494.004.patch, YARN-7494.005.patch, 
> YARN-7494.006.patch, YARN-7494.007.patch, YARN-7494.v0.patch, 
> YARN-7494.v1.patch, multi-node-designProposal.png
>
>
> Instead of a single node, for effectiveness we can consider a multi node 
> lookup based on partition to start with.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7494) Add multi node lookup support for better placement

2018-04-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7494:
--
Attachment: YARN-7494.007.patch

> Add multi node lookup support for better placement
> -
>
> Key: YARN-7494
> URL: https://issues.apache.org/jira/browse/YARN-7494
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-7494.001.patch, YARN-7494.002.patch, 
> YARN-7494.003.patch, YARN-7494.004.patch, YARN-7494.005.patch, 
> YARN-7494.006.patch, YARN-7494.007.patch, YARN-7494.v0.patch, 
> YARN-7494.v1.patch, multi-node-designProposal.png
>
>
> Instead of a single node, for effectiveness we can consider a multi node 
> lookup based on partition to start with.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8171) [UI2] AM Node link from attempt page should not redirect to new tab

2018-04-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441259#comment-16441259
 ] 

Sunil G commented on YARN-8171:
---

[~rohithsharma], could you please review? Thanks

> [UI2] AM Node link from attempt page should not redirect to new tab
> ---
>
> Key: YARN-8171
> URL: https://issues.apache.org/jira/browse/YARN-8171
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8171.001.patch
>
>
> Avoid node link redirection to old UI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8171) [UI2] AM Node link from attempt page should not redirect to new tab

2018-04-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8171:
--
Attachment: YARN-8171.001.patch

> [UI2] AM Node link from attempt page should not redirect to new tab
> ---
>
> Key: YARN-8171
> URL: https://issues.apache.org/jira/browse/YARN-8171
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8171.001.patch
>
>
> Avoid node link redirection to old UI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8145) yarn rmadmin -getGroups doesn't return updated groups for user

2018-04-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reassigned YARN-8145:
-

Assignee: Sunil G  (was: Sumana Sathish)

> yarn rmadmin -getGroups doesn't return updated groups for user
> --
>
> Key: YARN-8145
> URL: https://issues.apache.org/jira/browse/YARN-8145
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8145.001.patch
>
>
> Post YARN-8062, we see that there is still a cache clearing issue for yarn rmadmin 
> -getGroups, which causes getGroups to return stale data for some time. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8145) yarn rmadmin -getGroups doesn't return updated groups for user

2018-04-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reassigned YARN-8145:
-

Assignee: Sumana Sathish  (was: Sunil G)

> yarn rmadmin -getGroups doesn't return updated groups for user
> --
>
> Key: YARN-8145
> URL: https://issues.apache.org/jira/browse/YARN-8145
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Sumana Sathish
>Priority: Major
> Attachments: YARN-8145.001.patch
>
>
> Post YARN-8062, we see that there is still a cache clearing issue for yarn rmadmin 
> -getGroups, which causes getGroups to return stale data for some time. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7966) Remove AllocationConfiguration#getQueueAcl and related unit test

2018-04-17 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441221#comment-16441221
 ] 

Yufei Gu commented on YARN-7966:


Thanks [~Sen Zhao] for working on this. Looks good to me overall. Could you 
remove the unused imports?

> Remove AllocationConfiguration#getQueueAcl and related unit test
> 
>
> Key: YARN-7966
> URL: https://issues.apache.org/jira/browse/YARN-7966
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Sen Zhao
>Priority: Minor
>  Labels: newbie++
> Attachments: YARN-7966.001.patch
>
>
> AllocationConfiguration#getQueueAcl isn't needed any more after YARN-4997. We 
> should remove it and its related unit test. All its logic is reimplemented in 
> AllocationFileLoaderService#getDefaultPermissions. Class 
> AllocationConfiguration doesn't have any API annotation, so it is considered 
> private, and it is therefore OK to remove its public method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Kanwaljeet Sachdev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441212#comment-16441212
 ] 

Kanwaljeet Sachdev commented on YARN-7088:
--

Thanks [~sunilg] for the inputs. Before resolving this Jira, I will file one as 
per your suggestion. There are a few more around ATS, related to this JIRA, that 
I have to file anyway.

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7996) Allow user supplied Docker client configurations with YARN native services

2018-04-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441213#comment-16441213
 ] 

Shane Kumpf commented on YARN-7996:
---

Thanks [~jlowe]!

> Allow user supplied Docker client configurations with YARN native services
> --
>
> Key: YARN-7996
> URL: https://issues.apache.org/jira/browse/YARN-7996
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-7996.001.patch, YARN-7996.002.patch, 
> YARN-7996.003.patch, YARN-7996.004.patch, YARN-7996.005.patch, 
> YARN-7996.006.patch
>
>
> YARN-5428 added support to distributed shell for supplying a Docker client 
> configuration at application submission time. The auth tokens within the 
> client configuration are then used to pull images from private Docker 
> repositories/registries. Add the same support to the YARN Native Services 
> framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8171) [UI2] AM Node link from attempt page should not redirect to new tab

2018-04-17 Thread Sunil G (JIRA)
Sunil G created YARN-8171:
-

 Summary: [UI2] AM Node link from attempt page should not redirect 
to new tab
 Key: YARN-8171
 URL: https://issues.apache.org/jira/browse/YARN-8171
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Reporter: Sunil G
Assignee: Sunil G


Avoid node link redirection to old UI



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8145) yarn rmadmin -getGroups doesn't return updated groups for user

2018-04-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441173#comment-16441173
 ] 

Sunil G commented on YARN-8145:
---

Fixed test case with the v2 patch for YARN-8062. Tested in a real cluster and 
works as expected.

cc [~rohithsharma] [~vinodkv] [~leftnoteasy]

> yarn rmadmin -getGroups doesn't return updated groups for user
> --
>
> Key: YARN-8145
> URL: https://issues.apache.org/jira/browse/YARN-8145
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8145.001.patch
>
>
> Post YARN-8062, we see that there is still a cache clearing issue for yarn rmadmin 
> -getGroups, which causes getGroups to return stale data for some time. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8145) yarn rmadmin -getGroups doesn't return updated groups for user

2018-04-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reassigned YARN-8145:
-

Assignee: Sunil G

> yarn rmadmin -getGroups doesn't return updated groups for user
> --
>
> Key: YARN-8145
> URL: https://issues.apache.org/jira/browse/YARN-8145
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Sunil G
>Priority: Major
> Attachments: YARN-8145.001.patch
>
>
> Post YARN-8062, we see that there is still a cache clearing issue for yarn rmadmin 
> -getGroups, which causes getGroups to return stale data for some time. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8145) yarn rmadmin -getGroups doesn't return updated groups for user

2018-04-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8145:
--
Attachment: YARN-8145.001.patch

> yarn rmadmin -getGroups doesn't return updated groups for user
> --
>
> Key: YARN-8145
> URL: https://issues.apache.org/jira/browse/YARN-8145
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-8145.001.patch
>
>
> Post YARN-8062, we see that there is still a cache clearing issue for yarn rmadmin 
> -getGroups, which causes getGroups to return stale data for some time. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8169) Review RackResolver.java

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441135#comment-16441135
 ] 

genericqa commented on YARN-8169:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: 
The patch generated 0 new + 2 unchanged - 3 fixed = 2 total (was 5) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
17s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8169 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919441/YARN-8169.1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d537321e303c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1d6e43d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20370/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20370/console |
| Powered by | Apache Yetus 

[jira] [Updated] (YARN-7996) Allow user supplied Docker client configurations with YARN native services

2018-04-17 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-7996:
-
Fix Version/s: 3.1.1

Ah, my apologies for the misunderstanding.  The wording of the description 
implied to me this was extending a feature to a new subsystem, so I committed 
it as a feature rather than a bugfix.

I committed this to branch-3.1 as well.


> Allow user supplied Docker client configurations with YARN native services
> --
>
> Key: YARN-7996
> URL: https://issues.apache.org/jira/browse/YARN-7996
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-7996.001.patch, YARN-7996.002.patch, 
> YARN-7996.003.patch, YARN-7996.004.patch, YARN-7996.005.patch, 
> YARN-7996.006.patch
>
>
> YARN-5428 added support to distributed shell for supplying a Docker client 
> configuration at application submission time. The auth tokens within the 
> client configuration are then used to pull images from private Docker 
> repositories/registries. Add the same support to the YARN Native Services 
> framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441115#comment-16441115
 ] 

Sunil G commented on YARN-7088:
---

Thanks for the clarification. I can see the reason for this now. However, could 
you please confirm whether we publish launchTime per attempt in a similar way as 
well. If not, we can add it in another jira. I think this Jira is good to go 
now. Thanks, and sorry for the trouble lately.

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7786) NullPointerException while launching ApplicationMaster

2018-04-17 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441114#comment-16441114
 ] 

Jason Lowe commented on YARN-7786:
--

Thanks for the report and patch!  My apologies for the delay in reviewing.

Rather than have createAMContainerLaunchContext start returning null and update 
the one place that calls it to now have to check for null before throwing, why 
not have createAMContainerLaunchContext throw directly?

"hbas" should be "has".

The launch method should not need to have protected visibility, as the unit 
test is overriding it for no reason other than to abuse the visibility.  A 
cleaner approach would be to move TestApplicationMasterLauncher into the same 
package as the class it is testing, 
org.apache.hadoop.yarn.server.resourcemanager.amlauncher.  That enables the 
method to be package-private yet still be accessible to the unit test and 
eliminates the need for the Private annotation (and arguably the 
VisibleForTesting annotation as well).

Why does the code suppress the resource warning rather than closing the RM when 
the test completes?
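
To make the first point concrete, here is a small self-contained illustration 
of the throw-directly pattern; the class and method names are hypothetical 
stand-ins, not the actual AMLauncher code.
{code:java}
import java.io.IOException;

class LaunchContextFactoryExample {
  static String createContext(String amContainerSpec) throws IOException {
    if (amContainerSpec == null) {
      // Fail at the point where the problem is detected...
      throw new IOException("application has no container launch context");
    }
    return "context(" + amContainerSpec + ")";
  }

  static void launch(String amContainerSpec) throws IOException {
    // ...so the single caller needs no null check before using the result.
    String context = createContext(amContainerSpec);
    System.out.println("launching with " + context);
  }
}
{code}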

> NullPointerException while launching ApplicationMaster
> --
>
> Key: YARN-7786
> URL: https://issues.apache.org/jira/browse/YARN-7786
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: lujie
>Assignee: lujie
>Priority: Minor
> Attachments: YARN-7786.patch, YARN-7786_1.patch, YARN-7786_2.patch, 
> YARN-7786_3.patch, resourcemanager.log
>
>
> Before launching the ApplicationMaster, send kill command to the job, then 
> some Null pointer appears:
> {code}
> 2017-11-25 21:27:25,333 INFO 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Error 
> launching appattempt_1511616410268_0001_01. Got exception: 
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.setupTokens(AMLauncher.java:205)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.createAMContainerLaunchContext(AMLauncher.java:193)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.launch(AMLauncher.java:112)
>  at 
> org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher.run(AMLauncher.java:304)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7189) Container-executor doesn't remove Docker containers that error out early

2018-04-17 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441108#comment-16441108
 ] 

Eric Badger commented on YARN-7189:
---

Thanks for the review and commit, [~jlowe]!

> Container-executor doesn't remove Docker containers that error out early
> 
>
> Key: YARN-7189
> URL: https://issues.apache.org/jira/browse/YARN-7189
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.9.0, 2.8.3, 3.0.1
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Fix For: 2.10.0, 2.9.2, 3.0.3
>
> Attachments: YARN-7189-b3.0.001.patch, 
> YARN-7189-branch-3.0.001.patch, YARN-7189-branch-3.0.002.patch, 
> YARN-7189-branch-3.0.003.patch, YARN-7189-branch-3.0.004.patch
>
>
> Once the docker run command is executed, the docker container is created 
> unless the return code is 125, meaning that the run command itself failed 
> (https://docs.docker.com/engine/reference/run/#exit-status). Any error that 
> happens after the docker run requires removing the container during cleanup.
> {noformat:title=container-executor.c:launch_docker_container_as_user}
>   snprintf(docker_command_with_binary, command_size, "%s %s", docker_binary, 
> docker_command);
>   fprintf(LOGFILE, "Launching docker container...\n");
>   FILE* start_docker = popen(docker_command_with_binary, "r");
> {noformat}
> This is fixed by YARN-5366, which changes how we remove containers. However, 
> that was committed into 3.1.0; 2.8, 2.9, and 3.0 are all affected.
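
As an illustrative Java rendering of that cleanup rule (the real logic lives in 
container-executor.c; the container name and image below are arbitrary 
examples):
{code:java}
import java.io.IOException;

class DockerCleanupRuleExample {
  public static void main(String[] args)
      throws IOException, InterruptedException {
    // Exit status 125 means "docker run" itself failed, so no container was
    // created; any other nonzero status leaves a container behind.
    int rc = new ProcessBuilder(
        "docker", "run", "--name", "example_ctr", "busybox", "false")
        .inheritIO().start().waitFor();
    if (rc != 0 && rc != 125) {
      // The launch failed after the container was created: remove it.
      new ProcessBuilder("docker", "rm", "example_ctr")
          .inheritIO().start().waitFor();
    }
  }
}
{code}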



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7996) Allow user supplied Docker client configurations with YARN native services

2018-04-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441100#comment-16441100
 ] 

Shane Kumpf commented on YARN-7996:
---

[~jlowe] - Would it be possible to cherry pick this to branch-3.1 as well? 
YARN-5428 introduced a bug and was committed in 3.1; this has the fix for that 
bug.

> Allow user supplied Docker client configurations with YARN native services
> --
>
> Key: YARN-7996
> URL: https://issues.apache.org/jira/browse/YARN-7996
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7996.001.patch, YARN-7996.002.patch, 
> YARN-7996.003.patch, YARN-7996.004.patch, YARN-7996.005.patch, 
> YARN-7996.006.patch
>
>
> YARN-5428 added support to distributed shell for supplying a Docker client 
> configuration at application submission time. The auth tokens within the 
> client configuration are then used to pull images from private Docker 
> repositories/registries. Add the same support to the YARN Native Services 
> framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8170) Caching Node Rack Location

2018-04-17 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created YARN-8170:
-

 Summary: Caching Node Rack Location
 Key: YARN-8170
 URL: https://issues.apache.org/jira/browse/YARN-8170
 Project: Hadoop YARN
  Issue Type: New Feature
  Components: applications, nodemanager
Affects Versions: 3.0.1
Reporter: BELUGA BEHR


When the MapReduce ApplicationMaster is trying to assign Mappers to Nodes, it 
loops all of the queued Mappers and looks up the ideal rack location of each 
Mapper.

Under the covers, the rack awareness script is called once per Mapper. The 
results do get cached, but only for as long as the ApplicationMaster 
exists. That means that the script gets called N times each time a new 
ApplicationMaster is launched. If the rack awareness script is complex or 
requires an external lookup, this can be a slow process and can even DDOS the 
external lookup source.

There are at least a couple of ways to tackle this...
 # Add a DNSToSwitchMapping implementation that caches in an external cache 
(i.e., memcached) instead of memory so that all ApplicationMasters can share 
the same cache and would rarely call the rack awareness script.
 # Like the shuffle service, add a new NodeManager auxiliary which exposes a 
rack lookup API so that the NodeManagers are responsible for caching the rack 
locations. This would also require a DNSToSwitchMapping implementation that 
interacts with this new service.
 # Other?

{code:java}
  String host = allocated.getNodeId().getHost();
  String rack = RackResolver.resolve(host).getNetworkLocation();
{code}
[https://github.com/apache/hadoop/blob/453d48bdfbb67ed3e66c33c4aef239c3d7bdd3bc/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/main/java/org/apache/hadoop/mapreduce/v2/app/rm/RMContainerAllocator.java#L1435-L1464]
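
As a rough illustration of option 1, a caching wrapper around the existing 
resolver might look like the sketch below. This is not existing Hadoop code: 
the class name is hypothetical, and the ConcurrentHashMap stands in for the 
proposed external store (e.g., memcached) that a real implementation would 
share across ApplicationMasters.
{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.hadoop.net.DNSToSwitchMapping;

public class SharedCacheSwitchMapping implements DNSToSwitchMapping {
  private final DNSToSwitchMapping delegate;
  private final ConcurrentHashMap<String, String> cache =
      new ConcurrentHashMap<>();

  public SharedCacheSwitchMapping(DNSToSwitchMapping delegate) {
    this.delegate = delegate;
  }

  @Override
  public List<String> resolve(List<String> names) {
    List<String> racks = new ArrayList<>(names.size());
    for (String name : names) {
      // computeIfAbsent invokes the expensive resolver only on a cache miss
      racks.add(cache.computeIfAbsent(name, n ->
          delegate.resolve(Collections.singletonList(n)).get(0)));
    }
    return racks;
  }

  @Override
  public void reloadCachedMappings() {
    cache.clear();
  }

  @Override
  public void reloadCachedMappings(List<String> names) {
    names.forEach(cache::remove);
  }
}
{code}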



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8160) Yarn Service Upgrade: Support upgrade of services that use docker containers

2018-04-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441045#comment-16441045
 ] 

Shane Kumpf edited comment on YARN-8160 at 4/17/18 3:43 PM:


{quote}It might help to refine relaunch logic and reuse some of the existing 
code.
{quote}
I think there are parts of Relaunch that could be useful in the upgrade case, 
but I believe some semantics will clash. The idea behind Relaunch is to reuse 
the same NodeManager, the container id, previous localized resources, and the 
working directory. In the Docker case, we go a step further and run a {{docker 
start}} on the existing container. IIUC, the current upgrade design uses 
re-init which cleans up the existing container prior to launching with a new 
CLC. These are quite different in their approaches.

Where I see Relaunch features being beneficial is the reuse of the container 
id/NM, which may allow for reusing the IP address previously assigned. Reuse of 
the existing work dir could also be useful in some cases, but I expect those 
cases are limited. However, I'm concerned that Relaunch skips localization and 
that {{docker start}} won't work. At this point, it seems like re-init might be 
more appropriate.

[~csingh] - based on your experience here, do you have a high level idea of how 
this will work in the Docker case?

 


was (Author: shaneku...@gmail.com):
{quote}It might help to refine relaunch logic and reuse some of the existing 
code.
{quote}
 

I think there are parts of Relaunch that could be useful in the upgrade case, 
but I believe some semantics will clash. The idea behind Relaunch is tore use 
the same NodeManager, the container id, localization, and the working 
directory. In the Docker case, we go a step further and run a {{docker start}} 
on the existing container. IIUC, the current upgrade design uses re-init which 
cleans up the existing container prior to launching with a new CLC. These are 
quite different in their approaches.

Where I see Relaunch features being beneficial is the reuse of the container 
id/NM, which may allow for reusing the IP address previously assigned. Reuse of 
the existing work dir could also be useful in some case, but I expect those 
cases are limited. However, I'm concerned about Relaunch skipping localization 
and {{docker start}} won't work. At this point, it seems like re-init might be 
more appropriate.

[~csingh] - based on your experience here, do you have a high level idea of how 
this will work in the Docker case?

 

> Yarn Service Upgrade: Support upgrade of services that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8160) Yarn Service Upgrade: Support upgrade of services that use docker containers

2018-04-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441045#comment-16441045
 ] 

Shane Kumpf commented on YARN-8160:
---

{quote}It might help to refine relaunch logic and reuse some of the existing 
code.
{quote}
 

I think there are parts of Relaunch that could be useful in the upgrade case, 
but I believe some semantics will clash. The idea behind Relaunch is to reuse 
the same NodeManager, the container id, localization, and the working 
directory. In the Docker case, we go a step further and run a {{docker start}} 
on the existing container. IIUC, the current upgrade design uses re-init which 
cleans up the existing container prior to launching with a new CLC. These are 
quite different in their approaches.

Where I see Relaunch features being beneficial is the reuse of the container 
id/NM, which may allow for reusing the IP address previously assigned. Reuse of 
the existing work dir could also be useful in some cases, but I expect those 
cases are limited. However, I'm concerned that Relaunch skips localization and 
that {{docker start}} won't work. At this point, it seems like re-init might be 
more appropriate.

[~csingh] - based on your experience here, do you have a high level idea of how 
this will work in the Docker case?

 

> Yarn Service Upgrade: Support upgrade of services that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8169) Review RackResolver.java

2018-04-17 Thread BELUGA BEHR (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated YARN-8169:
--
Attachment: YARN-8169.1.patch

> Review RackResolver.java
> 
>
> Key: YARN-8169
> URL: https://issues.apache.org/jira/browse/YARN-8169
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.0.1
>Reporter: BELUGA BEHR
>Priority: Trivial
> Attachments: YARN-8169.1.patch
>
>
> # Use SLF4J
> # Fix some checkstyle warnings
> # Minor clean up



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8169) Review RackResolver.java

2018-04-17 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created YARN-8169:
-

 Summary: Review RackResolver.java
 Key: YARN-8169
 URL: https://issues.apache.org/jira/browse/YARN-8169
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Affects Versions: 3.0.1
Reporter: BELUGA BEHR
 Attachments: YARN-8169.1.patch

# Use SLF4J
# Fix some checkstyle warnings
# Minor clean up
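
For reference, a sketch of the usual commons-logging to SLF4J migration shape 
(illustrative only; the class name is a stand-in, and the actual RackResolver 
changes are in the attached patch):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class Slf4jStyleExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(Slf4jStyleExample.class);

  private Slf4jStyleExample() {
    // utility class, no instances
  }

  static void logResolution(String host, String rack) {
    // Parameterized logging avoids string concatenation when the level
    // is disabled, removing the need for isDebugEnabled() guards.
    LOG.debug("Resolved {} to {}", host, rack);
  }
}
{code}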



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Kanwaljeet Sachdev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441041#comment-16441041
 ] 

Kanwaljeet Sachdev commented on YARN-7088:
--

[~sunilg], we are looking explicitly at the first launchTime and do not want to 
track subsequent attempt launches if the app restarts. Hence, when an attempt is 
launched, we record the time only if it is unset (i.e., zero). Any subsequent 
launches are not considered. 
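
For clarity, a minimal sketch of those set-once semantics; the class, field, 
and method names are illustrative stand-ins, not the actual RMAppImpl code.
{code:java}
// Hypothetical stand-in for the behavior described above.
class AppLaunchTimeExample {
  // 0 means "not yet launched"
  private volatile long launchTime = 0;

  void onAttemptLaunched(long now) {
    // Record only the first attempt's launch time; later attempts
    // (e.g. after an AM restart) must not overwrite it.
    if (launchTime == 0) {
      launchTime = now;
    }
  }

  long getLaunchTime() {
    return launchTime;
  }
}
{code}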

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7962) Race Condition When Stopping DelegationTokenRenewer

2018-04-17 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16441037#comment-16441037
 ] 

BELUGA BEHR commented on YARN-7962:
---

[~billie.rinaldi] [~wilfreds] Any additional concerns?  Please commit.

> Race Condition When Stopping DelegationTokenRenewer
> ---
>
> Key: YARN-7962
> URL: https://issues.apache.org/jira/browse/YARN-7962
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: YARN-7962.1.patch, YARN-7962.2.patch, YARN-7962.3.patch, 
> YARN-7962.4.patch
>
>
> [https://github.com/apache/hadoop/blob/69fa81679f59378fd19a2c65db8019393d7c05a2/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/DelegationTokenRenewer.java]
> {code:java}
>   private ThreadPoolExecutor renewerService;
>   private void processDelegationTokenRenewerEvent(
>   DelegationTokenRenewerEvent evt) {
> serviceStateLock.readLock().lock();
> try {
>   if (isServiceStarted) {
> renewerService.execute(new DelegationTokenRenewerRunnable(evt));
>   } else {
> pendingEventQueue.add(evt);
>   }
> } finally {
>   serviceStateLock.readLock().unlock();
> }
>   }
>   @Override
>   protected void serviceStop() {
> if (renewalTimer != null) {
>   renewalTimer.cancel();
> }
> appTokens.clear();
> allTokens.clear();
> this.renewerService.shutdown();
> {code}
> {code:java}
> 2018-02-21 11:18:16,253  FATAL org.apache.hadoop.yarn.event.AsyncDispatcher: 
> Error in dispatcher thread
> java.util.concurrent.RejectedExecutionException: Task 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer$DelegationTokenRenewerRunnable@39bddaf2
>  rejected from java.util.concurrent.ThreadPoolExecutor@5f71637b[Terminated, 
> pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 15487]
>   at 
> java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
>   at 
> java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.processDelegationTokenRenewerEvent(DelegationTokenRenewer.java:196)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer.applicationFinished(DelegationTokenRenewer.java:734)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.finishApplication(RMAppManager.java:199)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:424)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.RMAppManager.handle(RMAppManager.java:65)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:177)
>   at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> What I think is going on here is that the {{serviceStop}} method is not 
> setting the {{isServiceStarted}} flag to 'false'.
> Please update so that the {{serviceStop}} method grabs the 
> {{serviceStateLock}} and sets {{isServiceStarted}} to _false_, before 
> shutting down the {{renewerService}} thread pool, to avoid this condition.
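
A minimal sketch of that proposed change, assuming the field names from the 
snippet above (surrounding class omitted):
{code:java}
@Override
protected void serviceStop() {
  // Flip the flag under the write lock *before* shutting down the pool, so
  // no event can pass the read-locked isServiceStarted check and then be
  // submitted to a terminated executor.
  serviceStateLock.writeLock().lock();
  try {
    isServiceStarted = false;
  } finally {
    serviceStateLock.writeLock().unlock();
  }
  if (renewalTimer != null) {
    renewalTimer.cancel();
  }
  appTokens.clear();
  allTokens.clear();
  renewerService.shutdown();
}
{code}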



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440998#comment-16440998
 ] 

Sunil G commented on YARN-7088:
---

Ok. Thanks for pointing this.

I am still a bit confused though. It may be because of the variable name 
launchTime. An app is started when we create the RMApp, but per the scheduler, 
an app is started when an attempt is launched (post scheduling). Now I can see 
that the launchTime we discuss here is the same. Multiple attempts could 
happen, and one may succeed; still, launchTime will be the first attempt's 
launchTime. Are we looking for this time itself? For an attempt, I can 
understand what attempt launch means. For an app, what is the difference 
between app start and app launch? If we are clearly looking for the latter, 
then it's fine. 

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Kanwaljeet Sachdev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440962#comment-16440962
 ] 

Kanwaljeet Sachdev commented on YARN-7088:
--

Hi [~sunilg], thanks for the comment. I checked; the build succeeds on my 
laptop too. On the launchTime thing, the intent was to explicitly record the 
launchTime and not overwrite it across restarts. I hope that helps. Similar 
thoughts were shared by [~ayousufi] in the past. Here is the old message:

https://issues.apache.org/jira/browse/YARN-7088?focusedCommentId=16151335=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16151335

 

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8108) RM metrics rest API throws GSSException in kerberized environment

2018-04-17 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440940#comment-16440940
 ] 

Daryn Sharp commented on YARN-8108:
---

Analysis looks sound.  Agree each servlet should scope filters for itself, not 
globally.

I'm surprised this hasn't been found before.  Is this specific to 3.x?  Or does 
it exist in 2.x?  (I guess we haven't seen this bug due to an alternate auth for 
the RM.)

> RM metrics rest API throws GSSException in kerberized environment
> -
>
> Key: YARN-8108
> URL: https://issues.apache.org/jira/browse/YARN-8108
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kshitij Badani
>Priority: Major
>
> Test is trying to pull up metrics data from SHS after kiniting as 'test_user'
> It is throwing GSSException as follows
> {code:java}
> b2b460b80713|RUNNING: curl --silent -k -X GET -D 
> /hwqe/hadoopqe/artifacts/tmp-94845 --negotiate -u : 
> http://rm_host:8088/proxy/application_1518674952153_0070/metrics/json
> 2018-02-15 07:15:48,757|INFO|MainThread|machine.py:194 - 
> run()||GUID=fc5a3266-28f8-4eed-bae2-b2b460b80713|Exit Code: 0
> 2018-02-15 07:15:48,758|INFO|MainThread|spark.py:1757 - 
> getMetricsJsonData()|metrics:
> 
> 
> 
> Error 403 GSSException: Failure unspecified at GSS-API level 
> (Mechanism level: Request is a replay (34))
> 
> HTTP ERROR 403
> Problem accessing /proxy/application_1518674952153_0070/metrics/json. 
> Reason:
>  GSSException: Failure unspecified at GSS-API level (Mechanism level: 
> Request is a replay (34))
> 
> 
> {code}
> Root cause: the proxy server on the RM can't be supported for a 
> Kerberos-enabled cluster because AuthenticationFilter is applied twice in the 
> Hadoop code (once in HttpServer2 for the RM, and another instance from 
> AmFilterInitializer for the proxy server). This will require code changes to 
> the hadoop-yarn-server-web-proxy project.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440917#comment-16440917
 ] 

Sunil G commented on YARN-7088:
---

Locally my compilation is passing though.

One major doubt is on RMAppImpl
{code:java}
if(app.launchTime == 0) {
..
..
..
}{code}
If one attempt is launched and {{launchTime}} is set, then what if this attempt 
fails and a 2nd attempt is started? In that case, this value should be updated, 
correct?

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440890#comment-16440890
 ] 

Sunil G commented on YARN-7088:
---

Oh, sorry [~haibochen], I didn't see that it was committed and that you have 
reverted it as per my comment above.

Extremely sorry for the trouble caused lately.

 

I saw the jenkins compile error below; I'll also debug locally now:
{code:java}

[ERROR] COMPILATION ERROR : 
[INFO] -
[ERROR] 
/testptch/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/main/java/org/apache/hadoop/mapred/NotRunningJob.java:[90,28]
 error: no suitable method found for 
newInstance(ApplicationId,ApplicationAttemptId,String,String,String,String,int,,YarnApplicationState,String,String,int,int,int,FinalApplicationStatus,,String,float,String,)
[INFO] 1 error{code}

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields; one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7996) Allow user supplied Docker client configurations with YARN native services

2018-04-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440891#comment-16440891
 ] 

Shane Kumpf commented on YARN-7996:
---

Thanks [~jlowe] and [~billie.rinaldi]!

> Allow user supplied Docker client configurations with YARN native services
> --
>
> Key: YARN-7996
> URL: https://issues.apache.org/jira/browse/YARN-7996
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7996.001.patch, YARN-7996.002.patch, 
> YARN-7996.003.patch, YARN-7996.004.patch, YARN-7996.005.patch, 
> YARN-7996.006.patch
>
>
> YARN-5428 added support to distributed shell for supplying a Docker client 
> configuration at application submission time. The auth tokens within the 
> client configuration are then used to pull images from private Docker 
> repositories/registries. Add the same support to the YARN Native Services 
> framework.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440831#comment-16440831
 ] 

Haibo Chen commented on YARN-7088:
--

Sure.

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields: one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reopened YARN-7088:
--

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields: one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440823#comment-16440823
 ] 

Sunil G commented on YARN-7088:
---

Hi [~haibochen],

Could you please hold this for a day? I am reviewing the patch now.

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields: one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7088) Add application launch time to Resource Manager REST API

2018-04-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7088:
-
Summary: Add application launch time to Resource Manager REST API  (was: 
Fix application start time and add submit time to UIs)

> Add application launch time to Resource Manager REST API
> 
>
> Key: YARN-7088
> URL: https://issues.apache.org/jira/browse/YARN-7088
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: Abdullah Yousufi
>Assignee: Kanwaljeet Sachdev
>Priority: Major
> Attachments: YARN-7088.001.patch, YARN-7088.002.patch, 
> YARN-7088.003.patch, YARN-7088.004.patch, YARN-7088.005.patch, 
> YARN-7088.006.patch, YARN-7088.007.patch, YARN-7088.008.patch, 
> YARN-7088.009.patch, YARN-7088.010.patch, YARN-7088.011.patch, 
> YARN-7088.012.patch, YARN-7088.013.patch, YARN-7088.014.patch, 
> YARN-7088.015.patch, YARN-7088.016.patch, YARN-7088.017.patch
>
>
> Currently, the start time in the old and new UI actually shows the app 
> submission time. There should actually be two different fields: one for the 
> app's submission and one for its start, as well as the elapsed pending time 
> between the two.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2674) Distributed shell AM may re-launch containers if RM work preserving restart happens

2018-04-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440797#comment-16440797
 ] 

Shane Kumpf commented on YARN-2674:
---

Attaching a patch based on [~chenchun]'s previous work. I did not include the 
mini cluster based RM failover integration test as the allocation test appears 
sufficient here. The existing mini cluster based integration tests take quite a 
while to run already, so I'm not keen to add another. If others feel that test 
is needed, I can add it.
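
A minimal sketch of the fix idea from this JIRA's description (see below), assuming the AM drops matched requests as soon as allocations arrive; matchingRequestFor and launchIfNotAlreadyRunning are hypothetical helpers, while the real bookkeeping lives in AMRMClientImpl#remoteRequestsTable:
{code:java}
// Hedged sketch, not the committed patch. After each allocate() round, drop
// the ResourceRequests that were just satisfied, so that an AM re-register
// after an RM work-preserving restart does not re-request (and re-launch)
// containers that are already running or complete.
AllocateResponse response = amRMClient.allocate(progress);
for (Container container : response.getAllocatedContainers()) {
  ContainerRequest satisfied = matchingRequestFor(container); // hypothetical lookup
  if (satisfied != null) {
    amRMClient.removeContainerRequest(satisfied); // real AMRMClient API
  }
  launchIfNotAlreadyRunning(container); // hypothetical guard against re-launches
}
{code}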

> Distributed shell AM may re-launch containers if RM work preserving restart 
> happens
> ---
>
> Key: YARN-2674
> URL: https://issues.apache.org/jira/browse/YARN-2674
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Chun Chen
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: oct16-easy
> Attachments: YARN-2674.1.patch, YARN-2674.2.patch, YARN-2674.3.patch, 
> YARN-2674.4.patch, YARN-2674.5.patch, YARN-2674.6.patch
>
>
> Currently, if RM work preserving restart happens while distributed shell is 
> running, the distributed shell AM may re-launch all the containers, including 
> new/running/complete. We must make sure it won't re-launch the 
> running/complete containers.
> We need to remove allocated containers from 
> AMRMClientImpl#remoteRequestsTable once the AM receives them from the RM. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2674) Distributed shell AM may re-launch containers if RM work preserving restart happens

2018-04-17 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-2674:
--
Attachment: YARN-2674.6.patch

> Distributed shell AM may re-launch containers if RM work preserving restart 
> happens
> ---
>
> Key: YARN-2674
> URL: https://issues.apache.org/jira/browse/YARN-2674
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Chun Chen
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: oct16-easy
> Attachments: YARN-2674.1.patch, YARN-2674.2.patch, YARN-2674.3.patch, 
> YARN-2674.4.patch, YARN-2674.5.patch, YARN-2674.6.patch
>
>
> Currently, if RM work preserving restart happens while distributed shell is 
> running, the distributed shell AM may re-launch all the containers, including 
> new/running/complete. We must make sure it won't re-launch the 
> running/complete containers.
> We need to remove allocated containers from 
> AMRMClientImpl#remoteRequestsTable once the AM receives them from the RM. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8137) Parallelize node addition in SLS

2018-04-17 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440769#comment-16440769
 ] 

genericqa commented on YARN-8137:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 26s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
23s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | YARN-8137 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12919381/YARN-8137.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 399b51abb19c 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 427ad7e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20369/testReport/ |
| Max. process+thread count | 456 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20369/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Parallelize node addition in SLS
> 
>
>   

[jira] [Commented] (YARN-6629) NPE occurred when container allocation proposal is applied but its resource requests are removed before

2018-04-17 Thread carlhe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440738#comment-16440738
 ] 

carlhe commented on YARN-6629:
--

After I applied the patch and rebuilt, I hit the following problem: the
ResourceManager cannot start. The build command was "mvn clean package
-Pdist,native -DskipTests -Dtar".

2018-04-17 18:00:53,078 FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager
org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
        at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:354)
        at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:401)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1116)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1226)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1422)
Caused by: java.io.FileNotFoundException: webapps/cluster not found in CLASSPATH
        at org.apache.hadoop.http.HttpServer2.getWebAppsPath(HttpServer2.java:883)
        at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:391)
        at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:115)
        at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:336)
        at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:315)
        ... 5 more





> NPE occurred when container allocation proposal is applied but its resource 
> requests are removed before
> ---
>
> Key: YARN-6629
> URL: https://issues.apache.org/jira/browse/YARN-6629
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Fix For: 3.1.0, 2.10.0
>
> Attachments: YARN-6629.001.patch, YARN-6629.002.patch, 
> YARN-6629.003.patch, YARN-6629.004.patch, YARN-6629.005.patch, 
> YARN-6629.006.patch, YARN-6629.branch-2.001.patch
>
>
> I wrote a test case to reproduce another problem for branch-2 and found a new 
> NPE error. Log: 
> {code}
> FATAL event.EventDispatcher (EventDispatcher.java:run(75)) - Error in 
> handling event type NODE_UPDATE to the Event Dispatcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:446)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:516)
> at 
> org.apache.hadoop.yarn.client.TestNegativePendingResource$1.answer(TestNegativePendingResource.java:225)
> at 
> org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
> at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
> at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp$$EnhancerByMockitoWithCGLIB$$29eb8afc.apply()
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2396)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.submitResourceCommitRequest(CapacityScheduler.java:2281)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1247)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1236)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1325)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1112)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.nodeUpdate(CapacityScheduler.java:987)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1367)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:143)
> at 
> org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> 

[jira] [Updated] (YARN-8114) oozie using router submit job error

2018-04-17 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated YARN-8114:
---
Affects Version/s: 3.0.0

> oozie using router submit job error 
> 
>
> Key: YARN-8114
> URL: https://issues.apache.org/jira/browse/YARN-8114
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yiran Wu
>Priority: Major
> Attachments: error_using_spark_for_oozie.log
>
>
> 1. Oozie-dependent methods are not implemented.
>  2. When Oozie uses Spark, Spark uses the local $HADOOP_HOME/etc/hadoop 
> configuration files, so Spark will not connect to the Router, because of the 
> NodeManager configuration yarn.nodemanager.amrmproxy.address=0.0.0.0:8049
> {code:java}
>  call org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications 
> from 172.21.14.106:14410 Call#13 Retry#0
> org.apache.commons.lang.NotImplementedException: Code is not implemented
>   at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.getApplications(JDFederationClientInterceptor.java:643)
>   at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.getApplications(RouterClientRMService.java:354)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:234)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:425)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2090)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1803)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2086)
> {code}
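
A hedged sketch of one way the missing FederationClientInterceptor#getApplications could behave (fan out to the active subclusters and merge the reports); the federationFacade and getClientRMProxyForSubCluster names are assumptions, not the actual YARN Federation code, and this is not a committed implementation:
{code:java}
// Hedged sketch only - the trace above shows getApplications currently
// throwing NotImplementedException.
public GetApplicationsResponse getApplications(GetApplicationsRequest request)
    throws YarnException, IOException {
  List<ApplicationReport> merged = new ArrayList<>();
  for (SubClusterInfo subCluster :
      federationFacade.getSubClusters(true).values()) { // assumed helper fields
    ApplicationClientProtocol rmClient =
        getClientRMProxyForSubCluster(subCluster);      // assumed helper method
    merged.addAll(rmClient.getApplications(request).getApplicationList());
  }
  return GetApplicationsResponse.newInstance(merged);
}
{code}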



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6629) NPE occurred when container allocation proposal is applied but its resource requests are removed before

2018-04-17 Thread carlhe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440737#comment-16440737
 ] 

carlhe commented on YARN-6629:
--

After I applied the patch and rebuilt, I hit the following problem: the
ResourceManager cannot start. The build command was "mvn clean package
-Pdist,native -DskipTests -Dtar".

2018-04-17 18:00:53,078 FATAL org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting ResourceManager
org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
        at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:354)
        at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:401)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1116)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1226)
        at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1422)
Caused by: java.io.FileNotFoundException: webapps/cluster not found in CLASSPATH
        at org.apache.hadoop.http.HttpServer2.getWebAppsPath(HttpServer2.java:883)
        at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:391)
        at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:115)
        at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:336)
        at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:315)
        ... 5 more





> NPE occurred when container allocation proposal is applied but its resource 
> requests are removed before
> ---
>
> Key: YARN-6629
> URL: https://issues.apache.org/jira/browse/YARN-6629
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Fix For: 3.1.0, 2.10.0
>
> Attachments: YARN-6629.001.patch, YARN-6629.002.patch, 
> YARN-6629.003.patch, YARN-6629.004.patch, YARN-6629.005.patch, 
> YARN-6629.006.patch, YARN-6629.branch-2.001.patch
>
>
> I wrote a test case to reproduce another problem for branch-2 and found a new 
> NPE error. Log: 
> {code}
> FATAL event.EventDispatcher (EventDispatcher.java:run(75)) - Error in 
> handling event type NODE_UPDATE to the Event Dispatcher
> java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo.allocate(AppSchedulingInfo.java:446)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:516)
> at 
> org.apache.hadoop.yarn.client.TestNegativePendingResource$1.answer(TestNegativePendingResource.java:225)
> at 
> org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:31)
> at org.mockito.internal.MockHandler.handle(MockHandler.java:97)
> at 
> org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp$$EnhancerByMockitoWithCGLIB$$29eb8afc.apply()
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2396)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.submitResourceCommitRequest(CapacityScheduler.java:2281)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateOrReserveNewContainers(CapacityScheduler.java:1247)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainerOnSingleNode(CapacityScheduler.java:1236)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1325)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocateContainersToNode(CapacityScheduler.java:1112)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.nodeUpdate(CapacityScheduler.java:987)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:1367)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.handle(CapacityScheduler.java:143)
> at 
> org.apache.hadoop.yarn.event.EventDispatcher$EventProcessor.run(EventDispatcher.java:66)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> 

[jira] [Updated] (YARN-8114) oozie using router submit job error

2018-04-17 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated YARN-8114:
---
Description: 
1. Oozie-dependent methods are not implemented.
 2. When Oozie uses Spark, Spark uses the local $HADOOP_HOME/etc/hadoop 
configuration files, so Spark will not connect to the Router, because of the 
NodeManager configuration yarn.nodemanager.amrmproxy.address=0.0.0.0:8049
{code:java}
 call org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications 
from 172.21.14.106:14410 Call#13 Retry#0
org.apache.commons.lang.NotImplementedException: Code is not implemented
at 
org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.getApplications(JDFederationClientInterceptor.java:643)
at 
org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.getApplications(RouterClientRMService.java:354)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:234)
at 
org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:425)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2090)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1803)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2086)

{code}

  was:
1. Oozie-dependent methods are not implemented.
2. When Oozie uses Spark, Spark uses the local $HADOOP_HOME/etc/hadoop 
configuration files, so Spark will not connect to the Router.

{code:java}
 call org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications 
from 172.21.14.106:14410 Call#13 Retry#0
org.apache.commons.lang.NotImplementedException: Code is not implemented
at 
org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.getApplications(JDFederationClientInterceptor.java:643)
at 
org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.getApplications(RouterClientRMService.java:354)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:234)
at 
org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:425)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2090)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1803)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2086)

{code}



> oozie using router submit job error 
> 
>
> Key: YARN-8114
> URL: https://issues.apache.org/jira/browse/YARN-8114
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yiran Wu
>Priority: Major
> Attachments: error_using_spark_for_oozie.log
>
>
> 1. Oozie-dependent methods are not implemented.
>  2. When Oozie uses Spark, Spark uses the local $HADOOP_HOME/etc/hadoop 
> configuration files, so Spark will not connect to the Router, because of the 
> NodeManager configuration yarn.nodemanager.amrmproxy.address=0.0.0.0:8049
> {code:java}
>  call org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications 
> from 172.21.14.106:14410 Call#13 Retry#0
> org.apache.commons.lang.NotImplementedException: Code is not implemented
>   at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.getApplications(JDFederationClientInterceptor.java:643)
>   at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.getApplications(RouterClientRMService.java:354)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:234)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:425)
>   at 
> 

[jira] [Commented] (YARN-8134) Support specifying node resources in SLS

2018-04-17 Thread Abhishek Modi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440708#comment-16440708
 ] 

Abhishek Modi commented on YARN-8134:
-

Thanks [~giovanni.fumarola] for the review.

I kept only one node with explicit resources in nodes-with-resources, to 
verify a mix of nodes with explicit resources and nodes without.

> Support specifying node resources in SLS
> 
>
> Key: YARN-8134
> URL: https://issues.apache.org/jira/browse/YARN-8134
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8134.002.patch, YARN-8134.003.patch, YARN-8134.patch
>
>
> At present, all nodes have the same resources in SLS. We need to add the 
> capability to assign different resources to different nodes in SLS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8137) Parallelize node addition in SLS

2018-04-17 Thread Abhishek Modi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-8137:

Attachment: YARN-8137.002.patch

> Parallelize node addition in SLS
> 
>
> Key: YARN-8137
> URL: https://issues.apache.org/jira/browse/YARN-8137
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8137.001.patch, YARN-8137.002.patch
>
>
> Right now, nodes are added sequentially, which can take a long time when 
> there is a large number of nodes. With this change, nodes are added in 
> parallel, reducing the node addition time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8137) Parallelize node addition in SLS

2018-04-17 Thread Abhishek Modi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440700#comment-16440700
 ] 

Abhishek Modi commented on YARN-8137:
-

Thanks [~giovanni.fumarola] for reviewing the patch.
{quote}You have this instruction {{int threadPoolSize = Math.max(poolSize, 10);}}; instead of 10 you should have {{SLSConfiguration.RUNNER_POOL_SIZE_DEFAULT}}.
{quote}
Will fix it.
{quote}{{executorService.awaitTermination(1, TimeUnit.HOURS);}} I think the 
waiting time is too long. How did you come up with this value?
{quote}
I tried with 5000 nodes and it took around that much time without this fix. 
I will reduce it to 10 minutes with this fix.
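
A minimal sketch of the two changes discussed above, assuming the surrounding SLSRunner code; poolSize, executorService, and SLSConfiguration.RUNNER_POOL_SIZE_DEFAULT come from the quoted comments, while everything else (including addNode) is an assumption, not the committed patch:
{code:java}
// Hedged sketch only: use the configured default instead of the literal 10,
// and cap the shutdown wait at 10 minutes instead of 1 hour.
int threadPoolSize = Math.max(poolSize, SLSConfiguration.RUNNER_POOL_SIZE_DEFAULT);
ExecutorService executorService = Executors.newFixedThreadPool(threadPoolSize);
for (String node : nodes) {
  executorService.submit(() -> addNode(node)); // addNode stands in for the real per-node setup
}
executorService.shutdown();
executorService.awaitTermination(10, TimeUnit.MINUTES); // throws InterruptedException
{code}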

> Parallelize node addition in SLS
> 
>
> Key: YARN-8137
> URL: https://issues.apache.org/jira/browse/YARN-8137
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Abhishek Modi
>Assignee: Abhishek Modi
>Priority: Major
> Attachments: YARN-8137.001.patch
>
>
> Right now, nodes are added sequentially, which can take a long time when 
> there is a large number of nodes. With this change, nodes are added in 
> parallel, reducing the node addition time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8165) Incorrect queue name logging in AbstractContainerAllocator

2018-04-17 Thread Weiwei Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440699#comment-16440699
 ] 

Weiwei Yang commented on YARN-8165:
---

Hi [~sunilg]

Thanks for the cherry-picking. Per [~elgoiri], this should already be 
committed to branch-2. The fix versions look good to me. Thank you.

> Incorrect queue name logging in AbstractContainerAllocator
> --
>
> Key: YARN-8165
> URL: https://issues.apache.org/jira/browse/YARN-8165
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.3
>
> Attachments: YARN-8165.001.patch
>
>
> Found some incorrect logging messages in RM log
> {noformat}
> INFO Reserved container  application=application_1523849397637_0006 
> resource= 
> queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@38224385
>  cluster=
> ...
> INFO AbstractContainerAllocator:131 - assignedContainer application 
> attempt=appattempt_1523849397637_0006_01 container=null 
> queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@38224385
>  clusterResource= type=OFF_SWITCH 
> requestedPartition=
> {noformat}
> Note, value for queue is incorrect.
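
A hedged illustration of the bug class described above (the actual fix is in the attached patch): concatenating the allocator object itself invokes its default toString(), which is why queue shows up as RegularContainerAllocator@38224385.
{code:java}
// Hedged sketch, not the actual patch:
LOG.info("assignedContainer ... queue=" + this);                        // buggy: 'this' is the allocator
LOG.info("assignedContainer ... queue=" + application.getQueueName()); // intended: the queue's name (accessor assumed)
{code}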



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8165) Incorrect queue name logging in AbstractContainerAllocator

2018-04-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-8165:
--
Fix Version/s: 3.0.3
   3.1.1

> Incorrect queue name logging in AbstractContainerAllocator
> --
>
> Key: YARN-8165
> URL: https://issues.apache.org/jira/browse/YARN-8165
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.3
>
> Attachments: YARN-8165.001.patch
>
>
> Found some incorrect logging messages in RM log
> {noformat}
> INFO Reserved container  application=application_1523849397637_0006 
> resource= 
> queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@38224385
>  cluster=
> ...
> INFO AbstractContainerAllocator:131 - assignedContainer application 
> attempt=appattempt_1523849397637_0006_01 container=null 
> queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@38224385
>  clusterResource= type=OFF_SWITCH 
> requestedPartition=
> {noformat}
> Note, value for queue is incorrect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8165) Incorrect queue name logging in AbstractContainerAllocator

2018-04-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440692#comment-16440692
 ] 

Sunil G commented on YARN-8165:
---

Thanks folks for correcting this.

I cherry-picked this to branch-3.1 and branch-3.0 as well. It should be in 
branch-2 too, I think. [~cheersyang], [~elgoiri], could you please confirm?

> Incorrect queue name logging in AbstractContainerAllocator
> --
>
> Key: YARN-8165
> URL: https://issues.apache.org/jira/browse/YARN-8165
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Trivial
> Fix For: 2.10.0, 3.2.0
>
> Attachments: YARN-8165.001.patch
>
>
> Found some incorrect logging messages in RM log
> {noformat}
> INFO Reserved container  application=application_1523849397637_0006 
> resource= 
> queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@38224385
>  cluster=
> ...
> INFO AbstractContainerAllocator:131 - assignedContainer application 
> attempt=appattempt_1523849397637_0006_01 container=null 
> queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@38224385
>  clusterResource= type=OFF_SWITCH 
> requestedPartition=
> {noformat}
> Note, value for queue is incorrect.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8114) oozie using router submit job error

2018-04-17 Thread Yiran Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yiran Wu updated YARN-8114:
---
Attachment: (was: error_using_spark_for_oozie.log)

> oozie using router submit job error 
> 
>
> Key: YARN-8114
> URL: https://issues.apache.org/jira/browse/YARN-8114
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yiran Wu
>Priority: Major
> Attachments: error_using_spark_for_oozie.log
>
>
> 1. Oozie-dependent methods are not implemented.
> 2. When Oozie uses Spark, Spark uses the local $HADOOP_HOME/etc/hadoop 
> configuration files, so Spark will not connect to the Router.
> {code:java}
>  call org.apache.hadoop.yarn.api.ApplicationClientProtocolPB.getApplications 
> from 172.21.14.106:14410 Call#13 Retry#0
> org.apache.commons.lang.NotImplementedException: Code is not implemented
>   at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.getApplications(JDFederationClientInterceptor.java:643)
>   at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.getApplications(RouterClientRMService.java:354)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplications(ApplicationClientProtocolPBServiceImpl.java:234)
>   at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:425)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>   at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2090)
>   at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1803)
>   at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2086)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


