[jira] [Commented] (YARN-8665) Yarn Service Upgrade: Support cancelling upgrade

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624473#comment-16624473
 ] 

Hadoop QA commented on YARN-8665:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 44s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 443 unchanged - 4 fixed = 448 total (was 447) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
51s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 54s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 25m  
9s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
15s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
|   | hadoop.yarn.server.nodemanager.containermanager.TestNMProxy |
\\
\\

[jira] [Commented] (YARN-8696) [AMRMProxy] FederationInterceptor upgrade: home sub-cluster heartbeat async

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624468#comment-16624468
 ] 

Hadoop QA commented on YARN-8696:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
58s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
22s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 232 unchanged - 2 fixed = 232 total (was 234) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
18s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 81m 33s{color} 
| {color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
32s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m  
7s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 67m 
56s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
41s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}233m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a716388 |
| JIRA Issue | YARN-8696 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940811/YARN-8696-branch-2.v6.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 06937e493097 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 6056597 |
| maven | 

[jira] [Commented] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624440#comment-16624440
 ] 

Hadoop QA commented on YARN-8774:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
49s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} branch-2.8 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 40s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
24s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}108m 36s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ae3769f |
| JIRA Issue | YARN-8774 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940869/YARN-8774.branch-2.8.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux dc3eea5880b1 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.8 / 5522481 |
| maven | version: Apache Maven 3.0.5 |
| Default Java | 1.7.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/21941/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21941/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/21941/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 685 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21941/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Memory leak when CapacityScheduler allocates from reserved container 

[jira] [Commented] (YARN-8804) resourceLimits may be wrongly calculated when leaf-queue is blocked in cluster with 3+ level queues

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624398#comment-16624398
 ] 

Hadoop QA commented on YARN-8804:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 43 unchanged - 0 fixed = 44 total (was 43) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 71m 
42s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8804 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940860/YARN-8804.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux df910130241f 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4758b4b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/21938/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21938/testReport/ |
| Max. process+thread count | 890 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (YARN-8804) resourceLimits may be wrongly calculated when leaf-queue is blocked in cluster with 3+ level queues

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624395#comment-16624395
 ] 

Hadoop QA commented on YARN-8804:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 42 unchanged - 0 fixed = 43 total (was 42) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 70m 
45s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8804 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940860/YARN-8804.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e1783729632e 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 4758b4b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/21937/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21937/testReport/ |
| Max. process+thread count | 846 (vs. ulimit of 1) |
| modules | C: 

[jira] [Resolved] (YARN-7599) [GPG] ApplicationCleaner in Global Policy Generator

2018-09-21 Thread Botong Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang resolved YARN-7599.

Resolution: Fixed

> [GPG] ApplicationCleaner in Global Policy Generator
> ---
>
> Key: YARN-7599
> URL: https://issues.apache.org/jira/browse/YARN-7599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>  Labels: federation, gpg
> Attachments: YARN-7599-YARN-7402.v1.patch, 
> YARN-7599-YARN-7402.v2.patch, YARN-7599-YARN-7402.v3.patch, 
> YARN-7599-YARN-7402.v4.patch, YARN-7599-YARN-7402.v5.patch, 
> YARN-7599-YARN-7402.v6.patch, YARN-7599-YARN-7402.v7.patch, 
> YARN-7599-YARN-7402.v8.patch
>
>
> In Federation, we need a cleanup service for StateStore as well as Yarn 
> Registry. For the former, we need to remove old application records. For the 
> latter, failed and killed applications might leave records in the Yarn 
> Registry (see YARN-6128). We plan to handle both cleanup tasks in the 
> ApplicationCleaner in the GPG.
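
A rough sketch of the cleanup loop described above, under the assumption of a 
hypothetical facade over the FederationStateStore and the Yarn Registry; 
StateStoreFacade, RegistryFacade, and their methods below are illustrative names, 
not the actual GPG API:

{code:java}
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Illustrative periodic cleaner; StateStoreFacade and RegistryFacade are hypothetical. */
public class ApplicationCleanerSketch {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();
  private final StateStoreFacade stateStore; // hypothetical StateStore client
  private final RegistryFacade registry;     // hypothetical Yarn Registry client

  ApplicationCleanerSketch(StateStoreFacade stateStore, RegistryFacade registry) {
    this.stateStore = stateStore;
    this.registry = registry;
  }

  /** Run the cleanup pass on a fixed interval. */
  void start(long intervalMinutes) {
    scheduler.scheduleAtFixedRate(this::cleanupOnce,
        intervalMinutes, intervalMinutes, TimeUnit.MINUTES);
  }

  /** One cleanup pass over both stores. */
  void cleanupOnce() {
    // 1. StateStore: drop records of applications that finished long ago.
    for (String appId : stateStore.listFinishedApplications()) {
      stateStore.deleteApplicationRecord(appId);
    }
    // 2. Yarn Registry: drop entries whose application no longer exists
    //    (e.g. left behind by failed or killed applications, see YARN-6128).
    List<String> knownApps = stateStore.listAllApplications();
    for (String entry : registry.listApplicationEntries()) {
      if (!knownApps.contains(entry)) {
        registry.deleteEntry(entry);
      }
    }
  }

  /** Hypothetical interfaces standing in for the real StateStore and Registry clients. */
  interface StateStoreFacade {
    List<String> listFinishedApplications();
    List<String> listAllApplications();
    void deleteApplicationRecord(String appId);
  }
  interface RegistryFacade {
    List<String> listApplicationEntries();
    void deleteEntry(String appId);
  }
}
{code}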



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7599) [GPG] ApplicationCleaner in Global Policy Generator

2018-09-21 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624388#comment-16624388
 ] 

Botong Huang commented on YARN-7599:


Committed to YARN-7402. Thanks [~bibinchundatt] for the review!

> [GPG] ApplicationCleaner in Global Policy Generator
> ---
>
> Key: YARN-7599
> URL: https://issues.apache.org/jira/browse/YARN-7599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>  Labels: federation, gpg
> Attachments: YARN-7599-YARN-7402.v1.patch, 
> YARN-7599-YARN-7402.v2.patch, YARN-7599-YARN-7402.v3.patch, 
> YARN-7599-YARN-7402.v4.patch, YARN-7599-YARN-7402.v5.patch, 
> YARN-7599-YARN-7402.v6.patch, YARN-7599-YARN-7402.v7.patch, 
> YARN-7599-YARN-7402.v8.patch
>
>
> In Federation, we need a cleanup service for StateStore as well as Yarn 
> Registry. For the former, we need to remove old application records. For the 
> latter, failed and killed applications might leave records in the Yarn 
> Registry (see YARN-6128). We plan to handle both cleanup tasks in the 
> ApplicationCleaner in the GPG.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624382#comment-16624382
 ] 

Hadoop QA commented on YARN-8774:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
47s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m  6s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a716388 |
| JIRA Issue | YARN-8774 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940863/YARN-8774.branch-2.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux b18e00b39e19 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 6056597 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/21939/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21939/testReport/ |
| Max. process+thread count | 810 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21939/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Memory leak when CapacityScheduler allocates from reserved container with 
> non-default label
> 

[jira] [Commented] (YARN-8789) Add BoundedQueue to AsyncDispatcher

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624379#comment-16624379
 ] 

Hadoop QA commented on YARN-8789:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 10s{color} | {color:orange} root: The patch generated 9 new + 830 unchanged 
- 11 fixed = 839 total (was 841) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}120m 26s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
29s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 50s{color} 
| {color:red} hadoop-mapreduce-client-app in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}243m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMStoreCommands |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8789 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940841/YARN-8789.5.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 

[jira] [Updated] (YARN-8665) Yarn Service Upgrade: Support cancelling upgrade

2018-09-21 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8665:

Attachment: YARN-8665.004.patch

> Yarn Service Upgrade:  Support cancelling upgrade
> -
>
> Key: YARN-8665
> URL: https://issues.apache.org/jira/browse/YARN-8665
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8665.001.patch, YARN-8665.002.patch, 
> YARN-8665.003.patch, YARN-8665.004.patch
>
>
> When a service is upgraded without auto-finalization or express upgrade, the 
> upgrade can be cancelled. This gives the user the ability to test the upgrade 
> on a single instance and, if that doesn't go well, cancel it.
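
A minimal sketch of the workflow this enables, assuming a hypothetical client 
wrapper; ServiceUpgradeClient and its methods (initiateUpgrade, upgradeInstance, 
cancelUpgrade, finalizeUpgrade) are illustrative stand-ins, not the actual 
ServiceClient API:

{code:java}
/**
 * Illustrative flow only: upgrade one instance first, then either roll forward
 * or cancel. ServiceUpgradeClient and its methods are hypothetical stand-ins.
 */
public class CanaryUpgradeSketch {
  interface ServiceUpgradeClient {
    void initiateUpgrade(String serviceName, String newSpecPath); // no auto-finalize
    void upgradeInstance(String serviceName, String instanceName);
    boolean isInstanceHealthy(String serviceName, String instanceName);
    void finalizeUpgrade(String serviceName);
    void cancelUpgrade(String serviceName);                       // what YARN-8665 adds
  }

  static void canaryUpgrade(ServiceUpgradeClient client, String service, String spec) {
    client.initiateUpgrade(service, spec);        // service enters the upgrading state
    client.upgradeInstance(service, "comp-0");    // try the new spec on a single instance
    if (client.isInstanceHealthy(service, "comp-0")) {
      client.finalizeUpgrade(service);            // roll the upgrade forward
    } else {
      client.cancelUpgrade(service);              // back out to the previous version
    }
  }
}
{code}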



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-21 Thread Tao Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624367#comment-16624367
 ] 

Tao Yang edited comment on YARN-8774 at 9/22/18 12:17 AM:
--

Thanks [~eepayne] for the review.
Attached the v2 patch, which improves the UT by adding a check for the killed state.
The code differs somewhat in branch-2 and branch-2.8 (where the issue also exists), 
especially in branch-2.8.
Backported this patch to those two branches and attached the backports for review as well.


was (Author: tao yang):
Thanks [~eepayne] for the review.
Attached v2 patch to improve the UT with adding check for killed state.
There are somewhat different in branch-2.8 (it also exist in branch-2.8) and 
branch-2, especially in branch2.8.
Backported this patch to these two branches and attached them also for review.

> Memory leak when CapacityScheduler allocates from reserved container with 
> non-default label
> ---
>
> Key: YARN-8774
> URL: https://issues.apache.org/jira/browse/YARN-8774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0, 2.8.5
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8774.001.patch, YARN-8774.002.patch, 
> YARN-8774.branch-2.001.patch, YARN-8774.branch-2.8.001.patch
>
>
> The cause is that the RMContainerImpl instance of a reserved container loses its 
> node label expression; when the scheduler reserves containers for non-default 
> node-label requests, the container is wrongly added into 
> LeafQueue#ignorePartitionExclusivityRMContainers and never removed.
> To reproduce this memory leak:
> (1) create reserved container
> RegularContainerAllocator#doAllocation:  create RMContainerImpl instanceA 
> (nodeLabelExpression="")
> LeafQueue#allocateResource:  RMContainerImpl instanceA is put into  
> LeafQueue#ignorePartitionExclusivityRMContainers
> (2) allocate from reserved container
> RegularContainerAllocator#doAllocation: create RMContainerImpl instanceB 
> (nodeLabelExpression="test-label")
> (3) From now on, RMContainerImpl instanceA will be left in memory (kept in 
> LeafQueue#ignorePartitionExclusivityRMContainers) forever, until the RM is restarted.
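
A stripped-down sketch of the leak pattern described above, assuming a simplified 
container/queue model; the classes below are illustrative stand-ins for 
RMContainerImpl and LeafQueue, not the real scheduler code:

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Simplified stand-ins; not the real RMContainerImpl/LeafQueue classes. */
public class IgnorePartitionLeakSketch {
  static class Container {
    final String id;
    final String nodeLabelExpression;
    Container(String id, String label) { this.id = id; this.nodeLabelExpression = label; }
  }

  // Mirrors LeafQueue#ignorePartitionExclusivityRMContainers: label -> containers.
  static final Map<String, List<Container>> ignorePartitionExclusivityContainers =
      new HashMap<>();

  static void allocateResource(Container c) {
    // Bug condition: the reserved container carries an empty label expression,
    // so it is filed under the default partition and treated as
    // "ignore partition exclusivity", even though the request targeted a label.
    if (c.nodeLabelExpression.isEmpty()) {
      ignorePartitionExclusivityContainers
          .computeIfAbsent("", k -> new ArrayList<>()).add(c);
    }
  }

  static void releaseResource(Container c) {
    // Removal looks the container up by its own label; instanceB ("test-label")
    // never matches the entry stored for instanceA (""), so instanceA stays put.
    List<Container> list = ignorePartitionExclusivityContainers.get(c.nodeLabelExpression);
    if (list != null) { list.remove(c); }
  }

  public static void main(String[] args) {
    Container instanceA = new Container("container_1", "");           // (1) reservation
    allocateResource(instanceA);
    Container instanceB = new Container("container_1", "test-label"); // (2) allocation from it
    releaseResource(instanceB);                                       // (3) instanceA never removed
    System.out.println("Leaked entries: "
        + ignorePartitionExclusivityContainers.get("").size());       // prints 1
  }
}
{code}

Because the entry for instanceA is keyed by the empty label while the removal path 
only ever sees instanceB's "test-label", the map entry survives until the RM restarts.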



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-21 Thread Tao Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624367#comment-16624367
 ] 

Tao Yang commented on YARN-8774:


Thanks [~eepayne] for the review.
Attached the v2 patch, which improves the UT by adding a check for the killed state.
The code differs somewhat in branch-2 and branch-2.8 (where the issue also exists), 
especially in branch-2.8.
Backported this patch to those two branches and attached the backports for review as well.

> Memory leak when CapacityScheduler allocates from reserved container with 
> non-default label
> ---
>
> Key: YARN-8774
> URL: https://issues.apache.org/jira/browse/YARN-8774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0, 2.8.5
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8774.001.patch, YARN-8774.002.patch, 
> YARN-8774.branch-2.001.patch, YARN-8774.branch-2.8.001.patch
>
>
> The cause is that the RMContainerImpl instance of a reserved container loses its 
> node label expression; when the scheduler reserves containers for non-default 
> node-label requests, the container is wrongly added into 
> LeafQueue#ignorePartitionExclusivityRMContainers and never removed.
> To reproduce this memory leak:
> (1) create reserved container
> RegularContainerAllocator#doAllocation:  create RMContainerImpl instanceA 
> (nodeLabelExpression="")
> LeafQueue#allocateResource:  RMContainerImpl instanceA is put into  
> LeafQueue#ignorePartitionExclusivityRMContainers
> (2) allocate from reserved container
> RegularContainerAllocator#doAllocation: create RMContainerImpl instanceB 
> (nodeLabelExpression="test-label")
> (3) From now on, RMContainerImpl instanceA will be left in memory (kept in 
> LeafQueue#ignorePartitionExclusivityRMContainers) forever, until the RM is restarted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-21 Thread Tao Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-8774:
---
Attachment: YARN-8774.branch-2.001.patch

> Memory leak when CapacityScheduler allocates from reserved container with 
> non-default label
> ---
>
> Key: YARN-8774
> URL: https://issues.apache.org/jira/browse/YARN-8774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0, 2.8.5
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8774.001.patch, YARN-8774.002.patch, 
> YARN-8774.branch-2.001.patch, YARN-8774.branch-2.8.001.patch
>
>
> The cause is that the RMContainerImpl instance of a reserved container loses its 
> node label expression; when the scheduler reserves containers for non-default 
> node-label requests, the container is wrongly added into 
> LeafQueue#ignorePartitionExclusivityRMContainers and never removed.
> To reproduce this memory leak:
> (1) create reserved container
> RegularContainerAllocator#doAllocation:  create RMContainerImpl instanceA 
> (nodeLabelExpression="")
> LeafQueue#allocateResource:  RMContainerImpl instanceA is put into  
> LeafQueue#ignorePartitionExclusivityRMContainers
> (2) allocate from reserved container
> RegularContainerAllocator#doAllocation: create RMContainerImpl instanceB 
> (nodeLabelExpression="test-label")
> (3) From now on, RMContainerImpl instanceA will be left in memory (kept in 
> LeafQueue#ignorePartitionExclusivityRMContainers) forever, until the RM is restarted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-21 Thread Tao Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-8774:
---
Attachment: YARN-8774.branch-2.8.001.patch

> Memory leak when CapacityScheduler allocates from reserved container with 
> non-default label
> ---
>
> Key: YARN-8774
> URL: https://issues.apache.org/jira/browse/YARN-8774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0, 2.8.5
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8774.001.patch, YARN-8774.002.patch, 
> YARN-8774.branch-2.001.patch, YARN-8774.branch-2.8.001.patch
>
>
> The cause is that the RMContainerImpl instance of a reserved container loses its 
> node label expression; when the scheduler reserves containers for non-default 
> node-label requests, the container is wrongly added into 
> LeafQueue#ignorePartitionExclusivityRMContainers and never removed.
> To reproduce this memory leak:
> (1) create reserved container
> RegularContainerAllocator#doAllocation:  create RMContainerImpl instanceA 
> (nodeLabelExpression="")
> LeafQueue#allocateResource:  RMContainerImpl instanceA is put into  
> LeafQueue#ignorePartitionExclusivityRMContainers
> (2) allocate from reserved container
> RegularContainerAllocator#doAllocation: create RMContainerImpl instanceB 
> (nodeLabelExpression="test-label")
> (3) From now on, RMContainerImpl instanceA will be left in memory (kept in 
> LeafQueue#ignorePartitionExclusivityRMContainers) forever, until the RM is restarted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-21 Thread Tao Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-8774:
---
Attachment: YARN-8774.002.patch

> Memory leak when CapacityScheduler allocates from reserved container with 
> non-default label
> ---
>
> Key: YARN-8774
> URL: https://issues.apache.org/jira/browse/YARN-8774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0, 2.8.5
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8774.001.patch, YARN-8774.002.patch
>
>
> The cause is that the RMContainerImpl instance of a reserved container loses its 
> node label expression; when the scheduler reserves containers for non-default 
> node-label requests, the container is wrongly added into 
> LeafQueue#ignorePartitionExclusivityRMContainers and never removed.
> To reproduce this memory leak:
> (1) create reserved container
> RegularContainerAllocator#doAllocation:  create RMContainerImpl instanceA 
> (nodeLabelExpression="")
> LeafQueue#allocateResource:  RMContainerImpl instanceA is put into  
> LeafQueue#ignorePartitionExclusivityRMContainers
> (2) allocate from reserved container
> RegularContainerAllocator#doAllocation: create RMContainerImpl instanceB 
> (nodeLabelExpression="test-label")
> (3) From now on, RMContainerImpl instanceA will be left in memory (kept in 
> LeafQueue#ignorePartitionExclusivityRMContainers) forever, until the RM is restarted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6456) Allow administrators to set a single ContainerRuntime for all containers

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624355#comment-16624355
 ] 

Hadoop QA commented on YARN-6456:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
18s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 55s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| 

[jira] [Commented] (YARN-8734) Readiness check for remote service

2018-09-21 Thread Gour Saha (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624346#comment-16624346
 ] 

Gour Saha commented on YARN-8734:
-

Yup, it would be great to get [~billie.rinaldi]'s thoughts on the naming as well.

Actually, all properties including dependencies should be under the properties 
section. That's how it is for Component also. Please re-check. I hope I am not 
missing something.

> Readiness check for remote service
> --
>
> Key: YARN-8734
> URL: https://issues.apache.org/jira/browse/YARN-8734
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: Dependency check vs.pdf, YARN-8734.001.patch, 
> YARN-8734.002.patch, YARN-8734.003.patch, YARN-8734.004.patch, 
> YARN-8734.005.patch
>
>
> When a service is deploying, it can have remote service dependencies.  It 
> would be nice to describe ZooKeeper as a dependency, wait until that service 
> has reached a stable state, and then deploy HBase.
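
A minimal sketch of such a readiness gate, assuming a hypothetical probe 
interface; RemoteServiceProbe and deployWhenReady are illustrative names, not the 
actual yarn-native-services dependency/readiness API:

{code:java}
import java.util.List;
import java.util.concurrent.TimeUnit;

/** Illustrative only; RemoteServiceProbe and deploy() are hypothetical stand-ins. */
public class RemoteDependencySketch {
  interface RemoteServiceProbe {
    String name();
    boolean isStable(); // e.g. the remote service reports a stable state
  }

  /** Block until every declared remote dependency is stable, then deploy. */
  static void deployWhenReady(List<RemoteServiceProbe> dependencies, Runnable deploy)
      throws InterruptedException {
    for (RemoteServiceProbe dep : dependencies) {
      while (!dep.isStable()) {
        System.out.println("Waiting for remote dependency " + dep.name());
        TimeUnit.SECONDS.sleep(10); // simple polling; a real check would also time out
      }
    }
    deploy.run(); // e.g. launch the HBase service once ZooKeeper is stable
  }
}
{code}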



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-21 Thread Tao Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-8774:
---
Attachment: (was: YARN-8774.branch-2.8.001.patch)

> Memory leak when CapacityScheduler allocates from reserved container with 
> non-default label
> ---
>
> Key: YARN-8774
> URL: https://issues.apache.org/jira/browse/YARN-8774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0, 2.8.5
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8774.001.patch
>
>
> The cause is that the RMContainerImpl instance of a reserved container loses 
> its node label expression. When the scheduler reserves containers for 
> non-default node-label requests, the instance is wrongly added into 
> LeafQueue#ignorePartitionExclusivityRMContainers and never removed.
> To reproduce this memory leak:
> (1) Create the reserved container:
> RegularContainerAllocator#doAllocation: creates RMContainerImpl instanceA 
> (nodeLabelExpression="")
> LeafQueue#allocateResource: RMContainerImpl instanceA is put into 
> LeafQueue#ignorePartitionExclusivityRMContainers
> (2) Allocate from the reserved container:
> RegularContainerAllocator#doAllocation: creates RMContainerImpl instanceB 
> (nodeLabelExpression="test-label")
> (3) From now on, RMContainerImpl instanceA is left in memory (kept in 
> LeafQueue#ignorePartitionExclusivityRMContainers) until the RM is restarted.
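
For illustration, a minimal sketch of the leak pattern described above (the map 
name mirrors the description; everything else is simplified and is not the 
actual CapacityScheduler code):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the leak described above; not the real CapacityScheduler
// classes. The reserved container arrives with an empty nodeLabelExpression,
// so it is filed under the wrong partition in the "ignore partition
// exclusivity" map and is never found again for removal.
class IgnorePartitionExclusivityLeakSketch {

  static class RMContainerSketch {
    final String containerId;
    final String nodeLabelExpression;

    RMContainerSketch(String containerId, String nodeLabelExpression) {
      this.containerId = containerId;
      this.nodeLabelExpression = nodeLabelExpression;
    }
  }

  // Mirrors LeafQueue#ignorePartitionExclusivityRMContainers: label -> containers
  final Map<String, List<RMContainerSketch>> ignorePartitionExclusivityRMContainers =
      new ConcurrentHashMap<>();

  void allocateResource(RMContainerSketch reserved) {
    // instanceA arrives here with nodeLabelExpression="" instead of
    // "test-label"; it is added under the empty label and, once allocation
    // later switches to instanceB, nothing ever removes it.
    ignorePartitionExclusivityRMContainers
        .computeIfAbsent(reserved.nodeLabelExpression, k -> new ArrayList<>())
        .add(reserved);
  }
}
{code}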



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8734) Readiness check for remote service

2018-09-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624340#comment-16624340
 ] 

Eric Yang edited comment on YARN-8734 at 9/21/18 11:54 PM:
---

{quote}In that case may be a simpler approach will be to call this property 
"dependencies".{quote}

OK, will rename accordingly if Billie is OK with this.  I think it was renamed 
to remote-dependencies based on her feedback earlier.

{quote}Is remote_service_dependencies defined outside the properties section in 
YAML swagger spec?{quote}

Yes, component dependencies are also outside of the component properties in 
the component section.  I think this is aligned correctly.


was (Author: eyang):
{quote}In that case may be a simpler approach will be to call this property 
"dependencies".{quote}

OK, will rename accordingly.

{quote}Is remote_service_dependencies defined outside the properties section in 
YAML swagger spec?{quote}

Yes, component dependencies are also outside of the component properties in 
the component section.  I think this is aligned correctly.

> Readiness check for remote service
> --
>
> Key: YARN-8734
> URL: https://issues.apache.org/jira/browse/YARN-8734
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: Dependency check vs.pdf, YARN-8734.001.patch, 
> YARN-8734.002.patch, YARN-8734.003.patch, YARN-8734.004.patch, 
> YARN-8734.005.patch
>
>
> When a service is deploying, it can have a remote service dependency.  It 
> would be nice to describe ZooKeeper as a dependent service and, once that 
> service has reached a stable state, then deploy HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8734) Readiness check for remote service

2018-09-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624340#comment-16624340
 ] 

Eric Yang commented on YARN-8734:
-

{quote}In that case may be a simpler approach will be to call this property 
"dependencies".{quote}

OK, will rename accordingly.

{quote}Is remote_service_dependencies defined outside the properties section in 
YAML swagger spec?{quote}

Yes, component dependencies are also outside of the component properties in 
the component section.  I think this is aligned correctly.

> Readiness check for remote service
> --
>
> Key: YARN-8734
> URL: https://issues.apache.org/jira/browse/YARN-8734
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: Dependency check vs.pdf, YARN-8734.001.patch, 
> YARN-8734.002.patch, YARN-8734.003.patch, YARN-8734.004.patch, 
> YARN-8734.005.patch
>
>
> When a service is deploying, it can have a remote service dependency.  It 
> would be nice to describe ZooKeeper as a dependent service and, once that 
> service has reached a stable state, then deploy HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-21 Thread Tao Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-8774:
---
Attachment: (was: YARN-8774.branch-2.001.patch)

> Memory leak when CapacityScheduler allocates from reserved container with 
> non-default label
> ---
>
> Key: YARN-8774
> URL: https://issues.apache.org/jira/browse/YARN-8774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0, 2.8.5
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8774.001.patch, YARN-8774.branch-2.8.001.patch
>
>
> The cause is that the RMContainerImpl instance of a reserved container loses 
> its node label expression. When the scheduler reserves containers for 
> non-default node-label requests, the instance is wrongly added into 
> LeafQueue#ignorePartitionExclusivityRMContainers and never removed.
> To reproduce this memory leak:
> (1) Create the reserved container:
> RegularContainerAllocator#doAllocation: creates RMContainerImpl instanceA 
> (nodeLabelExpression="")
> LeafQueue#allocateResource: RMContainerImpl instanceA is put into 
> LeafQueue#ignorePartitionExclusivityRMContainers
> (2) Allocate from the reserved container:
> RegularContainerAllocator#doAllocation: creates RMContainerImpl instanceB 
> (nodeLabelExpression="test-label")
> (3) From now on, RMContainerImpl instanceA is left in memory (kept in 
> LeafQueue#ignorePartitionExclusivityRMContainers) until the RM is restarted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-21 Thread Tao Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-8774:
---
Attachment: YARN-8774.branch-2.8.001.patch

> Memory leak when CapacityScheduler allocates from reserved container with 
> non-default label
> ---
>
> Key: YARN-8774
> URL: https://issues.apache.org/jira/browse/YARN-8774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0, 2.8.5
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8774.001.patch, YARN-8774.branch-2.8.001.patch
>
>
> The cause is that the RMContainerImpl instance of a reserved container loses 
> its node label expression. When the scheduler reserves containers for 
> non-default node-label requests, the instance is wrongly added into 
> LeafQueue#ignorePartitionExclusivityRMContainers and never removed.
> To reproduce this memory leak:
> (1) Create the reserved container:
> RegularContainerAllocator#doAllocation: creates RMContainerImpl instanceA 
> (nodeLabelExpression="")
> LeafQueue#allocateResource: RMContainerImpl instanceA is put into 
> LeafQueue#ignorePartitionExclusivityRMContainers
> (2) Allocate from the reserved container:
> RegularContainerAllocator#doAllocation: creates RMContainerImpl instanceB 
> (nodeLabelExpression="test-label")
> (3) From now on, RMContainerImpl instanceA is left in memory (kept in 
> LeafQueue#ignorePartitionExclusivityRMContainers) until the RM is restarted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8734) Readiness check for remote service

2018-09-21 Thread Gour Saha (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624319#comment-16624319
 ] 

Gour Saha commented on YARN-8734:
-

In that case maybe a simpler approach would be to call this property 
"dependencies". It is already at the service level, so it implies service-level 
dependencies, just like dependencies at the component level imply component 
dependencies and are simply called "dependencies". Additionally, avoiding the 
remote or external keywords helps avoid confusion or limitations in the service 
owner's mind. Just as component "dependencies" are validated to be valid 
component names, the expectation would be that service-level "dependencies" 
must be valid YARN services only. At least that's exactly what the code does.

One code review comment:

Is {{remote_service_dependencies}} defined outside the properties section in 
YAML swagger spec?

 

> Readiness check for remote service
> --
>
> Key: YARN-8734
> URL: https://issues.apache.org/jira/browse/YARN-8734
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: Dependency check vs.pdf, YARN-8734.001.patch, 
> YARN-8734.002.patch, YARN-8734.003.patch, YARN-8734.004.patch, 
> YARN-8734.005.patch
>
>
> When a service is deploying, it can have a remote service dependency.  It 
> would be nice to describe ZooKeeper as a dependent service and, once that 
> service has reached a stable state, then deploy HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8734) Readiness check for remote service

2018-09-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624306#comment-16624306
 ] 

Eric Yang commented on YARN-8734:
-

[~gsaha] Local service would imply the current service itself, to my mind.  
External service may imply a service running outside of the current Hadoop 
cluster.  This was the reason that remote service was chosen during 
development.  However, I don't have a strong preference for labeling it as 
remote service or external service.

> Readiness check for remote service
> --
>
> Key: YARN-8734
> URL: https://issues.apache.org/jira/browse/YARN-8734
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: Dependency check vs.pdf, YARN-8734.001.patch, 
> YARN-8734.002.patch, YARN-8734.003.patch, YARN-8734.004.patch, 
> YARN-8734.005.patch
>
>
> When a service is deploying, it can have a remote service dependency.  It 
> would be nice to describe ZooKeeper as a dependent service and, once that 
> service has reached a stable state, then deploy HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8774) Memory leak when CapacityScheduler allocates from reserved container with non-default label

2018-09-21 Thread Tao Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-8774:
---
Attachment: YARN-8774.branch-2.001.patch

> Memory leak when CapacityScheduler allocates from reserved container with 
> non-default label
> ---
>
> Key: YARN-8774
> URL: https://issues.apache.org/jira/browse/YARN-8774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0, 2.8.5
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8774.001.patch, YARN-8774.branch-2.001.patch
>
>
> The cause is that the RMContainerImpl instance of a reserved container loses 
> its node label expression. When the scheduler reserves containers for 
> non-default node-label requests, the instance is wrongly added into 
> LeafQueue#ignorePartitionExclusivityRMContainers and never removed.
> To reproduce this memory leak:
> (1) Create the reserved container:
> RegularContainerAllocator#doAllocation: creates RMContainerImpl instanceA 
> (nodeLabelExpression="")
> LeafQueue#allocateResource: RMContainerImpl instanceA is put into 
> LeafQueue#ignorePartitionExclusivityRMContainers
> (2) Allocate from the reserved container:
> RegularContainerAllocator#doAllocation: creates RMContainerImpl instanceB 
> (nodeLabelExpression="test-label")
> (3) From now on, RMContainerImpl instanceA is left in memory (kept in 
> LeafQueue#ignorePartitionExclusivityRMContainers) until the RM is restarted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8804) resourceLimits may be wrongly calculated when leaf-queue is blocked in cluster with 3+ level queues

2018-09-21 Thread Tao Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624281#comment-16624281
 ] 

Tao Yang commented on YARN-8804:


Thanks, [~jlowe], for the review.
Attached the v3 patch to rebase onto the latest trunk and submitted the patch.

> resourceLimits may be wrongly calculated when leaf-queue is blocked in 
> cluster with 3+ level queues
> ---
>
> Key: YARN-8804
> URL: https://issues.apache.org/jira/browse/YARN-8804
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8804.001.patch, YARN-8804.002.patch, 
> YARN-8804.003.patch
>
>
> This problem is due to YARN-4280: a parent queue deducts a child queue's 
> headroom when the child queue has reached its resource limit and the skipped 
> type is QUEUE_LIMIT. The resource limits of the deepest parent queue are 
> calculated correctly, but the headroom of a non-deepest parent queue may be 
> much more than the sum of its reached-limit child queues' headroom, so the 
> resource limit of the non-deepest parent may be calculated as much less than 
> its true value and block the allocation for later queues.
> To reproduce this problem with a UT:
>  (1) The cluster has two nodes whose node resources are both <10GB, 10core> 
> and 3-level queues as below; max-capacity of "c1" is 10 and all others are 
> 100, so the max-capacity of queue "c1" is <2GB, 2core>
> {noformat}
>            Root
>          /  |  \
>         a   b   c
>        10  20  70
>                / \
>              c1   c2
>      10(max=10)   90
> {noformat}
> (2) Submit app1 to queue "c1" and launch am1 (resource=<1GB, 1 core>) on nm1
>  (3) Submit app2 to queue "b" and launch am2 (resource=<1GB, 1 core>) on nm1
>  (4) app1 and app2 each ask for one <2GB, 1core> container.
>  (5) nm1 does 1 heartbeat.
>  Now queue "c" has a lower used-capacity percentage than queue "b", so the 
> allocation sequence will be "a" -> "c" -> "b";
>  queue "c1" has reached its queue limit, so the requests of app1 should be 
> pending;
>  headroom of queue "c1" is <1GB, 1core> (= max-capacity - used);
>  headroom of queue "c" is <18GB, 18core> (= max-capacity - used);
>  after allocation for queue "c", the resource limit of queue "b" will be 
> wrongly calculated as <2GB, 2core>,
>  so the headroom of queue "b" will be <1GB, 1core> (= resource-limit - used) 
> and the scheduler won't allocate a container for app2 on nm1.
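
As a quick numeric illustration of the description above (simplified arithmetic 
only, not the actual CapacityScheduler code paths):

{code:java}
// Worked example using the numbers from the reproduction above; the
// arithmetic is simplified and only illustrates why deducting the
// non-deepest parent's headroom over-shrinks the limit seen by queue "b".
public class Yarn8804HeadroomSketch {
  public static void main(String[] args) {
    int clusterGb = 20;          // two nodes, <10GB, 10core> each
    int c1HeadroomGb = 2 - 1;    // blocked leaf "c1": max 2GB, used 1GB
    int cHeadroomGb = 18;        // non-deepest parent "c", per the description

    // Deducting only the blocked leaf's headroom leaves room for app2's ask.
    int limitForBIfLeafDeducted = clusterGb - c1HeadroomGb;   // 19GB

    // Deducting the parent's much larger headroom wrongly caps "b" at 2GB,
    // so its pending <2GB, 1core> request cannot be satisfied.
    int limitForBIfParentDeducted = clusterGb - cHeadroomGb;  // 2GB

    System.out.println("limit for b (leaf deducted)   = " + limitForBIfLeafDeducted + "GB");
    System.out.println("limit for b (parent deducted) = " + limitForBIfParentDeducted + "GB");
  }
}
{code}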



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8804) resourceLimits may be wrongly calculated when leaf-queue is blocked in cluster with 3+ level queues

2018-09-21 Thread Tao Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-8804:
---
Attachment: YARN-8804.003.patch

> resourceLimits may be wrongly calculated when leaf-queue is blocked in 
> cluster with 3+ level queues
> ---
>
> Key: YARN-8804
> URL: https://issues.apache.org/jira/browse/YARN-8804
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8804.001.patch, YARN-8804.002.patch, 
> YARN-8804.003.patch
>
>
> This problem is due to YARN-4280: a parent queue deducts a child queue's 
> headroom when the child queue has reached its resource limit and the skipped 
> type is QUEUE_LIMIT. The resource limits of the deepest parent queue are 
> calculated correctly, but the headroom of a non-deepest parent queue may be 
> much more than the sum of its reached-limit child queues' headroom, so the 
> resource limit of the non-deepest parent may be calculated as much less than 
> its true value and block the allocation for later queues.
> To reproduce this problem with a UT:
>  (1) The cluster has two nodes whose node resources are both <10GB, 10core> 
> and 3-level queues as below; max-capacity of "c1" is 10 and all others are 
> 100, so the max-capacity of queue "c1" is <2GB, 2core>
> {noformat}
>            Root
>          /  |  \
>         a   b   c
>        10  20  70
>                / \
>              c1   c2
>      10(max=10)   90
> {noformat}
> (2) Submit app1 to queue "c1" and launch am1 (resource=<1GB, 1 core>) on nm1
>  (3) Submit app2 to queue "b" and launch am2 (resource=<1GB, 1 core>) on nm1
>  (4) app1 and app2 each ask for one <2GB, 1core> container.
>  (5) nm1 does 1 heartbeat.
>  Now queue "c" has a lower used-capacity percentage than queue "b", so the 
> allocation sequence will be "a" -> "c" -> "b";
>  queue "c1" has reached its queue limit, so the requests of app1 should be 
> pending;
>  headroom of queue "c1" is <1GB, 1core> (= max-capacity - used);
>  headroom of queue "c" is <18GB, 18core> (= max-capacity - used);
>  after allocation for queue "c", the resource limit of queue "b" will be 
> wrongly calculated as <2GB, 2core>,
>  so the headroom of queue "b" will be <1GB, 1core> (= resource-limit - used) 
> and the scheduler won't allocate a container for app2 on nm1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8814) Yarn Service Upgrade: Update the swagger definition and docs

2018-09-21 Thread Chandni Singh (JIRA)
Chandni Singh created YARN-8814:
---

 Summary: Yarn Service Upgrade: Update the swagger definition and 
docs
 Key: YARN-8814
 URL: https://issues.apache.org/jira/browse/YARN-8814
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chandni Singh
Assignee: Chandni Singh


The YARN service swagger definition is missing the states that were recently added for upgrade.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8734) Readiness check for remote service

2018-09-21 Thread Gour Saha (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624250#comment-16624250
 ] 

Gour Saha commented on YARN-8734:
-

[~eyang] this is a pretty useful feature, so thanks for taking this up. Although 
I did not get a chance to test the patch, it overall looks okay.

But one question: from a naming perspective, the opposite of remote is local. 
What does a local service mean? Are we excluding local services? To me, it seems 
like we meant external services instead of remote services. Thoughts?

> Readiness check for remote service
> --
>
> Key: YARN-8734
> URL: https://issues.apache.org/jira/browse/YARN-8734
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: Dependency check vs.pdf, YARN-8734.001.patch, 
> YARN-8734.002.patch, YARN-8734.003.patch, YARN-8734.004.patch, 
> YARN-8734.005.patch
>
>
> When a service is deploying, it can have a remote service dependency.  It 
> would be nice to describe ZooKeeper as a dependent service and, once that 
> service has reached a stable state, then deploy HBase.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8696) [AMRMProxy] FederationInterceptor upgrade: home sub-cluster heartbeat async

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624248#comment-16624248
 ] 

Hadoop QA commented on YARN-8696:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
17s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
12s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
34s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
26s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 232 unchanged - 2 fixed = 232 total (was 234) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  6m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
36s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 39s{color} 
| {color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
52s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m  
8s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 70m  
9s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
40s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}233m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a716388 |
| JIRA Issue | YARN-8696 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940811/YARN-8696-branch-2.v6.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 840a61e355de 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 6056597 |
| maven | 

[jira] [Updated] (YARN-6456) Allow administrators to set a single ContainerRuntime for all containers

2018-09-21 Thread Craig Condit (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Craig Condit updated YARN-6456:
---
Attachment: (was: YARN-6456.005.patch)

> Allow administrators to set a single ContainerRuntime for all containers
> 
>
> Key: YARN-6456
> URL: https://issues.apache.org/jira/browse/YARN-6456
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Miklos Szegedi
>Assignee: Craig Condit
>Priority: Major
>  Labels: Docker
> Attachments: YARN-6456-ForceDockerRuntimeIfSupported.patch, 
> YARN-6456.001.patch, YARN-6456.002.patch, YARN-6456.003.patch, 
> YARN-6456.004.patch, YARN-6456.005.patch
>
>
>  
> With LCE, there are multiple ContainerRuntimes available for handling 
> different types of containers: default, docker, and java sandbox. Admins should 
> have the ability to override the user's decision and set a single global 
> ContainerRuntime to be used for all containers.
> Original Description:
> {quote}One reason to use Docker containers is to be able to isolate different 
> workloads, even, if they run as the same user.
> I have noticed some issues in the current design:
>  1. DockerLinuxContainerRuntime mounts containerLocalDirs 
> {{nm-local-dir/usercache/user/appcache/application_1491598755372_0011/}} and 
> userLocalDirs {{nm-local-dir/usercache/user/}}, so that a container can see 
> and modify the files of another container. I think the application file cache 
> directory should be enough for the container to run in most of the cases.
>  2. The whole cgroups directory is mounted. Would the container directory be 
> enough?
>  3. There is no way to enforce exclusive use of Docker for all containers. 
> There should be an option that it is not the user but the admin that requires 
> to use Docker.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8808) Use aggregate container utilization instead of node utilization to determine resources available for oversubscription

2018-09-21 Thread Arun Suresh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624224#comment-16624224
 ] 

Arun Suresh edited comment on YARN-8808 at 9/21/18 9:50 PM:


While working on YARN-1013, I also check whether 
aggregateUtilization/nodeUtilization == 0. This implies that nothing is running 
on the node, which in turn implies that we should not over-allocate on the 
node, right?

Also, I am thinking a combination of containerUtilization + nodeUtilization 
should be used, though. Consider the situation where the container utilization 
is high but the node utilization is low - the node has capacity for 4 1GB 
containers, but is currently running 2 containers each using more than 1.9GB - 
in this case, overallocation should be allowed.


was (Author: asuresh):
While working on YARN-1013, it looks like we should also check whether 
aggregateUtilization/nodeUtilization == 0. This implies that nothing is running 
on the node, which in turn implies that we should not over-allocate on the 
node, right?

Also, I am thinking a combination of containerUtilization + nodeUtilization 
should be used, though. Consider the situation where the container utilization 
is high but the node utilization is low - the node has capacity for 4 1GB 
containers, but is currently running 2 containers each using more than 1.9GB - 
in this case, overallocation should be allowed.

> Use aggregate container utilization instead of node utilization to determine 
> resources available for oversubscription
> -
>
> Key: YARN-8808
> URL: https://issues.apache.org/jira/browse/YARN-8808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-1011
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8088-YARN-1011.01.patch, 
> YARN-8808-YARN-1011.00.patch
>
>
> Resource oversubscription should be bound to the amount of the resources that 
> can be allocated to containers, hence the allocation threshold should be with 
> respect to aggregate container utilization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8808) Use aggregate container utilization instead of node utilization to determine resources available for oversubscription

2018-09-21 Thread Arun Suresh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624224#comment-16624224
 ] 

Arun Suresh commented on YARN-8808:
---

While working on YARN-1013, it looks like we should also check whether 
aggregateUtilization/nodeUtilization == 0. This implies that nothing is running 
on the node, which in turn implies that we should not over-allocate on the 
node, right?

Also, I am thinking a combination of containerUtilization + nodeUtilization 
should be used, though. Consider the situation where the container utilization 
is high but the node utilization is low - the node has capacity for 4 1GB 
containers, but is currently running 2 containers each using more than 1.9GB - 
in this case, overallocation should be allowed.
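
For illustration only, a rough sketch of the kind of guard being discussed 
(method and parameter names, and the combination policy, are assumptions, not 
the actual YARN-1011 code):

{code:java}
// Illustrative sketch of an overallocation guard combining the two signals
// discussed above. Names and the combination policy are assumptions, not the
// actual YARN-1011 implementation.
final class OverAllocationGuardSketch {

  static boolean mayOverAllocate(double containerUtilGb, double nodeUtilGb,
                                 double nodeCapacityGb, double marginGb) {
    // Nothing is running on the node: utilization gives no signal, so do not
    // oversubscribe (the "== 0" check suggested in the comment above).
    if (containerUtilGb == 0 && nodeUtilGb == 0) {
      return false;
    }
    // How to combine the two signals is exactly what is being debated above;
    // one option is to require free headroom against both of them.
    return (nodeCapacityGb - containerUtilGb) > marginGb
        && (nodeCapacityGb - nodeUtilGb) > marginGb;
  }

  public static void main(String[] args) {
    // Example: 4GB node, containers together using 3.8GB, 0.5GB margin.
    System.out.println(mayOverAllocate(3.8, 3.9, 4.0, 0.5));  // false
  }
}
{code}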

> Use aggregate container utilization instead of node utilization to determine 
> resources available for oversubscription
> -
>
> Key: YARN-8808
> URL: https://issues.apache.org/jira/browse/YARN-8808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-1011
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8088-YARN-1011.01.patch, 
> YARN-8808-YARN-1011.00.patch
>
>
> Resource oversubscription should be bound to the amount of the resources that 
> can be allocated to containers, hence the allocation threshold should be with 
> respect to aggregate container utilization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8623) Update Docker examples to use image which exists

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624175#comment-16624175
 ] 

Hadoop QA commented on YARN-8623:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
35m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8623 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940835/YARN-8623.001.patch |
| Optional Tests |  dupname  asflicense  mvnsite  |
| uname | Linux c0eceb5594a6 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0cd6346 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 333 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21934/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Update Docker examples to use image which exists
> 
>
> Key: YARN-8623
> URL: https://issues.apache.org/jira/browse/YARN-8623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Craig Condit
>Assignee: Craig Condit
>Priority: Minor
>  Labels: Docker
> Attachments: YARN-8623.001.patch
>
>
> The example Docker image given in the documentation 
> (images/hadoop-docker:latest) does not exist. We could change 
> images/hadoop-docker:latest to apache/hadoop-runner:latest, which does exist. 
> We'd need to do a quick sanity test to see if the image works with YARN.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8809) Refactor AbstractYarnScheduler and CapacityScheduler OPPORTUNISTIC container completion codepaths

2018-09-21 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624169#comment-16624169
 ] 

Haibo Chen commented on YARN-8809:
--

Thanks for the review and commit, [~asuresh]

> Refactor AbstractYarnScheduler and CapacityScheduler OPPORTUNISTIC container 
> completion codepaths
> -
>
> Key: YARN-8809
> URL: https://issues.apache.org/jira/browse/YARN-8809
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: YARN-1011
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8809-YARN-1011.00.patch, 
> YARN-8809-YARN-1011.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8789) Add BoundedQueue to AsyncDispatcher

2018-09-21 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated YARN-8789:
--
Attachment: YARN-8789.5.patch

> Add BoundedQueue to AsyncDispatcher
> ---
>
> Key: YARN-8789
> URL: https://issues.apache.org/jira/browse/YARN-8789
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: applications
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: YARN-8789.1.patch, YARN-8789.2.patch, YARN-8789.3.patch, 
> YARN-8789.4.patch, YARN-8789.5.patch
>
>
> I recently came across a scenario where an MR ApplicationMaster was failing 
> with an OOM exception.  It had many thousands of Mappers and thousands of 
> Reducers.  It was noted in the logging that the event-queue of 
> {{AsyncDispatcher}} had a very large number of items in it and was seemingly 
> never decreasing.
> I started looking at the code and thought it could use some cleanup, 
> simplification, and the ability to specify a bounded queue so that any 
> incoming events are throttled until they can be processed.  This will protect 
> the ApplicationMaster from a flood of events.
> Logging message:
> Size of event-queue is xxx
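
For illustration, a minimal sketch of the bounded-queue idea (not the actual 
AsyncDispatcher code): a capacity-bounded BlockingQueue makes event producers 
block once the queue is full instead of letting it grow without limit.

{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal illustration of the bounded event queue idea from the description;
// it is not the actual AsyncDispatcher code.
public class BoundedEventQueueSketch {
  private final BlockingQueue<Runnable> eventQueue;

  public BoundedEventQueueSketch(int capacity) {
    // Bounded: put() blocks when the queue is full, throttling producers
    // instead of letting the queue (and heap usage) grow without limit.
    this.eventQueue = new LinkedBlockingQueue<>(capacity);
  }

  public void dispatch(Runnable event) throws InterruptedException {
    eventQueue.put(event);          // blocks when capacity is reached
  }

  public void processLoop() throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      Runnable event = eventQueue.take();
      event.run();                  // handle the event
    }
  }
}
{code}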



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7599) [GPG] ApplicationCleaner in Global Policy Generator

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624130#comment-16624130
 ] 

Hadoop QA commented on YARN-7599:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-7402 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
52s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
4s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
32s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} YARN-7402 passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  6m 
19s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
57s{color} | {color:green} YARN-7402 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} YARN-7402 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
51s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
47s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
45s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
50s{color} | {color:green} hadoop-yarn-server-globalpolicygenerator in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-7599 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940829/YARN-7599-YARN-7402.v8.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname 

[jira] [Updated] (YARN-6510) Fix profs stat file warning caused by process names that includes parenthesis

2018-09-21 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-6510:
-
Fix Version/s: 2.8.6

Thanks, [~wilfreds]!  I committed this to branch-2.8 as well.

> Fix profs stat file warning caused by process names that includes parenthesis
> -
>
> Key: YARN-6510
> URL: https://issues.apache.org/jira/browse/YARN-6510
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.6
>
> Attachments: YARN-6510.01.patch
>
>
> Even with the fix for YARN-3344, we still have issues with the procfs format.
> This is the case that is causing issues:
> {code}
> [user@nm1 ~]$ cat /proc/2406/stat
> 2406 (ib_fmr(mlx4_0)) S 2 0 0 0 -1 2149613632 0 0 0 0 166 126908 0 0 20 0 1 0 
> 4284 0 0 18446744073709551615 0 0 0 0 0 0 0 2147483647 0 18446744073709551615 
> 0 0 17 6 0 0 0 0 0
> {code}
> We do not handle the parentheses in the name, which causes the pattern 
> matching to fail.
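
For illustration, a small sketch of the usual way to parse such stat lines 
safely (not the actual ProcfsBasedProcessTree code): treat everything between 
the first '(' and the last ')' as the command name, then split the remainder.

{code:java}
// Illustrative parser for /proc/<pid>/stat lines such as the one above;
// not the actual ProcfsBasedProcessTree implementation.
public class ProcStatSketch {
  public static void main(String[] args) {
    String stat = "2406 (ib_fmr(mlx4_0)) S 2 0 0 0 -1 2149613632 0 0 0 0 166 126908";

    int open = stat.indexOf('(');
    int close = stat.lastIndexOf(')');   // the last ')' ends the name, even
                                         // if the name itself contains ')'
    String pid = stat.substring(0, open).trim();
    String name = stat.substring(open + 1, close);
    String[] rest = stat.substring(close + 1).trim().split("\\s+");

    System.out.println("pid=" + pid + " name=" + name + " state=" + rest[0]);
  }
}
{code}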



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-09-21 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624109#comment-16624109
 ] 

Wangda Tan commented on YARN-8785:
--

[~simonprewo], instead of uploading a patch file, you can rename the pull 
request title as described in 
[https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-CreatingaGitHubpullrequest] 
in order to trigger Jenkins.

> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in the property 
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "it is not possible due to a Docker 
> issue". My suggestion would be to use a message like *Configuration of the 
> container executor does not allow mounting directory.* instead.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8623) Update Docker examples to use image which exists

2018-09-21 Thread Craig Condit (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Craig Condit reassigned YARN-8623:
--

Assignee: Craig Condit

> Update Docker examples to use image which exists
> 
>
> Key: YARN-8623
> URL: https://issues.apache.org/jira/browse/YARN-8623
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Craig Condit
>Assignee: Craig Condit
>Priority: Minor
>  Labels: Docker
>
> The example Docker image given in the documentation 
> (images/hadoop-docker:latest) does not exist. We could change 
> images/hadoop-docker:latest to apache/hadoop-runner:latest, which does exist. 
> We'd need to do a quick sanity test to see if the image works with YARN.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8789) Add BoundedQueue to AsyncDispatcher

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624091#comment-16624091
 ] 

Hadoop QA commented on YARN-8789:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 11s{color} | {color:orange} root: The patch generated 7 new + 830 unchanged 
- 11 fixed = 837 total (was 841) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
39s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}116m 48s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
1s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 
44s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}245m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8789 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940798/YARN-8789.4.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 53954ecb8192 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 

[jira] [Commented] (YARN-8813) Improve debug messages for NM preemption of OPPORTUNISTIC containers

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624089#comment-16624089
 ] 

Hadoop QA commented on YARN-8813:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
26s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 22m 
58s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:c6870a1 |
| JIRA Issue | YARN-8813 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940814/YARN-8813-YARN-1011.00.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0e760fec90eb 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-1011 / 36ec27e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/21931/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21931/testReport/ |
| 

[jira] [Commented] (YARN-7599) [GPG] ApplicationCleaner in Global Policy Generator

2018-09-21 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624053#comment-16624053
 ] 

Botong Huang commented on YARN-7599:


Ah, good point. v8 uploaded. Will commit pending Yetus. Thanks!

> [GPG] ApplicationCleaner in Global Policy Generator
> ---
>
> Key: YARN-7599
> URL: https://issues.apache.org/jira/browse/YARN-7599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>  Labels: federation, gpg
> Attachments: YARN-7599-YARN-7402.v1.patch, 
> YARN-7599-YARN-7402.v2.patch, YARN-7599-YARN-7402.v3.patch, 
> YARN-7599-YARN-7402.v4.patch, YARN-7599-YARN-7402.v5.patch, 
> YARN-7599-YARN-7402.v6.patch, YARN-7599-YARN-7402.v7.patch, 
> YARN-7599-YARN-7402.v8.patch
>
>
> In Federation, we need a cleanup service for StateStore as well as Yarn 
> Registry. For the former, we need to remove old application records. For the 
> latter, failed and killed applications might leave records in the Yarn 
> Registry (see YARN-6128). We plan to do both cleanup work in 
> ApplicationCleaner in GPG
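
For readers following along, here is a minimal, self-contained sketch of what such a periodic cleaner could look like. The interfaces and the 7-day retention value are hypothetical illustrations, not taken from the actual GPG patch.
{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Conceptual sketch (hypothetical interfaces, not the GPG code): a periodic
// ApplicationCleaner that removes finished application records from the
// federation StateStore and stale entries from the YARN Registry.
public class ApplicationCleanerSketch {
  interface StateStore { void deleteApplicationsFinishedBefore(long timestampMs); }
  interface YarnRegistry { void removeRecordsOfFinishedApplications(); }

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  public void start(StateStore store, YarnRegistry registry, long intervalMinutes) {
    scheduler.scheduleAtFixedRate(() -> {
      // Hypothetical 7-day retention window for finished applications.
      long cutoff = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(7);
      store.deleteApplicationsFinishedBefore(cutoff);
      registry.removeRecordsOfFinishedApplications();
    }, 0, intervalMinutes, TimeUnit.MINUTES);
  }
}
{code}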



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8809) Refactor AbstractYarnScheduler and CapacityScheduler OPPORTUNISTIC container completion codepaths

2018-09-21 Thread Arun Suresh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-8809:
--
Summary: Refactor AbstractYarnScheduler and CapacityScheduler OPPORTUNISTIC 
container completion codepaths  (was: Fair Scheduler does not decrement queue 
metrics when OPPORTUNISTIC containers are released.)

> Refactor AbstractYarnScheduler and CapacityScheduler OPPORTUNISTIC container 
> completion codepaths
> -
>
> Key: YARN-8809
> URL: https://issues.apache.org/jira/browse/YARN-8809
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: YARN-1011
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8809-YARN-1011.00.patch, 
> YARN-8809-YARN-1011.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7599) [GPG] ApplicationCleaner in Global Policy Generator

2018-09-21 Thread Botong Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-7599:
---
Attachment: YARN-7599-YARN-7402.v8.patch

> [GPG] ApplicationCleaner in Global Policy Generator
> ---
>
> Key: YARN-7599
> URL: https://issues.apache.org/jira/browse/YARN-7599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>  Labels: federation, gpg
> Attachments: YARN-7599-YARN-7402.v1.patch, 
> YARN-7599-YARN-7402.v2.patch, YARN-7599-YARN-7402.v3.patch, 
> YARN-7599-YARN-7402.v4.patch, YARN-7599-YARN-7402.v5.patch, 
> YARN-7599-YARN-7402.v6.patch, YARN-7599-YARN-7402.v7.patch, 
> YARN-7599-YARN-7402.v8.patch
>
>
> In Federation, we need a cleanup service for StateStore as well as Yarn 
> Registry. For the former, we need to remove old application records. For the 
> latter, failed and killed applications might leave records in the Yarn 
> Registry (see YARN-6128). We plan to do both cleanup work in 
> ApplicationCleaner in GPG



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1011) [Umbrella] Schedule containers based on utilization of currently allocated containers

2018-09-21 Thread Arun Suresh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-1011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624032#comment-16624032
 ] 

Arun Suresh commented on YARN-1011:
---

I was trying to rebase the branch onto trunk and got a couple of merge 
conflicts, mostly with some FS* classes.
[~haibochen], can you take a look?

> [Umbrella] Schedule containers based on utilization of currently allocated 
> containers
> -
>
> Key: YARN-1011
> URL: https://issues.apache.org/jira/browse/YARN-1011
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Arun C Murthy
>Assignee: Karthik Kambatla
>Priority: Major
> Attachments: patch-for-yarn-1011.patch, yarn-1011-design-v0.pdf, 
> yarn-1011-design-v1.pdf, yarn-1011-design-v2.pdf, yarn-1011-design-v3.pdf
>
>
> Currently RM allocates containers and assumes resources allocated are 
> utilized.
> RM can, and should, get to a point where it measures utilization of allocated 
> containers and, if appropriate, allocate more (speculative?) containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8809) Fair Scheduler does not decrement queue metrics when OPPORTUNISTIC containers are released.

2018-09-21 Thread Arun Suresh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624029#comment-16624029
 ] 

Arun Suresh commented on YARN-8809:
---

Thanks [~haibochen]
+1, will commit shortly

> Fair Scheduler does not decrement queue metrics when OPPORTUNISTIC containers 
> are released.
> ---
>
> Key: YARN-8809
> URL: https://issues.apache.org/jira/browse/YARN-8809
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: YARN-1011
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8809-YARN-1011.00.patch, 
> YARN-8809-YARN-1011.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8769) [Submarine] Allow user to specify customized quicklink(s) when submit Submarine job

2018-09-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624018#comment-16624018
 ] 

Hudson commented on YARN-8769:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15035 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15035/])
YARN-8769. [Submarine] Allow user to specify customized quicklink(s) (sunilg: 
rev 0cd63461021cc7cac39e7cc2bfaafd609c82fc79)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/param/Quicklink.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/CliConstants.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/param/RunJobParameters.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/client/cli/RunJobCli.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/main/java/org/apache/hadoop/yarn/submarine/runtimes/yarnservice/YarnServiceJobSubmitter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine/src/test/java/org/apache/hadoop/yarn/submarine/client/cli/yarnservice/TestYarnServiceRunJobCli.java


> [Submarine] Allow user to specify customized quicklink(s) when submit 
> Submarine job
> ---
>
> Key: YARN-8769
> URL: https://issues.apache.org/jira/browse/YARN-8769
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Fix For: 3.2.0
>
> Attachments: YARN-8769.001.patch, YARN-8769.002.patch, 
> YARN-8769.003.patch, YARN-8769.004.patch
>
>
> This will be helpful when user submit a job and some links need to be shown 
> on YARN UI2 (service page). For example, user can specify a quick link to 
> Zeppelin notebook UI when a Zeppelin notebook got launched.
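
As a rough illustration of the feature (the actual option name and parsing live in RunJobCli.java and Quicklink.java from the commit above), a quicklink is essentially a label mapped to a URL; the "label=url" form used below is an assumption for the sake of the example.
{code:java}
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration only: parse "label=url" quicklink arguments into a
// map that a service AM could expose on the YARN UI2 service page.
public class QuicklinkParseSketch {
  public static void main(String[] args) {
    String[] quicklinkArgs = { "Notebook_UI=http://master-0:8080" }; // assumed format
    Map<String, String> quicklinks = new HashMap<>();
    for (String q : quicklinkArgs) {
      int idx = q.indexOf('=');
      quicklinks.put(q.substring(0, idx), q.substring(idx + 1));
    }
    System.out.println(quicklinks); // {Notebook_UI=http://master-0:8080}
  }
}
{code}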



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8808) Use aggregate container utilization instead of node utilization to determine resources available for oversubscription

2018-09-21 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624015#comment-16624015
 ] 

Haibo Chen commented on YARN-8808:
--

The unit test failure is independent of this patch.

> Use aggregate container utilization instead of node utilization to determine 
> resources available for oversubscription
> -
>
> Key: YARN-8808
> URL: https://issues.apache.org/jira/browse/YARN-8808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-1011
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8088-YARN-1011.01.patch, 
> YARN-8808-YARN-1011.00.patch
>
>
> Resource oversubscription should be bound to the amount of the resources that 
> can be allocated to containers, hence the allocation threshold should be with 
> respect to aggregate container utilization.
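
A minimal sketch of the arithmetic being described, with hypothetical field names and values rather than the actual patch code: the oversubscription headroom is derived from aggregate container utilization instead of whole-node utilization.
{code:java}
// Illustrative arithmetic only (hypothetical names, not the YARN-8808 patch):
// compute how much memory could be oversubscribed on a node based on what the
// containers actually use, capped by an over-allocation threshold.
public class OversubscriptionSketch {
  public static void main(String[] args) {
    long nodeMemoryMb = 32768;                     // node capacity
    double overAllocationThreshold = 0.75;         // fraction usable for oversubscription
    long aggregateContainerUtilizationMb = 18000;  // memory actually used by containers

    long oversubscribable = Math.max(0,
        (long) (nodeMemoryMb * overAllocationThreshold)
            - aggregateContainerUtilizationMb);

    System.out.println("Memory available for OPPORTUNISTIC allocation: "
        + oversubscribable + " MB");
  }
}
{code}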



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6456) Allow administrators to set a single ContainerRuntime for all containers

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624009#comment-16624009
 ] 

Hadoop QA commented on YARN-6456:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 29s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 441 unchanged - 0 fixed = 444 total (was 441) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
19s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 52s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
54s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} 

[jira] [Commented] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-09-21 Thread Zian Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16624007#comment-16624007
 ] 

Zian Chen commented on YARN-8785:
-

Hi [~simonprewo], thanks for the patch. The patch itself looks good to me. One 
add-on to [~eyang]'s comments: after renaming the patch to 
"YARN-8785.001.patch", please attach the patch file and click "Submit Patch" at 
the top of the issue; that will trigger a Jenkins build to verify whether 
anything is affected by this patch. Thanks for the effort.

> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in property  
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "it is not possible due to a Docker 
> issue". My suggestion would be to emit a message like *Configuration of the 
> container executor does not allow mounting directory.* instead.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8777) Container Executor C binary change to execute interactive docker command

2018-09-21 Thread Zian Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623999#comment-16623999
 ] 

Zian Chen commented on YARN-8777:
-

Thanks [~eyang] for the work. I'm OK with patch 003. One quick question: you 
mentioned 
{code:java}
It is entirely possible to use ProcessBuilder and launch container-executor to 
run docker exec, and send unix command to be executed.
{code}
Is ProcessBuilder meant as a possible way to reuse code for passing arbitrary 
commands? If yes, this approach might run into the same issue as the enum 
approach, which can only handle a small set of command options, not arbitrary 
commands.
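
For context, a minimal sketch of the ProcessBuilder approach being discussed; the binary path, flag, and argument layout below are assumptions for illustration, not the actual container-executor interface.
{code:java}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

public class DockerExecLauncher {
  public static void main(String[] args) throws IOException, InterruptedException {
    // Hypothetical binary path and argument layout; the real container-executor
    // command line may differ.
    List<String> cmd = Arrays.asList(
        "/usr/local/hadoop/bin/container-executor",
        "--run-docker-exec",
        "container_1536476159258_0004_02_000001",
        "/bin/bash");

    ProcessBuilder pb = new ProcessBuilder(cmd);
    pb.inheritIO();              // wire the child's stdin/stdout/stderr to ours
    Process p = pb.start();
    int exitCode = p.waitFor();  // block until the docker exec session ends
    System.out.println("container-executor exited with " + exitCode);
  }
}
{code}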

> Container Executor C binary change to execute interactive docker command
> 
>
> Key: YARN-8777
> URL: https://issues.apache.org/jira/browse/YARN-8777
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8777.001.patch, YARN-8777.002.patch, 
> YARN-8777.003.patch
>
>
> Since Container Executor provides Container execution using the native 
> container-executor binary, we also need to make changes to accept new 
> “dockerExec” method to invoke the corresponding native function to execute 
> docker exec command to the running container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8808) Use aggregate container utilization instead of node utilization to determine resources available for oversubscription

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623994#comment-16623994
 ] 

Hadoop QA commented on YARN-8808:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-1011 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 
 7s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
21s{color} | {color:green} YARN-1011 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} YARN-1011 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 69m 48s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:c6870a1 |
| JIRA Issue | YARN-8808 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940801/YARN-8088-YARN-1011.01.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4926d4068272 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-1011 / 36ec27e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/21928/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21928/testReport/ |
| Max. process+thread count | 1029 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 

[jira] [Commented] (YARN-8806) Enable local staging directory and clean it up when submarine job is submitted

2018-09-21 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623985#comment-16623985
 ] 

Wangda Tan commented on YARN-8806:
--

[~yuan_zac], I think we should create a separate process when submitting jobs. 
Submarine is not yet ready to be used as a Java library. In my mind we could 
support a CLI and a REST API, but not a Java library.

> Enable local staging directory and clean it up when submarine job is submitted
> --
>
> Key: YARN-8806
> URL: https://issues.apache.org/jira/browse/YARN-8806
> Project: Hadoop YARN
>  Issue Type: Sub-task
> Environment: In the /tmp dir, there are launch scripts which are not 
> cleaned up as follows:
> -rw-r--r-- 1 hadoop netease 1100 Sep 18 10:46 
> PRIMARY_WORKER-launch-script8635233314077649086.sh
> -rw-r--r-- 1 hadoop netease 1100 Sep 18 10:46 
> WORKER-launch-script129488020578466938.sh
> -rw-r--r-- 1 hadoop netease 1028 Sep 18 10:46 
> PS-launch-script471092031021738136.sh
>Reporter: Zac Zhou
>Assignee: Zac Zhou
>Priority: Major
> Attachments: YARN-8806.001.patch, YARN-8806.002.patch, 
> YARN-8806.003.patch
>
>
> YarnServiceJobSubmitter.generateCommandLaunchScript creates container launch 
> scripts in the local filesystem.  Container launch scripts are uploaded to 
> the HDFS staging dir, but they are not deleted after the job is submitted.
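
A self-contained sketch of the cleanup idea, assuming a per-job local staging directory; this is illustrative only and does not reflect the actual YarnServiceJobSubmitter changes.
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

// Sketch of the idea under discussion (hypothetical, not the Submarine code):
// write launch scripts into a per-job local staging directory instead of /tmp,
// and remove the whole directory once the job has been submitted.
public class LocalStagingDirSketch {
  public static void main(String[] args) throws IOException {
    Path stagingDir = Files.createTempDirectory("submarine-job-staging-");
    Path launchScript = stagingDir.resolve("PRIMARY_WORKER-launch-script.sh");
    Files.write(launchScript, "#!/bin/bash\necho hello\n".getBytes());

    // ... upload launchScript to the HDFS staging dir and submit the job ...

    // Clean up the local staging directory afterwards.
    try (Stream<Path> walk = Files.walk(stagingDir)) {
      walk.sorted(Comparator.reverseOrder()).forEach(p -> p.toFile().delete());
    }
  }
}
{code}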



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8563) [Submarine] Support users to specify Python/TF package/version/dependencies for training job.

2018-09-21 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623970#comment-16623970
 ] 

Wangda Tan commented on YARN-8563:
--

Thanks [~liuxun323], 

The primary purpose of this ticket is to avoid users having to build a Docker 
image every time. 

Specifying a pre-baked image is easy, but customizing an image is hard. IMO, 
data scientists need to update their programs frequently to run experiments, so 
they unavoidably need to change dependencies in some cases. I spoke to some 
data scientists, and many of them prefer not to build Docker images themselves.

I agree with you that in many cases they can use a pre-baked image. But 
providing an option to let them specify dependencies, instead of finding the 
Dockerfile and rebuilding the image, is definitely a cheaper solution for both 
the data scientists and the underlying system. So I would view this as a 
combination of base image + dependencies. 

I also agree that specifying the TF / Python version may not help here, since 
we can name the Docker image something like tf-1.8.0-python3:latest.

Another thing we haven't done is how to localize the user's code to the 
training environment. I don't think it is a good idea to ask users to put 
training code into the Docker image. Instead they can provide a path to a zip 
on HDFS/S3, and YARN can download and unpack it. 

> [Submarine] Support users to specify Python/TF package/version/dependencies 
> for training job.
> -
>
> Key: YARN-8563
> URL: https://issues.apache.org/jira/browse/YARN-8563
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Priority: Major
>
> YARN-8561 assumes all Python / Tensorflow dependencies will be packed to 
> docker image. In practice, user doesn't want to build docker image. Instead, 
> user can provide python package / dependencies (like .whl), Python and TF 
> version. And Submarine can localize specified dependencies to prebuilt base 
> Docker images.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8813) Improve debug messages for NM preemption of OPPORTUNISTIC containers

2018-09-21 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623959#comment-16623959
 ] 

Haibo Chen commented on YARN-8813:
--

The patch adds a few debug log statements, hence no test is modified or added.
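
As a rough illustration (not the actual patch), the kind of guarded debug statement the NM could emit when preempting an OPPORTUNISTIC container; the message text and parameters are assumptions, and SLF4J is assumed as the logging facade.
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch only (not the YARN-8813 patch): a guarded debug message
// describing which OPPORTUNISTIC container is being preempted and why.
public class PreemptionDebugLogSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(PreemptionDebugLogSketch.class);

  void logPreemption(String containerId, long reclaimedMemoryMb, int reclaimedVcores) {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Preempting OPPORTUNISTIC container {} to reclaim {} MB and {} vcores",
          containerId, reclaimedMemoryMb, reclaimedVcores);
    }
  }
}
{code}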

> Improve debug messages for NM preemption of OPPORTUNISTIC containers
> 
>
> Key: YARN-8813
> URL: https://issues.apache.org/jira/browse/YARN-8813
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-1011
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8813-YARN-1011.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8813) Improve debug messages for NM preemption of OPPORTUNISTIC containers

2018-09-21 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8813:
-
Attachment: YARN-8813-YARN-1011.00.patch

> Improve debug messages for NM preemption of OPPORTUNISTIC containers
> 
>
> Key: YARN-8813
> URL: https://issues.apache.org/jira/browse/YARN-8813
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-1011
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8813-YARN-1011.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8813) Improve debug messages for NM preemption of OPPORTUNISTIC containers

2018-09-21 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-8813:


 Summary: Improve debug messages for NM preemption of 
OPPORTUNISTIC containers
 Key: YARN-8813
 URL: https://issues.apache.org/jira/browse/YARN-8813
 Project: Hadoop YARN
  Issue Type: Sub-task
Affects Versions: YARN-1011
Reporter: Haibo Chen
Assignee: Haibo Chen






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8777) Container Executor C binary change to execute interactive docker command

2018-09-21 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623951#comment-16623951
 ] 

Eric Badger commented on YARN-8777:
---

I'm +1 (non-binding) on patch 003

> Container Executor C binary change to execute interactive docker command
> 
>
> Key: YARN-8777
> URL: https://issues.apache.org/jira/browse/YARN-8777
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8777.001.patch, YARN-8777.002.patch, 
> YARN-8777.003.patch
>
>
> Since Container Executor provides Container execution using the native 
> container-executor binary, we also need to make changes to accept new 
> “dockerExec” method to invoke the corresponding native function to execute 
> docker exec command to the running container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8696) [AMRMProxy] FederationInterceptor upgrade: home sub-cluster heartbeat async

2018-09-21 Thread Botong Huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Botong Huang updated YARN-8696:
---
Attachment: YARN-8696-branch-2.v6.patch

> [AMRMProxy] FederationInterceptor upgrade: home sub-cluster heartbeat async
> ---
>
> Key: YARN-8696
> URL: https://issues.apache.org/jira/browse/YARN-8696
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Major
> Attachments: YARN-8696-branch-2.v6.patch, YARN-8696.v1.patch, 
> YARN-8696.v2.patch, YARN-8696.v3.patch, YARN-8696.v4.patch, 
> YARN-8696.v5.patch, YARN-8696.v6.patch
>
>
> Today in _FederationInterceptor_, the heartbeat to the home sub-cluster is 
> synchronous. After the heartbeat is sent out to the home sub-cluster, the 
> interceptor waits for the home response to come back before merging and 
> returning the (merged) heartbeat result back to the AM. If the home 
> sub-cluster is suffering from connection issues, or is down during a YarnRM 
> master-slave switch, all heartbeat threads in _FederationInterceptor_ will be 
> blocked waiting for the home response. As a result, the successful UAM 
> heartbeats from secondary sub-clusters will not be returned to the AM at all. 
> Additionally, because we kept the same heartbeat responseId between the AM 
> and the home RM, a lot of tricky handling is needed for the responseId resync 
> when it comes to _FederationInterceptor_ (part of AMRMProxy, NM) 
> work-preserving restart (YARN-6127, YARN-1336), home RM master-slave switch, 
> etc. 
> In this patch, we change the heartbeat to the home sub-cluster to 
> asynchronous, the same way we handle UAM heartbeats in the secondaries, so 
> that any sub-cluster outage or connection issue won't prevent the AM from 
> getting responses from other sub-clusters. The responseId is also managed 
> separately for the home sub-cluster and the AM, and they increment 
> independently. The resync logic becomes much cleaner. 
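
A conceptual sketch of the asynchronous-heartbeat idea described above, using plain java.util.concurrent primitives; the method names are hypothetical and this is not the FederationInterceptor implementation.
{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Conceptual sketch (hypothetical names): the home sub-cluster heartbeat is
// sent on its own executor, so a slow or unreachable home RM no longer blocks
// the thread that merges and returns UAM responses to the AM.
public class AsyncHomeHeartbeatSketch {
  private final ExecutorService homeExecutor = Executors.newSingleThreadExecutor();

  public void heartbeat() {
    // Fire the home heartbeat asynchronously; merge its response whenever it arrives.
    CompletableFuture
        .supplyAsync(this::sendHomeHeartbeat, homeExecutor)
        .thenAccept(this::mergeHomeResponse);

    // Meanwhile, secondary sub-cluster (UAM) responses can be returned to the
    // AM without waiting for the home RM.
  }

  private String sendHomeHeartbeat() { return "home-response"; }      // placeholder
  private void mergeHomeResponse(String response) { /* merge into AM view */ }
}
{code}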



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8658) [AMRMProxy] Metrics for AMRMClientRelayer inside FederationInterceptor

2018-09-21 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623929#comment-16623929
 ] 

Giovanni Matteo Fumarola commented on YARN-8658:


Pushed to Branch-2. 

Thanks [~youchen] for the patch and [~botong] for the review.

> [AMRMProxy] Metrics for AMRMClientRelayer inside FederationInterceptor
> --
>
> Key: YARN-8658
> URL: https://issues.apache.org/jira/browse/YARN-8658
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Young Chen
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: YARN-8658-branch-2.09.patch, 
> YARN-8658-branch-2.10.patch, YARN-8658-branch-2.11.patch, YARN-8658.01.patch, 
> YARN-8658.02.patch, YARN-8658.03.patch, YARN-8658.04.patch, 
> YARN-8658.05.patch, YARN-8658.06.patch, YARN-8658.07.patch, 
> YARN-8658.08.patch, YARN-8658.09.patch
>
>
> AMRMClientRelayer (YARN-7900) is introduced for stateful 
> FederationInterceptor (YARN-7899), to keep track of all pending requests sent 
> to every subcluster YarnRM. We need to add metrics for AMRMClientRelayer to 
> show the state of things in FederationInterceptor. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8811) Support Container Storage Interface (CSI) in YARN

2018-09-21 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623901#comment-16623901
 ] 

Wangda Tan commented on YARN-8811:
--

Thanks [~cheersyang], [~sunilg] for the design doc. 

In general the plan looks good. I have some suggestions regarding the 
resource-type-related changes: 

The name "IGNORABLE" is ambiguous; users will be confused about whether it is 
ignorable by the main scheduler, the NM, or the app, etc. And from the name it 
sounds like "no one cares about the resource type". 

Instead, I prefer to add a String[] tags field to each ResourceInformation:

* All resource types configured inside resource-types.xml have a tag = 
"default".
* All existing methods inside Resource work as-is and return only 
"default"-tagged resources.
* A new method will be added to Resource to get/set ResourceInformation by tag.
* Resource types like volume don't need to be, and shouldn't be, configured 
inside resource-types.xml.
* Volume still uses the same COUNTABLE resource type.
* If we want to support per-node disk information reporting to the RM, the NM 
should have a plugin that extends {{...resourceplugin.ResourcePlugin}}.

The above suggestion should require minimal changes to the scheduler and 
minimal overhead for the scheduler to make decisions (except that we may need 
to copy the extra volume information). And in the future, if new plugins need 
to be added to the scheduler / node manager to handle different resource tags, 
developers can easily do that.
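
A minimal sketch of the tag idea under the assumptions above; this is a hypothetical illustration, not the existing org.apache.hadoop.yarn.api.records.ResourceInformation API.
{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch: each resource entry carries a set of tags, existing
// callers only see "default"-tagged resources, and a new accessor looks up
// resources by tag (e.g. "volume").
public class TaggedResourceSketch {

  static class TaggedResourceInformation {
    final String name;
    final long value;
    final Set<String> tags;

    TaggedResourceInformation(String name, long value, String... tags) {
      this.name = name;
      this.value = value;
      this.tags = new HashSet<>(Arrays.asList(tags));
    }
  }

  private final List<TaggedResourceInformation> infos;

  TaggedResourceSketch(List<TaggedResourceInformation> infos) {
    this.infos = infos;
  }

  // Existing Resource methods would keep working by only looking at "default".
  List<TaggedResourceInformation> getDefaultResources() {
    return getResourcesByTag("default");
  }

  // New accessor: look up resource information by tag.
  List<TaggedResourceInformation> getResourcesByTag(String tag) {
    return infos.stream()
        .filter(ri -> ri.tags.contains(tag))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    TaggedResourceSketch resource = new TaggedResourceSketch(Arrays.asList(
        new TaggedResourceInformation("memory-mb", 4096, "default"),
        new TaggedResourceInformation("vcores", 4, "default"),
        new TaggedResourceInformation("my-csi-volume", 100, "volume")));

    System.out.println(resource.getDefaultResources().size());       // 2
    System.out.println(resource.getResourcesByTag("volume").size()); // 1
  }
}
{code}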

> Support Container Storage Interface (CSI) in YARN
> -
>
> Key: YARN-8811
> URL: https://issues.apache.org/jira/browse/YARN-8811
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: Support Container Storage Interface(CSI) in YARN_design 
> doc_20180921.pdf
>
>
> The Container Storage Interface (CSI) is a vendor neutral interface to bridge 
> Container Orchestrators and Storage Providers. With the adoption of CSI in 
> YARN, it will be easier to integrate 3rd party storage systems, and provide 
> the ability to attach persistent volumes for stateful applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8777) Container Executor C binary change to execute interactive docker command

2018-09-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623900#comment-16623900
 ] 

Eric Yang commented on YARN-8777:
-

[~ebadger] Sounds good to me.  [~Zian Chen] [~ebadger] [~shaneku...@gmail.com] 
Are we good to check in the C side?

> Container Executor C binary change to execute interactive docker command
> 
>
> Key: YARN-8777
> URL: https://issues.apache.org/jira/browse/YARN-8777
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8777.001.patch, YARN-8777.002.patch, 
> YARN-8777.003.patch
>
>
> Since Container Executor provides Container execution using the native 
> container-executor binary, we also need to make changes to accept new 
> “dockerExec” method to invoke the corresponding native function to execute 
> docker exec command to the running container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-09-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623894#comment-16623894
 ] 

Eric Yang commented on YARN-8785:
-

[~simonprewo] Can you post the patch 417.patch as YARN-8785.001.patch to 
trigger the Jenkins test?  Just making sure that it doesn't have whitespace 
problems.  Sorry, Hadoop development involves a lot of manual steps.  Thanks 
for the patch.

> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in property  
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "it is not possible due to a Docker 
> issue". My suggestion would be to emit a message like *Configuration of the 
> container executor does not allow mounting directory.* instead.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8811) Support Container Storage Interface (CSI) in YARN

2018-09-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623889#comment-16623889
 ] 

Eric Yang commented on YARN-8811:
-

[~cheersyang] Thank you for the proposal.  There are several common features in 
Docker and Kubernetes for mounting volumes: mount propagation flags, file 
system type, and source and destination mount points.  For object stores, there 
is user API key information that needs to be processed.  It would be nice if 
those features could be specified as part of the spec.

> Support Container Storage Interface (CSI) in YARN
> -
>
> Key: YARN-8811
> URL: https://issues.apache.org/jira/browse/YARN-8811
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: Support Container Storage Interface(CSI) in YARN_design 
> doc_20180921.pdf
>
>
> The Container Storage Interface (CSI) is a vendor neutral interface to bridge 
> Container Orchestrators and Storage Providers. With the adoption of CSI in 
> YARN, it will be easier to integrate 3rd party storage systems, and provide 
> the ability to attach persistent volumes for stateful applications.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8807) FairScheduler crashes RM with oversubscription turned on if an application is killed.

2018-09-21 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623809#comment-16623809
 ] 

Haibo Chen commented on YARN-8807:
--

The unit test failure, TestCapacityOverTimePolicy.testAllocation, is unrelated 
to this patch.

> FairScheduler crashes RM with oversubscription turned on if an application is 
> killed.
> -
>
> Key: YARN-8807
> URL: https://issues.apache.org/jira/browse/YARN-8807
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler, resourcemanager
>Affects Versions: YARN-1011
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8807-YARN-1011.00.patch
>
>
> When an application that has OPPORTUNISTIC containers allocated is killed, 
> its containers are not released immediately.
> The Fair Scheduler therefore continues to try to promote such orphaned 
> containers, which results in an NPE.
> {code:java}
> java.lang.NullPointerException
>     at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptToAssignReservedResourcesOrPromoteOpportunisticContainers(FairScheduler.java:1158)
>     at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.attemptScheduling(FairScheduler.java:1129)
>     at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.nodeUpdate(FairScheduler.java:1001)
>     at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.handle(FairScheduler.java:1275)
>     at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler.testKillingApplicationWithOpportunisticContainersAssigned(TestFairScheduler.java:4019){code}
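
One possible defensive guard, shown here as a self-contained sketch with hypothetical types rather than the actual FairScheduler code: promotion is skipped when the container's application is no longer tracked by the scheduler.
{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Self-contained sketch (hypothetical types, not the YARN-8807 patch): only
// promote a container if its application is still tracked by the scheduler.
public class PromotionGuardSketch {
  static Map<String, String> liveApps = new HashMap<>();   // appId -> queue

  public static void main(String[] args) {
    liveApps.put("app_1", "root.default");
    List<String[]> opportunistic = Arrays.asList(
        new String[]{"container_1", "app_1"},
        new String[]{"container_2", "app_2"});              // app_2 was killed

    for (String[] c : opportunistic) {
      String containerId = c[0], appId = c[1];
      if (!liveApps.containsKey(appId)) {
        // Without this guard the scheduler would dereference a missing app
        // attempt and crash with an NPE.
        System.out.println("Skip orphaned container " + containerId);
        continue;
      }
      System.out.println("Promote " + containerId + " for " + appId);
    }
  }
}
{code}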



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8808) Use aggregate container utilization instead of node utilization to determine resources available for oversubscription

2018-09-21 Thread Haibo Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-8808:
-
Attachment: YARN-8088-YARN-1011.01.patch

> Use aggregate container utilization instead of node utilization to determine 
> resources available for oversubscription
> -
>
> Key: YARN-8808
> URL: https://issues.apache.org/jira/browse/YARN-8808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-1011
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8088-YARN-1011.01.patch, 
> YARN-8808-YARN-1011.00.patch
>
>
> Resource oversubscription should be bound to the amount of the resources that 
> can be allocated to containers, hence the allocation threshold should be with 
> respect to aggregate container utilization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8808) Use aggregate container utilization instead of node utilization to determine resources available for oversubscription

2018-09-21 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623806#comment-16623806
 ] 

Haibo Chen commented on YARN-8808:
--

New patch to address the whitespace issue

> Use aggregate container utilization instead of node utilization to determine 
> resources available for oversubscription
> -
>
> Key: YARN-8808
> URL: https://issues.apache.org/jira/browse/YARN-8808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-1011
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
> Attachments: YARN-8088-YARN-1011.01.patch, 
> YARN-8808-YARN-1011.00.patch
>
>
> Resource oversubscription should be bound to the amount of the resources that 
> can be allocated to containers, hence the allocation threshold should be with 
> respect to aggregate container utilization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8789) Add BoundedQueue to AsyncDispatcher

2018-09-21 Thread BELUGA BEHR (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

BELUGA BEHR updated YARN-8789:
--
Attachment: YARN-8789.4.patch

> Add BoundedQueue to AsyncDispatcher
> ---
>
> Key: YARN-8789
> URL: https://issues.apache.org/jira/browse/YARN-8789
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: applications
>Affects Versions: 3.2.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: YARN-8789.1.patch, YARN-8789.2.patch, YARN-8789.3.patch, 
> YARN-8789.4.patch
>
>
> I recently came across a scenario where an MR ApplicationMaster was failing 
> with an OOM exception.  It had many thousands of Mappers and thousands of 
> Reducers.  It was noted in the logging that the event queue of 
> {{AsyncDispatcher}} had a very large number of items in it and was seemingly 
> never decreasing.
> I started looking at the code and thought it could use some cleanup, 
> simplification, and the ability to specify a bounded queue so that any 
> incoming events are throttled until they can be processed.  This will protect 
> the ApplicationMaster from a flood of events.
> Logging Message:
> Size of event-queue is xxx
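
A minimal sketch of the bounded-queue idea, using a LinkedBlockingQueue with a fixed capacity; this is illustrative only, not the proposed AsyncDispatcher patch.
{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch: producers block on put() once the capacity is reached, which
// throttles incoming events instead of letting the queue grow without bound.
public class BoundedDispatcherSketch {
  private final BlockingQueue<Runnable> eventQueue;

  public BoundedDispatcherSketch(int capacity) {
    this.eventQueue = new LinkedBlockingQueue<>(capacity);
  }

  public void dispatch(Runnable event) throws InterruptedException {
    eventQueue.put(event);          // blocks when the queue is full
  }

  public void startHandlerThread() {
    Thread t = new Thread(() -> {
      try {
        while (!Thread.currentThread().isInterrupted()) {
          eventQueue.take().run();  // drain and handle events one at a time
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    t.setDaemon(true);
    t.start();
  }
}
{code}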



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8563) [Submarine] Support users to specify Python/TF package/version/dependencies for training job.

2018-09-21 Thread Xun Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623763#comment-16623763
 ] 

Xun Liu commented on YARN-8563:
---

I don't think Submarine needs to care about image management and production 
issues, for the following reasons:
 # YARN 3.x already supports runtime environment isolation through Docker. If 
too much consideration is given to supporting dynamically installed 
dependencies on top of a base image, the point of Docker is lost.
 # Submarine needs to provide a Docker Hub repository for the project, offering 
several stable TensorFlow versions plus a full Python library as base images, 
which is sufficient for most users.
Ordinary users can generally complete their work with an off-the-shelf Docker 
image, and advanced users can customize one based on our base image.

So I think that building and managing images should be handled by the user and 
does not need to be included in the Submarine project. Submarine should focus 
more on high performance, stability, and ease of use.

> [Submarine] Support users to specify Python/TF package/version/dependencies 
> for training job.
> -
>
> Key: YARN-8563
> URL: https://issues.apache.org/jira/browse/YARN-8563
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Priority: Major
>
> YARN-8561 assumes all Python / Tensorflow dependencies will be packed to 
> docker image. In practice, user doesn't want to build docker image. Instead, 
> user can provide python package / dependencies (like .whl), Python and TF 
> version. And Submarine can localize specified dependencies to prebuilt base 
> Docker images.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-8563) [Submarine] Support users to specify Python/TF package/version/dependencies for training job.

2018-09-21 Thread Xun Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xun Liu updated YARN-8563:
--
Comment: was deleted

(was: I don't think submarine needs to care about image management and 
production issues for the following reasons:
 # YARN-3.x has supported runtime environment isolation through docker. If too 
much consideration is given to supporting dynamic installation dependencies on 
a base image, then the meaning of docker is lost.
 # Submarine need to provide a docker-hub repository for the submarine project, 
which provides several stable versions of the tensorflow version, plus a full 
python library as the base image, which is sufficient for the user to use.
Because ordinary users are generally able to use the off-the-shelf docker image 
to complete their work, it is satisfied.
If they are advanced users, they will customize it using our base image.

So I think that the production and management of the image is handled by the 
user and does not need to be included in the submarine project. Submarine need 
more focus on the high performance, stability and ease of use of the submarine.)

> [Submarine] Support users to specify Python/TF package/version/dependencies 
> for training job.
> -
>
> Key: YARN-8563
> URL: https://issues.apache.org/jira/browse/YARN-8563
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Priority: Major
>
> YARN-8561 assumes all Python / Tensorflow dependencies will be packed into the 
> docker image. In practice, users don't want to build a docker image. Instead, a 
> user can provide python packages / dependencies (like .whl files) and the Python 
> and TF versions, and Submarine can localize the specified dependencies onto 
> prebuilt base Docker images.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8563) [Submarine] Support users to specify Python/TF package/version/dependencies for training job.

2018-09-21 Thread Xun Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623762#comment-16623762
 ] 

Xun Liu commented on YARN-8563:
---

I don't think Submarine needs to care about image management and production 
issues, for the following reasons:
 # YARN 3.x already supports runtime environment isolation through Docker. If too 
much effort goes into supporting dynamic installation of dependencies on top of a 
base image, the point of using Docker is lost.
 # Submarine should provide a Docker Hub repository for the Submarine project 
that offers base images for several stable TensorFlow versions plus a full 
Python library; that is sufficient for most users.
Ordinary users can generally complete their work with the off-the-shelf Docker 
image, and advanced users can customize it starting from our base image.

So I think that building and managing the image should be handled by the user and 
does not need to be included in the Submarine project. Submarine should focus 
more on high performance, stability and ease of use.

> [Submarine] Support users to specify Python/TF package/version/dependencies 
> for training job.
> -
>
> Key: YARN-8563
> URL: https://issues.apache.org/jira/browse/YARN-8563
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Priority: Major
>
> YARN-8561 assumes all Python / Tensorflow dependencies will be packed into the 
> docker image. In practice, users don't want to build a docker image. Instead, a 
> user can provide python packages / dependencies (like .whl files) and the Python 
> and TF versions, and Submarine can localize the specified dependencies onto 
> prebuilt base Docker images.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8804) resourceLimits may be wrongly calculated when leaf-queue is blocked in cluster with 3+ level queues

2018-09-21 Thread Jason Lowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-8804:
-
Target Version/s: 2.10.0, 3.2.0, 2.9.2, 3.0.4, 3.1.2, 2.8.6

Thanks for updating the patch!  This is a good performance improvement.  
However, I still think it would be cleaner to have the allocation directly track 
the amount relevant to an allocation blocked by queue limits; it would remove the 
need to do RTTI on child queues.

But that's a much bigger change, and I'm OK with this approach for now.

The patch does not apply to trunk and needs to be rebased.  After doing so, please 
move the JIRA to Patch Available so Jenkins can comment on it.


> resourceLimits may be wrongly calculated when leaf-queue is blocked in 
> cluster with 3+ level queues
> ---
>
> Key: YARN-8804
> URL: https://issues.apache.org/jira/browse/YARN-8804
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.2.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Critical
> Attachments: YARN-8804.001.patch, YARN-8804.002.patch
>
>
> This problem is due to YARN-4280: a parent queue deducts a child queue's 
> headroom when the child queue has reached its resource limit and the skipped 
> type is QUEUE_LIMIT. The resource limit of the deepest parent queue is then 
> calculated correctly, but for a non-deepest parent queue the headroom may be 
> much more than the sum of the reached-limit child queues' headroom, so the 
> resource limit of the non-deepest parent may be much less than its true value 
> and block the allocation for later queues.
> To reproduce this problem with a UT:
>  (1) The cluster has two nodes, each with node resource <10GB, 10core>, and 
> 3-level queues as below; the max-capacity of "c1" is 10 and all others are 100, 
> so the max-capacity of queue "c1" is <2GB, 2core>
> {noformat}
>           Root
>         /  |  \
>        a   b   c
>       10  20   70
>                |  \
>               c1   c2
>       10(max=10)   90
> {noformat}
> (2) Submit app1 to queue "c1" and launch am1 (resource=<1GB, 1core>) on nm1
>  (3) Submit app2 to queue "b" and launch am2 (resource=<1GB, 1core>) on nm1
>  (4) app1 and app2 both ask for one <2GB, 1core> container.
>  (5) nm1 does one heartbeat
>  Now queue "c" has a lower capacity percentage than queue "b", so the allocation 
> sequence will be "a" -> "c" -> "b";
>  queue "c1" has reached its queue limit, so the requests of app1 should be 
> pending;
>  the headroom of queue "c1" is <1GB, 1core> (= max-capacity - used);
>  the headroom of queue "c" is <18GB, 18core> (= max-capacity - used);
>  after the allocation for queue "c", the resource limit of queue "b" will be 
> wrongly calculated as <2GB, 2core>,
>  and the headroom of queue "b" will be <1GB, 1core> (= resource-limit - used),
>  so the scheduler won't allocate a container for app2 on nm1
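
Putting the numbers from the steps above side by side (the derivation lines are my 
reading of the deduction described in the first paragraph, not text taken from the 
report):
{noformat}
cluster resource                        = <20GB, 20core>  (2 nodes x <10GB, 10core>)
headroom(c1) (= max-capacity - used)    = <1GB, 1core>
headroom(c)  (= max-capacity - used)    = <18GB, 18core>
limit(b) as computed                    = cluster - headroom(c)  = <2GB, 2core>
headroom(b)  (= resource-limit - used)  = <1GB, 1core>  -> the <2GB, 1core> ask cannot fit
limit(b) deducting only headroom(c1)    = cluster - headroom(c1) = <19GB, 19core>
{noformat}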



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8777) Container Executor C binary change to execute interactive docker command

2018-09-21 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623675#comment-16623675
 ] 

Eric Badger commented on YARN-8777:
---

Ok, I see what you're saying. I'm fine with the modifications coming from the 
java side when it writes out the .cmd file. It probably makes more sense to do 
the heavy lifting on the java side instead of in the container-executor anyway. 
My concern was the launch-command coming from the user and then running into the 
same serialization problem noted in YARN-8805. But when that comes along, we can 
do the same thing and do the string modifications in java land before we write 
out the .cmd file.

> Container Executor C binary change to execute interactive docker command
> 
>
> Key: YARN-8777
> URL: https://issues.apache.org/jira/browse/YARN-8777
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8777.001.patch, YARN-8777.002.patch, 
> YARN-8777.003.patch
>
>
> Since Container Executor provides container execution using the native 
> container-executor binary, we also need to make changes to accept a new 
> “dockerExec” method that invokes the corresponding native function to execute a 
> docker exec command against the running container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-09-21 Thread Simon Prewo (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623648#comment-16623648
 ] 

Simon Prewo commented on YARN-8785:
---

[~Zian Chen] Thanks for your support - feel free to review and give feedback ;)

> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in the property 
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "it is not possible due to a docker 
> issue". My suggestion would be to use a message like *Configuration of the 
> container executor does not allow mounting directory.* instead.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8785) Error Message "Invalid docker rw mount" not helpful

2018-09-21 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623622#comment-16623622
 ] 

ASF GitHub Bot commented on YARN-8785:
--

GitHub user simonprewo opened a pull request:

https://github.com/apache/hadoop/pull/417

YARN-8785 Error Message "Invalid docker rw mount" not helpful

https://jira.apache.org/jira/browse/YARN-8785

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/simonprewo/hadoop patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/417.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #417


commit 06196db3c91e57558ef1c6d80078a96b39c1eb89
Author: Simon Prewo 
Date:   2018-09-21T13:32:25Z

YARN-8785 Error Message "Invalid docker rw mount" not helpful




> Error Message "Invalid docker rw mount" not helpful
> ---
>
> Key: YARN-8785
> URL: https://issues.apache.org/jira/browse/YARN-8785
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.9.1, 3.1.1
>Reporter: Simon Prewo
>Assignee: Simon Prewo
>Priority: Major
>  Labels: Docker
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> A user receives the error message _Invalid docker rw mount_ when a container 
> tries to mount a directory which is not configured in the property 
> *docker.allowed.rw-mounts*. 
> {code:java}
> Invalid docker rw mount 
> '/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01:/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01',
>  
> realpath=/usr/local/hadoop/logs/userlogs/application_1536476159258_0004/container_1536476159258_0004_02_01{code}
> The error message makes the user think "it is not possible due to a docker 
> issue". My suggestion would be to use a message like *Configuration of the 
> container executor does not allow mounting directory.* instead.
> hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
> CURRENT:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Invalid docker mount '%s', realpath=%s\n", 
> values[i], mount_src);
> ...
> {code}
> NEW:
> {code:java}
> permitted_rw = check_mount_permitted((const char **) permitted_rw_mounts, 
> mount_src);
> permitted_ro = check_mount_permitted((const char **) permitted_ro_mounts, 
> mount_src);
> if (permitted_ro == -1 || permitted_rw == -1) {
>   fprintf(ERRORFILE, "Configuration of the container executor does not 
> allow mounting directory '%s', realpath=%s\n", values[i], mount_src);
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8806) Enable local staging directory and clean it up when submarine job is submitted

2018-09-21 Thread Zac Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623619#comment-16623619
 ] 

Zac Zhou commented on YARN-8806:


[~tangzhankun] Thanks a lot for your suggestion. :)

I had considered that idea before, but I'm not sure whether azkaban, zeppelin and 
other systems would submit a submarine job in a new process or in their own 
processes.

If creating a new process is the recommended method, we should use the way you 
suggested.

> Enable local staging directory and clean it up when submarine job is submitted
> --
>
> Key: YARN-8806
> URL: https://issues.apache.org/jira/browse/YARN-8806
> Project: Hadoop YARN
>  Issue Type: Sub-task
> Environment: In the /tmp dir, there are launch scripts which are not 
> cleaned up as follows:
> -rw-r--r-- 1 hadoop netease 1100 Sep 18 10:46 
> PRIMARY_WORKER-launch-script8635233314077649086.sh
> -rw-r--r-- 1 hadoop netease 1100 Sep 18 10:46 
> WORKER-launch-script129488020578466938.sh
> -rw-r--r-- 1 hadoop netease 1028 Sep 18 10:46 
> PS-launch-script471092031021738136.sh
>Reporter: Zac Zhou
>Assignee: Zac Zhou
>Priority: Major
> Attachments: YARN-8806.001.patch, YARN-8806.002.patch, 
> YARN-8806.003.patch
>
>
> YarnServiceJobSubmitter.generateCommandLaunchScript creates container launch 
> scripts in the local filesystem.  Container launch scripts are uploaded to the 
> hdfs staging dir, but are not deleted after the job is submitted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8759) Copy of "resource-types.xml" is not deleted if test fails, causes other test failures

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623565#comment-16623565
 ] 

Hadoop QA commented on YARN-8759:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
13s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 74m 56s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  2m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}168m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8759 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940755/YARN-8759.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e0bb74f244a2 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 524f7cd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (YARN-8806) Enable local staging directory and clean it up when submarine job is submitted

2018-09-21 Thread Zhankun Tang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623494#comment-16623494
 ] 

Zhankun Tang commented on YARN-8806:


[~yuan_zac] Thanks for providing the patch. I also noticed this problem before. 
Here is my suggestion for your reference: we can also use tempfile.deleteOnExit() 
to delete it when the JVM exits (except on a crash).
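
As a small illustration of the deleteOnExit() idea (a sketch only, not the actual 
YarnServiceJobSubmitter code; file names and contents are hypothetical):
{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

public class LaunchScriptCleanupSketch {
  public static void main(String[] args) throws IOException {
    // Create the launch script as a temp file and register it for deletion
    // when the JVM exits normally (this does not help if the JVM crashes).
    File script = File.createTempFile("WORKER-launch-script", ".sh");
    script.deleteOnExit();
    Files.write(script.toPath(),
        "#!/bin/bash\necho hello\n".getBytes(StandardCharsets.UTF_8));
    // ... upload the script to the HDFS staging dir, then submit the job ...
  }
}
{code}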

> Enable local staging directory and clean it up when submarine job is submitted
> --
>
> Key: YARN-8806
> URL: https://issues.apache.org/jira/browse/YARN-8806
> Project: Hadoop YARN
>  Issue Type: Sub-task
> Environment: In the /tmp dir, there are launch scripts which are not 
> cleaned up as follows:
> -rw-r--r-- 1 hadoop netease 1100 Sep 18 10:46 
> PRIMARY_WORKER-launch-script8635233314077649086.sh
> -rw-r--r-- 1 hadoop netease 1100 Sep 18 10:46 
> WORKER-launch-script129488020578466938.sh
> -rw-r--r-- 1 hadoop netease 1028 Sep 18 10:46 
> PS-launch-script471092031021738136.sh
>Reporter: Zac Zhou
>Assignee: Zac Zhou
>Priority: Major
> Attachments: YARN-8806.001.patch, YARN-8806.002.patch, 
> YARN-8806.003.patch
>
>
> YarnServiceJobSubmitter.generateCommandLaunchScript creates container launch 
> scripts in the local filesystem.  Container launch scripts are uploaded to the 
> hdfs staging dir, but are not deleted after the job is submitted.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8812) Containers fail during creating a symlink which started with hyphen for a resource file

2018-09-21 Thread Oleksandr Shevchenko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623439#comment-16623439
 ] 

Oleksandr Shevchenko edited comment on YARN-8812 at 9/21/18 11:48 AM:
--

TestNMProxy#testNMProxyRPCRetry failed by a different cause. This test failed 
in trunk without the patch. UnknownHostException was thrown instead of expected 
SocketException.


was (Author: oshevchenko):
Test Proxy#testNMProxyRPCRetry failed by a different cause. This test failed in 
trunk without the patch. UnknownHostException was thrown instead of expected 
SocketException.

> Containers fail during creating a symlink which started with hyphen for a 
> resource file
> ---
>
> Key: YARN-8812
> URL: https://issues.apache.org/jira/browse/YARN-8812
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Oleksandr Shevchenko
>Assignee: Oleksandr Shevchenko
>Priority: Minor
> Attachments: YARN-8812.001.patch
>
>
> When we run a job and add a file with an alias that starts with a hyphen, the 
> container fails while creating a symlink for the resource file:
> {noformat}
> yarn jar hadoop-mapreduce-examples.jar pi -files testfile#-symlink  1 1
> {noformat}
> or add a file to the distributed cache in an MR job via "job.addCacheFile".
> Containers fail with the following error if the resource file has a symlink that 
> starts with a hyphen:
> {noformat}
> Stack trace: ExitCodeException exitCode=1: 
> /tmp/hadoop-yarn/nm-local-dir/usercache/yarn/appcache/application_1537449069809_0022/container_e01_1537449069809_0022_02_01/launch_container.sh
> ln: invalid option -- 'y'
> Try 'ln --help' for more information.
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:572)
> at org.apache.hadoop.util.Shell.run(Shell.java:466)
> at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:768)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:306)
> {noformat}
> The main cause of the problem is the "launch_container.sh" script, which 
> contains the following command for creating a symlink:
> {noformat}
> ln -sf "/tmp/hadoop-yarn/nm-local-dir/usercache/yarn/filecache/49/testfile" 
> "-symlink"
> {noformat}
> As a result, "-symlink" is parsed as the "-s" flag rather than as a symlink name.
> The same job passes successfully when running on MRv1 but not on YARN, since 
> symlinks are created in different ways. Unix systems support names which start 
> with a hyphen.
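
Not the attached patch, just a sketch of one possible Java-side fix with a 
hypothetical helper: prefix link names that start with a hyphen with "./" when the 
launch script is generated, so ln does not treat them as options.
{code:java}
public class SymlinkNameSketch {
  // Hypothetical helper: make a user-supplied symlink name safe to pass to "ln -sf".
  static String safeLinkName(String linkName) {
    return linkName.startsWith("-") ? "./" + linkName : linkName;
  }

  public static void main(String[] args) {
    String target = "/tmp/hadoop-yarn/nm-local-dir/usercache/yarn/filecache/49/testfile";
    // Emits: ln -sf "..." "./-symlink" instead of the failing "-symlink"
    System.out.println("ln -sf \"" + target + "\" \"" + safeLinkName("-symlink") + "\"");
  }
}
{code}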



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8812) Containers fail during creating a symlink which started with hyphen for a resource file

2018-09-21 Thread Oleksandr Shevchenko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623439#comment-16623439
 ] 

Oleksandr Shevchenko edited comment on YARN-8812 at 9/21/18 11:38 AM:
--

Test Proxy#testNMProxyRPCRetry failed by a different cause. This test failed in 
trunk without the patch. UnknownHostException was thrown instead of expected 
SocketException.


was (Author: oshevchenko):
Test Proxy#testNMProxyRPCRetry failed by a different cause. This test failed in 
trunk without the patch.

> Containers fail during creating a symlink which started with hyphen for a 
> resource file
> ---
>
> Key: YARN-8812
> URL: https://issues.apache.org/jira/browse/YARN-8812
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Oleksandr Shevchenko
>Assignee: Oleksandr Shevchenko
>Priority: Minor
> Attachments: YARN-8812.001.patch
>
>
> When we run a job and add a file with an alias that starts with a hyphen, the 
> container fails while creating a symlink for the resource file:
> {noformat}
> yarn jar hadoop-mapreduce-examples.jar pi -files testfile#-symlink  1 1
> {noformat}
> or add a file to the distributed cache in an MR job via "job.addCacheFile".
> Containers fail with the following error if the resource file has a symlink that 
> starts with a hyphen:
> {noformat}
> Stack trace: ExitCodeException exitCode=1: 
> /tmp/hadoop-yarn/nm-local-dir/usercache/yarn/appcache/application_1537449069809_0022/container_e01_1537449069809_0022_02_01/launch_container.sh
> ln: invalid option -- 'y'
> Try 'ln --help' for more information.
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:572)
> at org.apache.hadoop.util.Shell.run(Shell.java:466)
> at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:768)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:306)
> {noformat}
> The main cause of the problem is the "launch_container.sh" script, which 
> contains the following command for creating a symlink:
> {noformat}
> ln -sf "/tmp/hadoop-yarn/nm-local-dir/usercache/yarn/filecache/49/testfile" 
> "-symlink"
> {noformat}
> As a result, "-symlink" is parsed as the "-s" flag rather than as a symlink name.
> The same job passes successfully when running on MRv1 but not on YARN, since 
> symlinks are created in different ways. Unix systems support names which start 
> with a hyphen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8812) Containers fail during creating a symlink which started with hyphen for a resource file

2018-09-21 Thread Oleksandr Shevchenko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623439#comment-16623439
 ] 

Oleksandr Shevchenko commented on YARN-8812:


Test Proxy#testNMProxyRPCRetry failed by a different cause. This test failed in 
trunk without the patch.

> Containers fail during creating a symlink which started with hyphen for a 
> resource file
> ---
>
> Key: YARN-8812
> URL: https://issues.apache.org/jira/browse/YARN-8812
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Oleksandr Shevchenko
>Assignee: Oleksandr Shevchenko
>Priority: Minor
> Attachments: YARN-8812.001.patch
>
>
> When we run a job and add a file with an alias that starts with a hyphen, the 
> container fails while creating a symlink for the resource file:
> {noformat}
> yarn jar hadoop-mapreduce-examples.jar pi -files testfile#-symlink  1 1
> {noformat}
> or add a file to the distributed cache in an MR job via "job.addCacheFile".
> Containers fail with the following error if the resource file has a symlink that 
> starts with a hyphen:
> {noformat}
> Stack trace: ExitCodeException exitCode=1: 
> /tmp/hadoop-yarn/nm-local-dir/usercache/yarn/appcache/application_1537449069809_0022/container_e01_1537449069809_0022_02_01/launch_container.sh
> ln: invalid option -- 'y'
> Try 'ln --help' for more information.
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:572)
> at org.apache.hadoop.util.Shell.run(Shell.java:466)
> at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:768)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:306)
> {noformat}
> The main cause of the problem is the "launch_container.sh" script, which 
> contains the following command for creating a symlink:
> {noformat}
> ln -sf "/tmp/hadoop-yarn/nm-local-dir/usercache/yarn/filecache/49/testfile" 
> "-symlink"
> {noformat}
> As a result, "-symlink" is parsed as the "-s" flag rather than as a symlink name.
> The same job passes successfully when running on MRv1 but not on YARN, since 
> symlinks are created in different ways. Unix systems support names which start 
> with a hyphen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8468) Enable the use of queue based maximum container allocation limit and implement it in FairScheduler

2018-09-21 Thread Weiwei Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623426#comment-16623426
 ] 

Weiwei Yang commented on YARN-8468:
---

Thanks [~bsteinbach] for the updates.

One thing I am not sure about: since {{ApplicationMasterProtocol}} is the only 
contract we have for requests, isn't it enough to just do the normalization in 
the AMS processor's allocate path? Why do we need to do it again in the scheduler? 
I know this is not introduced by this patch, but I just want to understand the 
idea behind the normalization and see if it is possible to avoid changing the 
scheduler API {{YarnScheduler}}.

Also, the behaviour currently seems inconsistent between FS and CS.

CS doesn't normalize with max allocation at queue level
{code:java}
public Allocation allocate(...) {
   
   // Sanity check for new allocation requests
   normalizeResourceRequests(ask);
   // Normalize scheduling requests
   normalizeSchedulingRequests(schedulingRequests);
   ...
}{code}
 But in FS it does,
{code:java}
public Allocation allocate() {
    
    // Sanity check
    normalizeResourceRequests(ask, queue.getName());
    
}{code}
am I missing anything [~leftnoteasy], [~bsteinbach]?

And some other minor comments:

1) TestApplicationMasterServiceWithFS
{code:java}
String TEST_DIR = new File(System.getProperty("test.build.data", 
"/tmp")).getAbsolutePath();
{code}
can be replaced by
{code:java}
GenericTestUtils.getTestDir("xxx");
{code}
Also, please remember to clean up the test dir in teardown. This comment applies 
to {{TestAppManagerWithFairScheduler}} as well.

2) Instead of pulling {{TestRMAppManager}} out as a separate class, would it be 
better to create a class {{AppManagerTestBase}} and move it into that as an inner 
class? Then let {{TestAppManager}} and {{TestAppManagerWithFairScheduler}} both 
extend {{AppManagerTestBase}}. {{TestRMAppManager}} sounds too much like the name 
of a test class.

Please take one more look at the checkstyle issues and fix what you can, e.g. I 
see some unused imports in {{TestRMAppManager}} and {{TestAppManager}}.

Thanks.

> Enable the use of queue based maximum container allocation limit and 
> implement it in FairScheduler
> --
>
> Key: YARN-8468
> URL: https://issues.apache.org/jira/browse/YARN-8468
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Critical
> Attachments: YARN-8468.000.patch, YARN-8468.001.patch, 
> YARN-8468.002.patch, YARN-8468.003.patch, YARN-8468.004.patch, 
> YARN-8468.005.patch, YARN-8468.006.patch, YARN-8468.007.patch, 
> YARN-8468.008.patch, YARN-8468.009.patch, YARN-8468.010.patch, 
> YARN-8468.011.patch, YARN-8468.012.patch, YARN-8468.013.patch, 
> YARN-8468.014.patch, YARN-8468.015.patch, YARN-8468.016.patch
>
>
> When using any scheduler, you can use "yarn.scheduler.maximum-allocation-mb" 
> to limit the overall size of a container. This applies globally to all 
> containers, cannot be limited per queue, and is not scheduler dependent.
> The goal of this ticket is to allow this value to be set on a per-queue basis.
> The use case: a user has two pools, one for ad hoc jobs and one for enterprise 
> apps. The user wants to limit ad hoc jobs to small containers but allow 
> enterprise apps to request as many resources as needed. Setting 
> yarn.scheduler.maximum-allocation-mb sets a default value for the maximum 
> container size for all queues, and the maximum resources per queue are set with 
> the “maxContainerResources” queue config value.
> Suggested solution:
> All the infrastructure is already in the code. We need to do the following:
>  * add the setting to the queue properties for all queue types (parent and 
> leaf), this will cover dynamically created queues.
>  * if we set it on the root we override the scheduler setting and we should 
> not allow that.
>  * make sure that queue resource cap can not be larger than scheduler max 
> resource cap in the config.
>  * implement getMaximumResourceCapability(String queueName) in the 
> FairScheduler
>  * implement getMaximumResourceCapability(String queueName) in both 
> FSParentQueue and FSLeafQueue as follows
>  * expose the setting in the queue information in the RM web UI.
>  * expose the setting in the metrics etc for the queue.
>  * Enforce the use of the queue-based maximum allocation limit if it is 
> available; if not, use the general scheduler-level setting (a rough sketch of 
> this fallback follows this description)
>  ** Use it during validation and normalization of requests in 
> scheduler.allocate, app submit and resource request
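
A rough sketch of the fallback named in the last bullet above, with hypothetical 
method names (the real change would live in the FairScheduler/queue classes): use 
the queue-level maximum allocation when it is configured, otherwise fall back to 
the scheduler-wide setting.
{code:java}
public class QueueMaxAllocationSketch {
  // queueMaxMb is the queue-level max allocation in MB, or null when not configured.
  static long maxAllocationMb(Long queueMaxMb, long schedulerMaxMb) {
    if (queueMaxMb == null || queueMaxMb <= 0) {
      return schedulerMaxMb;                     // no per-queue cap: scheduler default
    }
    return Math.min(queueMaxMb, schedulerMaxMb); // queue cap may not exceed scheduler cap
  }

  public static void main(String[] args) {
    System.out.println(maxAllocationMb(4096L, 8192)); // ad hoc queue capped at 4 GB
    System.out.println(maxAllocationMb(null, 8192));  // enterprise queue: scheduler max
  }
}
{code}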



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[jira] [Commented] (YARN-8628) [UI2] Few duplicated or inconsistent information displayed in UI2

2018-09-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623416#comment-16623416
 ] 

Hudson commented on YARN-8628:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15034 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15034/])
YARN-8628. [UI2] Few duplicated or inconsistent information displayed in 
(sunilg: rev a2752779ac1545f5e0a52fce3cff02a7007e95fb)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-component-instances/info.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app/components.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-container.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-component-instance.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-component-instance/info.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-timeline-container.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-component-instance/info.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-component-instance/info.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-service-component.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/configs.hbs


> [UI2] Few duplicated or inconsistent information displayed in UI2
> -
>
> Key: YARN-8628
> URL: https://issues.apache.org/jira/browse/YARN-8628
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8628.001.patch
>
>
> 1. Regardless of which component-instance we click on, it always lands on the 
> component-instance detail page for the first container. It should take us to the 
> component-instance page of the corresponding container.
> 2. Exit Status Code in Component Instance Information detail page says 0, but 
> says N/A in Containers Grid View page.
> 3. Host URL and IP Address are N/A in Component Instance Information detail 
> page, but has valid values in Containers Grid View page.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8812) Containers fail during creating a symlink which started with hyphen for a resource file

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623412#comment-16623412
 ] 

Hadoop QA commented on YARN-8812:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 2 new + 143 unchanged - 1 fixed = 145 total (was 144) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 25s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestNMProxy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8812 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940754/YARN-8812.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4d4e4f1d146a 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 524f7cd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/21925/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 

[jira] [Comment Edited] (YARN-7957) [UI2] Yarn service delete option disappears after stopping application

2018-09-21 Thread Akhil PB (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623380#comment-16623380
 ] 

Akhil PB edited comment on YARN-7957 at 9/21/18 10:29 AM:
--

Below is the updated service states from the latest trunk code.
{code:java}
public enum ServiceState {
  ACCEPTED, STARTED, STABLE, STOPPED, FAILED, FLEX, UPGRADING,
  UPGRADING_AUTO_FINALIZE, EXPRESS_UPGRADING, SUCCEEDED;
}
{code}
So for a deployed service,
1. Show both Stop and Delete button when states are
{code:java}
ACCEPTED, STARTED, STABLE, FLEX, UPGRADING, UPGRADING_AUTO_FINALIZE, 
EXPRESS_UPGRADING
{code}
2. Show Delete button only when state is
{code:java}
STOPPED/SUCCEEDED
{code}
3. Do not show any buttons when state is 
{code:java}
FAILED
{code}
OR if the response code is 404 error.

cc [~sunilg] [~gsaha] Please share your thoughts.


was (Author: akhilpb):
Below is the updated service states from the latest trunk code.
{code:java}
public enum ServiceState {
  ACCEPTED, STARTED, STABLE, STOPPED, FAILED, FLEX, UPGRADING,
  UPGRADING_AUTO_FINALIZE, EXPRESS_UPGRADING, SUCCEEDED;
}
{code}
So for a deployed service,
1. Show both Stop and Delete button when states are
{code:java}
ACCEPTED, STARTED, STABLE, FLEX, UPGRADING, UPGRADING_AUTO_FINALIZE, 
EXPRESS_UPGRADING
{code}
2. Show Delete button only when state is
{code:java}
STOPPED, SUCCEEDED
{code}
3. Do not show any buttons when state is 
{code:java}
FAILED
{code}
OR if the response code is 404 error.

cc [~sunilg] [~gsaha] Please share your thoughts.

> [UI2] Yarn service delete option disappears after stopping application
> --
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Akhil PB
>Priority: Critical
> Attachments: YARN-7957.001.patch
>
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, the setting button disappears. Thus, the user cannot delete the service 
> from the UI after stopping the application.
> Expected behavior:
> The setting button should be present on the UI page after the application is 
> stopped. If the application is stopped, the setting button should only have the 
> "Delete Service" action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7957) [UI2] Yarn service delete option disappears after stopping application

2018-09-21 Thread Akhil PB (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623380#comment-16623380
 ] 

Akhil PB edited comment on YARN-7957 at 9/21/18 10:29 AM:
--

Below is the updated service states from the latest trunk code.
{code:java}
public enum ServiceState {
  ACCEPTED, STARTED, STABLE, STOPPED, FAILED, FLEX, UPGRADING,
  UPGRADING_AUTO_FINALIZE, EXPRESS_UPGRADING, SUCCEEDED;
}
{code}
So for a deployed service,
1. Show both Stop and Delete button when states are
{code:java}
ACCEPTED, STARTED, STABLE, FLEX, UPGRADING, UPGRADING_AUTO_FINALIZE, 
EXPRESS_UPGRADING
{code}
2. Show Delete button only when state is
{code:java}
STOPPED, SUCCEEDED
{code}
3. Do not show any buttons when state is 
{code:java}
FAILED
{code}
OR if the response code is 404 error.

cc [~sunilg] [~gsaha] Please share your thoughts.


was (Author: akhilpb):
Hi [~sunilg]

Below is the updated service states from the latest trunk code.
{code:java}
public enum ServiceState {
  ACCEPTED, STARTED, STABLE, STOPPED, FAILED, FLEX, UPGRADING,
  UPGRADING_AUTO_FINALIZE, EXPRESS_UPGRADING, SUCCEEDED;
}
{code}
So for a deployed service,
1. Show both Stop and Delete button when states are
{code:java}
ACCEPTED, STARTED, STABLE, FLEX, UPGRADING, UPGRADING_AUTO_FINALIZE, 
EXPRESS_UPGRADING
{code}
2. Show Delete button only when state is
{code:java}
STOPPED, SUCCEEDED
{code}
3. Do not show any buttons when state is 
{code:java}
FAILED
{code}
OR if the response code is 404 error.

Please share your thoughts.

> [UI2] Yarn service delete option disappears after stopping application
> --
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Akhil PB
>Priority: Critical
> Attachments: YARN-7957.001.patch
>
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, the setting button disappears. Thus, the user cannot delete the service 
> from the UI after stopping the application.
> Expected behavior:
> The setting button should be present on the UI page after the application is 
> stopped. If the application is stopped, the setting button should only have the 
> "Delete Service" action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7957) [UI2] Yarn service delete option disappears after stopping application

2018-09-21 Thread Akhil PB (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623380#comment-16623380
 ] 

Akhil PB edited comment on YARN-7957 at 9/21/18 10:26 AM:
--

Hi [~sunilg]

Below is the updated service states from the latest trunk code.
{code:java}
public enum ServiceState {
  ACCEPTED, STARTED, STABLE, STOPPED, FAILED, FLEX, UPGRADING,
  UPGRADING_AUTO_FINALIZE, EXPRESS_UPGRADING, SUCCEEDED;
}
{code}
So for a deployed service,
1. Show both Stop and Delete button when states are
{code:java}
ACCEPTED, STARTED, STABLE, FLEX, UPGRADING, UPGRADING_AUTO_FINALIZE, 
EXPRESS_UPGRADING
{code}
2. Show Delete button only when state is
{code:java}
STOPPED, SUCCEEDED
{code}
3. Do not show any buttons when state is 
{code:java}
FAILED
{code}
OR if the response code is 404 error.

Please share your thoughts.


was (Author: akhilpb):
Hi [~sunilg]

Below is the updated service states.
{code:java}
public enum ServiceState {
  ACCEPTED, STARTED, STABLE, STOPPED, FAILED, FLEX, UPGRADING,
  UPGRADING_AUTO_FINALIZE, EXPRESS_UPGRADING, SUCCEEDED;
}
{code}
So for a deployed service,
1. Show both Stop and Delete button when states are
{code:java}
ACCEPTED, STARTED, STABLE, FLEX, UPGRADING, UPGRADING_AUTO_FINALIZE, 
EXPRESS_UPGRADING
{code}
2. Show Delete button only when state is
{code:java}
STOPPED, SUCCEEDED
{code}
3. Do not show any buttons when state is 
{code:java}
FAILED
{code}
OR if the response code is 404 error.

Please share your thoughts.

> [UI2] Yarn service delete option disappears after stopping application
> --
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Akhil PB
>Priority: Critical
> Attachments: YARN-7957.001.patch
>
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, the setting button disappears. Thus, the user cannot delete the service 
> from the UI after stopping the application.
> Expected behavior:
> The setting button should be present on the UI page after the application is 
> stopped. If the application is stopped, the setting button should only have the 
> "Delete Service" action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7957) [UI2] Yarn service delete option disappears after stopping application

2018-09-21 Thread Akhil PB (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623380#comment-16623380
 ] 

Akhil PB edited comment on YARN-7957 at 9/21/18 10:25 AM:
--

Hi [~sunilg]

Below is the updated service states.
{code:java}
public enum ServiceState {
  ACCEPTED, STARTED, STABLE, STOPPED, FAILED, FLEX, UPGRADING,
  UPGRADING_AUTO_FINALIZE, EXPRESS_UPGRADING, SUCCEEDED;
}
{code}
So for a deployed service,
1. Show both Stop and Delete button when states are
{code:java}
ACCEPTED, STARTED, STABLE, FLEX, UPGRADING, UPGRADING_AUTO_FINALIZE, 
EXPRESS_UPGRADING
{code}
2. Show Delete button only when state is
{code:java}
STOPPED, SUCCEEDED
{code}
3. Do not show any buttons when state is 
{code:java}
FAILED
{code}
OR if the response code is 404 error.

Please share your thoughts.


was (Author: akhilpb):
Hi [~sunilg]

Below is the updated service states.
{code:java}
public enum ServiceState {
  ACCEPTED, STARTED, STABLE, STOPPED, FAILED, FLEX, UPGRADING,
  UPGRADING_AUTO_FINALIZE, EXPRESS_UPGRADING, SUCCEEDED;
}
{code}
So for a deployed service,
1. Show both Stop and Delete button when states are
{code:java}
ACCEPTED, STARTED, STABLE, FLEX, UPGRADING, UPGRADING_AUTO_FINALIZE, 
EXPRESS_UPGRADING
{code}
2. Show Delete button only when state is
{code:java}
STOPPED, SUCCEEDED
{code}
3. Do not show any buttons when state is 
{code:java}
FAILED
{code}
OR if response code is 404 error.

Please share your thoughts.

> [UI2] Yarn service delete option disappears after stopping application
> --
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Akhil PB
>Priority: Critical
> Attachments: YARN-7957.001.patch
>
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, the setting button disappears. Thus, the user cannot delete the service 
> from the UI after stopping the application.
> Expected behavior:
> The setting button should be present on the UI page after the application is 
> stopped. If the application is stopped, the setting button should only have the 
> "Delete Service" action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7957) [UI2] Yarn service delete option disappears after stopping application

2018-09-21 Thread Akhil PB (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623380#comment-16623380
 ] 

Akhil PB commented on YARN-7957:


Hi [~sunilg]

Below is the updated service states.
{code:java}
public enum ServiceState {
  ACCEPTED, STARTED, STABLE, STOPPED, FAILED, FLEX, UPGRADING,
  UPGRADING_AUTO_FINALIZE, EXPRESS_UPGRADING, SUCCEEDED;
}
{code}
So for a deployed service,
1. Show both Stop and Delete button when states are
{code:java}
ACCEPTED, STARTED, STABLE, FLEX, UPGRADING, UPGRADING_AUTO_FINALIZE, 
EXPRESS_UPGRADING
{code}
2. Show Delete button only when state is
{code:java}
STOPPED, SUCCEEDED
{code}
3. Do not show any buttons when state is 
{code:java}
FAILED
{code}
OR if response code is 404 error.

Please share your thoughts.
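
For illustration only, a plain-Java sketch of the button rules listed above (the 
real UI2 logic lives in the Ember JS app; the class and method names below are 
hypothetical):
{code:java}
import java.util.EnumSet;

public class ServiceButtonsSketch {
  enum ServiceState {
    ACCEPTED, STARTED, STABLE, STOPPED, FAILED, FLEX, UPGRADING,
    UPGRADING_AUTO_FINALIZE, EXPRESS_UPGRADING, SUCCEEDED
  }

  enum Button { STOP, DELETE }

  // States where both Stop and Delete are shown.
  static final EnumSet<ServiceState> STOP_AND_DELETE = EnumSet.of(
      ServiceState.ACCEPTED, ServiceState.STARTED, ServiceState.STABLE,
      ServiceState.FLEX, ServiceState.UPGRADING,
      ServiceState.UPGRADING_AUTO_FINALIZE, ServiceState.EXPRESS_UPGRADING);

  // Returns the buttons to show; empty for FAILED (a 404 from the service lookup
  // would be handled before this method is called).
  static EnumSet<Button> buttonsFor(ServiceState state) {
    if (STOP_AND_DELETE.contains(state)) {
      return EnumSet.of(Button.STOP, Button.DELETE);
    }
    if (state == ServiceState.STOPPED || state == ServiceState.SUCCEEDED) {
      return EnumSet.of(Button.DELETE);
    }
    return EnumSet.noneOf(Button.class);
  }
}
{code}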

> [UI2] Yarn service delete option disappears after stopping application
> --
>
> Key: YARN-7957
> URL: https://issues.apache.org/jira/browse/YARN-7957
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.1.0
>Reporter: Yesha Vora
>Assignee: Akhil PB
>Priority: Critical
> Attachments: YARN-7957.001.patch
>
>
> Steps:
> 1) Launch yarn service
> 2) Go to service page and click on Setting button->"Stop Service". The 
> application will be stopped.
> 3) Refresh page
> Here, the setting button disappears. Thus, the user cannot delete the service 
> from the UI after stopping the application.
> Expected behavior:
> The setting button should be present on the UI page after the application is 
> stopped. If the application is stopped, the setting button should only have the 
> "Delete Service" action available.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8759) Copy of "resource-types.xml" is not deleted if test fails, causes other test failures

2018-09-21 Thread Manikandan R (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623356#comment-16623356
 ] 

Manikandan R commented on YARN-8759:


{quote}If you think it is confusing I can simply remove that line. It is not 
important to have it there.{quote}

Yes, that's precisely the point. Thanks for the explanation and patch. LGTM.

> Copy of "resource-types.xml" is not deleted if test fails, causes other test 
> failures
> -
>
> Key: YARN-8759
> URL: https://issues.apache.org/jira/browse/YARN-8759
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Major
> Attachments: YARN-8759.001.patch, YARN-8759.002.patch, 
> YARN-8759.003.patch, YARN-8759.004.patch
>
>
> resource-types.xml is copied to the test machine in several tests, but it is 
> deleted only at the end of the test. If a test fails, the file is not deleted 
> and other tests will fail because of the wrong configuration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8759) Copy of "resource-types.xml" is not deleted if test fails, causes other test failures

2018-09-21 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-8759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623329#comment-16623329
 ] 

Antal Bálint Steinbach edited comment on YARN-8759 at 9/21/18 9:31 AM:
---

Hi [~maniraj...@gmail.com],

Unfortunately no, or at least I would not change them; they are slightly 
different.

TestRMAdminCLI does a setup with the @Before annotation before every test 
method call and initializes the resource file. This setup is required for every 
test in the test class.

TestClientRMService uses the resource file in only one test, so it is not 
required to do an initialization before every other test method run.

TestResourceUtils uses the resource file in every test, but it uses different 
files in different methods, so the test methods themselves initialize the 
resource file.

The common thing is that we need to do a teardown (delete the created file) 
after every test method call; that is why we should have the resource file as a 
field. Setting it to null makes sense only in TestClientRMService, because only 
one test method there uses it.

If you think it is confusing I can simply remove that line. It is not important 
to have it there. (new patch uploaded)
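
For reference, a minimal sketch of the teardown pattern described above. It is illustrative only; the class, field, and path names are made up and not taken from the patch.
{code:java}
// Illustrative sketch: keep the copied resource-types.xml as a field and delete it
// in an @After teardown, so a failed test cannot leave it behind for later tests.
import java.io.File;
import org.junit.After;
import org.junit.Test;

public class ResourceFileCleanupSketch {

  private File resourceTypesFile;  // set by whichever test copies the file

  @After
  public void tearDown() {
    // Runs even when the test body throws, so the copied file never leaks
    // into other tests in the same JVM.
    if (resourceTypesFile != null && resourceTypesFile.exists()) {
      resourceTypesFile.delete();
    }
    resourceTypesFile = null;
  }

  @Test
  public void testUsingResourceTypes() throws Exception {
    resourceTypesFile = new File("target/test-classes/resource-types.xml");
    // ... copy the desired resource-types.xml here and run the assertions ...
  }
}
{code}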

 


was (Author: bsteinbach):
Hi [~maniraj...@gmail.com],

Unfortunately no, or at least I would not change them; they are slightly 
different.

TestRMAdminCLI does a setup with the @Before annotation before every test 
method call and initializes the resource file. This setup is required for every 
test in the test class.

TestClientRMService uses the resource file in only one test, so it is not 
required to do an initialization before every other test method run.

TestResourceUtils uses the resource file in every test, but it uses different 
files in different methods, so the test methods themselves initialize the 
resource file.

The common thing is that we need to do a teardown (delete the created file) 
after every test method call; that is why we should have the resource file as a 
field. Setting it to null makes sense only in TestClientRMService, because only 
one test method there uses it.

If you think it is confusing I can simply remove that line. It is not important 
to have it there.

 

> Copy of "resource-types.xml" is not deleted if test fails, causes other test 
> failures
> -
>
> Key: YARN-8759
> URL: https://issues.apache.org/jira/browse/YARN-8759
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Major
> Attachments: YARN-8759.001.patch, YARN-8759.002.patch, 
> YARN-8759.003.patch, YARN-8759.004.patch
>
>
> resource-types.xml is copied to the test machine in several tests, but it is 
> deleted only at the end of the test. If the test fails, the file will not be 
> deleted and other tests will fail because of the wrong configuration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8759) Copy of "resource-types.xml" is not deleted if test fails, causes other test failures

2018-09-21 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/YARN-8759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antal Bálint Steinbach updated YARN-8759:
-
Attachment: YARN-8759.004.patch

> Copy of "resource-types.xml" is not deleted if test fails, causes other test 
> failures
> -
>
> Key: YARN-8759
> URL: https://issues.apache.org/jira/browse/YARN-8759
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Major
> Attachments: YARN-8759.001.patch, YARN-8759.002.patch, 
> YARN-8759.003.patch, YARN-8759.004.patch
>
>
> resource-types.xml is copied to the test machine in several tests, but it is 
> deleted only at the end of the test. If the test fails, the file will not be 
> deleted and other tests will fail because of the wrong configuration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8759) Copy of "resource-types.xml" is not deleted if test fails, causes other test failures

2018-09-21 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-8759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623329#comment-16623329
 ] 

Antal Bálint Steinbach commented on YARN-8759:
--

Hi [~maniraj...@gmail.com],

Unfortunately no, or at least I would not change them; they are slightly 
different.

TestRMAdminCLI does a setup with the @Before annotation before every test 
method call and initializes the resource file. This setup is required for every 
test in the test class.

TestClientRMService uses the resource file in only one test, so it is not 
required to do an initialization before every other test method run.

TestResourceUtils uses the resource file in every test, but it uses different 
files in different methods, so the test methods themselves initialize the 
resource file.

The common thing is that we need to do a teardown (delete the created file) 
after every test method call; that is why we should have the resource file as a 
field. Setting it to null makes sense only in TestClientRMService, because only 
one test method there uses it.

If you think it is confusing I can simply remove that line. It is not important 
to have it there.

 

> Copy of "resource-types.xml" is not deleted if test fails, causes other test 
> failures
> -
>
> Key: YARN-8759
> URL: https://issues.apache.org/jira/browse/YARN-8759
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Major
> Attachments: YARN-8759.001.patch, YARN-8759.002.patch, 
> YARN-8759.003.patch
>
>
> resource-types.xml is copied to the test machine in several tests, but it is 
> deleted only at the end of the test. If the test fails, the file will not be 
> deleted and other tests will fail because of the wrong configuration.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8812) Containers fail during creating a symlink which started with hyphen for a resource file

2018-09-21 Thread Oleksandr Shevchenko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623324#comment-16623324
 ] 

Oleksandr Shevchenko commented on YARN-8812:


Changed the generated symlink creation command by adding the "--" option to 
separate the command's options from the path parameters.
The command now looks like this:
{code}
ln -sf -- "/tmp/hadoop-yarn/nm-local-dir/usercache/yarn/filecache/49/testfile" 
"-symlink"
{code}
Could someone review the patch?
Thanks!
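
To illustrate the effect of the change, a small hypothetical sketch follows; it is not the actual NodeManager script builder, and the class and method names are made up.
{code:java}
// Hypothetical illustration of why "--" helps: everything after "--" is treated by
// ln as an operand, so a link name such as "-symlink" is no longer parsed as options.
public class SymlinkCommandSketch {

  static String lnCommand(String target, String link) {
    // Without "--", a link name starting with '-' would be read as ln options.
    return String.format("ln -sf -- \"%s\" \"%s\"", target, link);
  }

  public static void main(String[] args) {
    System.out.println(lnCommand(
        "/tmp/hadoop-yarn/nm-local-dir/usercache/yarn/filecache/49/testfile",
        "-symlink"));
    // Prints: ln -sf -- ".../testfile" "-symlink"
  }
}
{code}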

> Containers fail during creating a symlink which started with hyphen for a 
> resource file
> ---
>
> Key: YARN-8812
> URL: https://issues.apache.org/jira/browse/YARN-8812
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Oleksandr Shevchenko
>Assignee: Oleksandr Shevchenko
>Priority: Minor
> Attachments: YARN-8812.001.patch
>
>
> When we run a job and add a file with an alias that starts with a hyphen, the 
> container fails while creating a symlink for the resource file:
> {noformat}
> yarn jar hadoop-mapreduce-examples.jar pi -files testfile#-symlink  1 1
> {noformat}
> or add a file to the distributed cache in an MR job via "job.addCacheFile".
> Containers fail if a resource file has a symlink that starts with a hyphen, 
> with the following error:
> {noformat}
> Stack trace: ExitCodeException exitCode=1: 
> /tmp/hadoop-yarn/nm-local-dir/usercache/yarn/appcache/application_1537449069809_0022/container_e01_1537449069809_0022_02_01/launch_container.sh
> ln: invalid option -- 'y'
> Try 'ln --help' for more information.
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:572)
> at org.apache.hadoop.util.Shell.run(Shell.java:466)
> at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:768)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:306)
> {noformat}
> The main cause of the problem is the "launch_container.sh" script, which 
> contains the following command for creating a symlink:
> {noformat}
> ln -sf "/tmp/hadoop-yarn/nm-local-dir/usercache/yarn/filecache/49/testfile" 
> "-symlink"
> {noformat}
> As a result, "-symlink" is parsed as the "-s" flag rather than as the symlink 
> name.
> The same job passes successfully on MRv1 but not on YARN, since symlinks are 
> created in different ways. Unix systems support names that start with a 
> hyphen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7599) [GPG] ApplicationCleaner in Global Policy Generator

2018-09-21 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623321#comment-16623321
 ] 

Bibin A Chundatt commented on YARN-7599:


Thank you [~botong] 

Overall, the patch looks good to me.

{quote}
Can you change to single configuration similar to 
dfs.http.client.retry.policy.spec {min,max,interval}
I already changed the new configs to something like 
application.cleaner.router.min.success. This is what you meant right?
{quote}
What I had in mind was to merge all 3 into a single property; 
*dfs.http.client.retry.policy.spec* does that.
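
Something along these lines, purely as an illustration; the property name and value format below are made up for the example and are not taken from the patch.
{code:xml}
<!-- Hypothetical sketch of a single spec-style property replacing the separate
     min-success / retry / interval settings; name and format are illustrative. -->
<property>
  <name>yarn.federation.gpg.application.cleaner.contact.router.spec</name>
  <!-- minRouterSuccessCount, maxRetries, retryIntervalMs -->
  <value>3,10,30000</value>
</property>
{code}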

> [GPG] ApplicationCleaner in Global Policy Generator
> ---
>
> Key: YARN-7599
> URL: https://issues.apache.org/jira/browse/YARN-7599
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>  Labels: federation, gpg
> Attachments: YARN-7599-YARN-7402.v1.patch, 
> YARN-7599-YARN-7402.v2.patch, YARN-7599-YARN-7402.v3.patch, 
> YARN-7599-YARN-7402.v4.patch, YARN-7599-YARN-7402.v5.patch, 
> YARN-7599-YARN-7402.v6.patch, YARN-7599-YARN-7402.v7.patch
>
>
> In Federation, we need a cleanup service for StateStore as well as Yarn 
> Registry. For the former, we need to remove old application records. For the 
> latter, failed and killed applications might leave records in the Yarn 
> Registry (see YARN-6128). We plan to do both cleanup tasks in the 
> ApplicationCleaner in GPG.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8812) Containers fail during creating a symlink which started with hyphen for a resource file

2018-09-21 Thread Oleksandr Shevchenko (JIRA)
Oleksandr Shevchenko created YARN-8812:
--

 Summary: Containers fail during creating a symlink which started 
with hyphen for a resource file
 Key: YARN-8812
 URL: https://issues.apache.org/jira/browse/YARN-8812
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Oleksandr Shevchenko
Assignee: Oleksandr Shevchenko


When we run a job and add a file with an alias that starts with a hyphen, the 
container fails while creating a symlink for the resource file:

{noformat}
yarn jar hadoop-mapreduce-examples.jar pi -files testfile#-symlink  1 1
{noformat}
or add a file to the distributed cache in an MR job via "job.addCacheFile".


Containers fail if a resource file has a symlink that starts with a hyphen, 
with the following error:
{noformat}

Stack trace: ExitCodeException exitCode=1: 
/tmp/hadoop-yarn/nm-local-dir/usercache/yarn/appcache/application_1537449069809_0022/container_e01_1537449069809_0022_02_01/launch_container.sh
ln: invalid option -- 'y'
Try 'ln --help' for more information.

at org.apache.hadoop.util.Shell.runCommand(Shell.java:572)
at org.apache.hadoop.util.Shell.run(Shell.java:466)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:768)
at 
org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:306)
{noformat}

The main cause of the problem is the "launch_container.sh" script, which 
contains the following command for creating a symlink:
{noformat}
ln -sf "/tmp/hadoop-yarn/nm-local-dir/usercache/yarn/filecache/49/testfile" 
"-symlink"
{noformat}
As a result, "-symlink" is parsed as the "-s" flag rather than as the symlink name.

The same job passes successfully on MRv1 but not on YARN, since symlinks are 
created in different ways. Unix systems support names that start with a hyphen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2599) Standby RM should also expose some jmx and metrics

2018-09-21 Thread Tao Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-2599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623315#comment-16623315
 ] 

Tao Yang commented on YARN-2599:


Hi, [~Naganarasimha]. Some JVM metrics of the standby RM can be used to evaluate 
whether the standby RM is healthy and can be transitioned to active. We have a 
requirement to monitor some metrics; can we re-evaluate whether the standby RM 
should expose JMX?

> Standby RM should also expose some jmx and metrics
> --
>
> Key: YARN-2599
> URL: https://issues.apache.org/jira/browse/YARN-2599
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.5.1, 2.7.3, 3.0.0-alpha1
>Reporter: Karthik Kambatla
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-2599.patch
>
>
> YARN-1898 redirects jmx and metrics to the Active. As discussed there, we 
> need to separate out metrics displayed so the Standby RM can also be 
> monitored. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8806) Enable local staging directory and clean it up when submarine job is submitted

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623301#comment-16623301
 ] 

Hadoop QA commented on YARN-8806:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
34s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine 
in trunk has 2 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine: 
The patch generated 5 new + 45 unchanged - 0 fixed = 50 total (was 45) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-submarine in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8806 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940743/YARN-8806.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 76c5d1d11a11 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 524f7cd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/21924/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-submarine-warnings.html
 |
| checkstyle | 

[jira] [Commented] (YARN-8794) QueuePlacementPolicy add more rules

2018-09-21 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16623249#comment-16623249
 ] 

Hadoop QA commented on YARN-8794:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 38s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 280 unchanged - 18 fixed = 283 total (was 298) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 38s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 72m 57s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 42s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8794 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12940733/YARN-8794.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 343531037f52 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 524f7cd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/21923/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
