[jira] [Commented] (YARN-9189) Clarify FairScheduler submission logging

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898558#comment-16898558
 ] 

Hadoop QA commented on YARN-9189:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-9189 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-9189 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12954608/YARN-9189.5.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24451/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Clarify FairScheduler submission logging
> 
>
> Key: YARN-9189
> URL: https://issues.apache.org/jira/browse/YARN-9189
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 3.2.0
>Reporter: Patrick Bayne
>Assignee: Patrick Bayne
>Priority: Minor
> Attachments: YARN-9189.1.patch, YARN-9189.2.patch, YARN-9189.3.patch, 
> YARN-9189.4.patch, YARN-9189.5.patch
>
>
> Logging in the FairScheduler was ambiguous: it was unclear whether the "total 
> number applications" count referred to the global total or to the queue's total. 
> This patch fixes the wording and spelling of the log output. 
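
By way of illustration only (this is a hypothetical sketch, not the actual FairScheduler code or the patch), one way to remove the ambiguity is to name the scope of each count explicitly in the log line:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SubmissionLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(SubmissionLogging.class);

  void logAccepted(String appId, String queue, int queueApps, int clusterApps) {
    // Ambiguous: is "total number applications" the queue's total or the
    // cluster's?
    //   LOG.info("Accepted application {}, total number applications: {}", ...)
    // Clearer: scope both counts explicitly.
    LOG.info("Accepted application {} in queue {}; queue now has {} applications,"
        + " cluster total is {}", appId, queue, queueApps, clusterApps);
  }
}
{code}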






[jira] [Commented] (YARN-9377) docker builds having problems with main/native/container-executor/test/

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898557#comment-16898557
 ] 

Hadoop QA commented on YARN-9377:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-9377 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-9377 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961813/HADOOP-16123-002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24450/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> docker builds having problems with main/native/container-executor/test/
> ---
>
> Key: YARN-9377
> URL: https://issues.apache.org/jira/browse/YARN-9377
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 3.3.0
> Environment: docker
>Reporter: lqjacklee
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-16123-001.patch, HADOOP-16123-002.patch
>
>
> During build the source code , do the steps as below : 
>  
> 1, run docker daemon 
> 2, ./start-build-env.sh
> 3, sudo mvn clean install -DskipTests -Pnative 
> the response prompt that : 
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) 
> on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
> 'protoc --version' did not return a version -> 
> [Help 1]
> However , when execute the command : whereis protoc 
> liu@a65d187055f9:~/hadoop$ whereis protoc
> protoc: /opt/protobuf/bin/protoc
>  
> the PATH value : 
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin
>  
> liu@a65d187055f9:~/hadoop$ protoc --version
> libprotoc 2.5.0
>  
>  






[jira] [Commented] (YARN-9591) JAR archive expansion doesn't faithfully restore the jar file content

2019-08-01 Thread Terence Yim (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898553#comment-16898553
 ] 

Terence Yim commented on YARN-9591:
---

Hi,

The fix has been available in a GitHub PR for two months. Is there any update?

Thanks,
Terence

> JAR archive expansion doesn't faithfully restore the jar file content
> -
>
> Key: YARN-9591
> URL: https://issues.apache.org/jira/browse/YARN-9591
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.0
>Reporter: Terence Yim
>Priority: Major
>
> Due to the change in YARN-2185, jar archives are expanded using {{JarInputStream}}, 
> which skips writing out the {{META-INF/MANIFEST.MF}} file. 
> Before the YARN-2185 change, archives were expanded using {{JarFile}} instead, and 
> the {{META-INF/MANIFEST.MF}} file was written out correctly.
> This change of behavior is causing some applications to fail.
> The suggestion is to use {{ZipInputStream}} instead, so that all entries inside 
> a jar file are faithfully restored.
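
A minimal sketch of the suggested direction (not the actual patch): {{ZipInputStream}} returns every entry, including {{META-INF/MANIFEST.MF}}, whereas {{JarInputStream}} consumes the manifest in its constructor and never returns it as an entry.

{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class UnjarWithZipStream {
  // Expand a jar so that all entries, including META-INF/MANIFEST.MF,
  // are restored on disk. (A production version should also guard
  // against "../" zip-slip entry names.)
  public static void unjar(File jar, File destDir) throws IOException {
    try (ZipInputStream zin = new ZipInputStream(new FileInputStream(jar))) {
      ZipEntry entry;
      while ((entry = zin.getNextEntry()) != null) {
        File out = new File(destDir, entry.getName());
        if (entry.isDirectory()) {
          out.mkdirs();
          continue;
        }
        out.getParentFile().mkdirs();
        try (FileOutputStream fos = new FileOutputStream(out)) {
          byte[] buf = new byte[8192];
          for (int n; (n = zin.read(buf)) != -1; ) {
            fos.write(buf, 0, n);
          }
        }
      }
    }
  }
}
{code}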






[jira] [Commented] (YARN-9310) Test submarine maven module build

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898549#comment-16898549
 ] 

Hadoop QA commented on YARN-9310:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-9310 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-9310 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12958980/YARN-9310.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24448/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Test submarine maven module build
> -
>
> Key: YARN-9310
> URL: https://issues.apache.org/jira/browse/YARN-9310
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Tan, Wangda
>Assignee: Tan, Wangda
>Priority: Major
> Attachments: YARN-9310.001.patch
>
>







[jira] [Commented] (YARN-9479) Change String.equals to Objects.equals(String,String) to avoid possible NullPointerException

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898537#comment-16898537
 ] 

Hadoop QA commented on YARN-9479:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
50s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 14 unchanged - 0 fixed = 16 total (was 14) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 56s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 14s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-738/5/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/738 |
| JIRA Issue | YARN-9479 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 3c1e7b580419 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 
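
For context on the change named in the YARN-9479 subject line, a minimal standalone illustration of why {{Objects.equals(String, String)}} is null-safe where calling {{String.equals}} on a possibly-null reference is not:

{code:java}
import java.util.Objects;

public class NullSafeEquals {
  public static void main(String[] args) {
    String configured = null;   // e.g. a value that may be absent
    String expected = "default";

    // Null-safe: returns false when either side is null, true if both are.
    System.out.println(Objects.equals(configured, expected)); // false

    // Would throw NullPointerException because the receiver is null:
    // System.out.println(configured.equals(expected));
  }
}
{code}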

[jira] [Commented] (YARN-9376) too many ContainerIdComparator instances are not necessary

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898535#comment-16898535
 ] 

Hadoop QA commented on YARN-9376:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 43s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 79m 
23s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | YARN-9376 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12961967/YARN-9376.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux df8ec6309c60 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e20b195 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24447/testReport/ |
| Max. process+thread count | 896 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24447/console |
| Powered by | Apache Yetus 0.8.0   

[jira] [Commented] (YARN-9509) Capped cpu usage with cgroup strict-resource-usage based on a mulitplier

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898511#comment-16898511
 ] 

Hadoop QA commented on YARN-9509:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
22s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  8s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 219 unchanged - 0 fixed = 224 total (was 219) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
51s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 
29s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}112m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 

[jira] [Commented] (YARN-9009) Fix flaky test TestEntityGroupFSTimelineStore.testCleanLogs

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898505#comment-16898505
 ] 

Hadoop QA commented on YARN-9009:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} https://github.com/apache/hadoop/pull/438 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/438 |
| JIRA Issue | YARN-9009 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-438/3/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Fix flaky test TestEntityGroupFSTimelineStore.testCleanLogs
> ---
>
> Key: YARN-9009
> URL: https://issues.apache.org/jira/browse/YARN-9009
> Project: Hadoop YARN
>  Issue Type: Bug
> Environment: Ubuntu 18.04
> java version "1.8.0_181"
> Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
>  
> Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 
> 2018-06-17T13:33:14-05:00)
>Reporter: OrDTesters
>Assignee: OrDTesters
>Priority: Minor
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-9009-trunk-001.patch
>
>
> In TestEntityGroupFSTimelineStore, testCleanLogs fails when run after 
> testMoveToDone.
> testCleanLogs fails because testMoveToDone moves a file into the same 
> directory that testCleanLogs cleans, causing it to clean 3 files 
> instead of the 2 it expects.
> To fix the failure, we can delete the file after testMoveToDone moves it.
> Pull request link: [https://github.com/apache/hadoop/pull/438]
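
A generic, self-contained sketch of the cleanup idea (using plain {{java.nio.file}}, not the actual test code): the test that moves a file into the shared directory deletes it afterwards, so a later test counting entries in that directory is unaffected.

{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

public class CleanupAfterMove {
  public static void main(String[] args) throws Exception {
    Path doneDir = Files.createTempDirectory("done");
    Path src = Files.createTempFile("app", ".log");

    // The "testMoveToDone" step: move a file into the shared directory.
    Path moved = Files.move(src, doneDir.resolve(src.getFileName()),
        StandardCopyOption.REPLACE_EXISTING);

    // ... assertions on the moved file would run here ...

    // The fix: remove the file so tests that count entries in doneDir
    // (like testCleanLogs) see only what they created themselves.
    Files.deleteIfExists(moved);

    try (Stream<Path> entries = Files.list(doneDir)) {
      System.out.println(entries.count()); // 0
    }
  }
}
{code}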






[jira] [Commented] (YARN-9468) Fix inaccurate documentations in Placement Constraints

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898488#comment-16898488
 ] 

Hadoop QA commented on YARN-9468:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 49s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-717/3/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/717 |
| JIRA Issue | YARN-9468 |
| Optional Tests | dupname asflicense mvnsite |
| uname | Linux 56fa216f37f9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / e20b195 |
| Max. process+thread count | 412 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-717/3/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Fix inaccurate documentations in Placement Constraints
> --
>
> Key: YARN-9468
> URL: https://issues.apache.org/jira/browse/YARN-9468
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
>
> Document Placement Constraints
> *First* 
> {code:java}
> zk=3,NOTIN,NODE,zk:hbase=5,IN,RACK,zk:spark=7,CARDINALITY,NODE,hbase,1,3{code}
>  * place 5 containers with tag “hbase” with affinity to a rack on which 
> containers with tag “zk” are running (i.e., an “hbase” container 
> should{color:#ff0000} not{color} be placed at a rack where an “zk” container 
> is running, given that “zk” is the TargetTag of the second constraint);
> The _*not*_ word in brackets should be deleted.
>  
> *Second*
> {code:java}
> PlacementSpec => "" | KeyVal;PlacementSpec
> {code}
> The semicolon should be replaced by a colon.
>  






[jira] [Commented] (YARN-9579) the property of sharedcache in mapred-default.xml

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898479#comment-16898479
 ] 

Hadoop QA commented on YARN-9579:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
26s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-848/4/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/848 |
| JIRA Issue | YARN-9579 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml |
| uname | Linux e192de421835 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / e20b195 |
| Default Java | 1.8.0_212 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-848/4/testReport/ |
| Max. process+thread count | 1678 (vs. ulimit of 5500) |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-848/4/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> the property of sharedcache in mapred-default.xml
> -
>
> Key: YARN-9579
> URL: https://issues.apache.org/jira/browse/YARN-9579
> Project: Hadoop YARN

[jira] [Commented] (YARN-9601) Potential NPE in ZookeeperFederationStateStore#getPoliciesConfigurations

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898472#comment-16898472
 ] 

Hadoop QA commented on YARN-9601:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
10s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  5s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
39s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-908/4/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/908 |
| JIRA Issue | YARN-9601 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 780c49295f4d 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / e20b195 |
| Default Java | 1.8.0_212 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-908/4/testReport/ |
| Max. process+thread count | 447 (vs. ulimit of 5500) |
| modules | C: 

[jira] [Updated] (YARN-1655) Add implementations to FairScheduler to support increase/decrease container resource

2019-08-01 Thread Wilfred Spiegelenburg (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-1655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-1655:

Attachment: YARN-1655.005.patch

> Add implementations to FairScheduler to support increase/decrease container 
> resource
> 
>
> Key: YARN-1655
> URL: https://issues.apache.org/jira/browse/YARN-1655
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-1655.001.patch, YARN-1655.002.patch, 
> YARN-1655.003.patch, YARN-1655.004.patch, YARN-1655.005.patch
>
>







[jira] [Commented] (YARN-1655) Add implementations to FairScheduler to support increase/decrease container resource

2019-08-01 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-1655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898466#comment-16898466
 ] 

Wilfred Spiegelenburg commented on YARN-1655:
-

The failed unit test is flaky per YARN-8433 and is not related to this change.

I fixed up the checkstyle issues. Most of the change in RMContainerImpl is 
a layout change to clean up the incorrectly formatted state machine. 
[^YARN-1655.005.patch] 

> Add implementations to FairScheduler to support increase/decrease container 
> resource
> 
>
> Key: YARN-1655
> URL: https://issues.apache.org/jira/browse/YARN-1655
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-1655.001.patch, YARN-1655.002.patch, 
> YARN-1655.003.patch, YARN-1655.004.patch, YARN-1655.005.patch
>
>







[jira] [Commented] (YARN-9384) FederationInterceptorREST should broadcast KillApplication to all sub clusters

2019-08-01 Thread Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898426#comment-16898426
 ] 

Zhang commented on YARN-9384:
-

[~qiuliang988], there is no perfect solution yet. Currently KillApplication only 
kills containers in the home subcluster immediately. Containers in secondary 
subclusters can continue running for about 10 minutes, until the application in 
those subclusters becomes "FAILED".

> FederationInterceptorREST should broadcast KillApplication to all sub clusters
> --
>
> Key: YARN-9384
> URL: https://issues.apache.org/jira/browse/YARN-9384
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Reporter: Zhang
>Assignee: Zhang
>Priority: Major
>  Labels: federation
> Attachments: YARN-9384.2.patch, YARN-9384.patch
>
>
> Today the KillApplication request from the user client only goes to the home 
> cluster. As a result, the containers in secondary clusters continue processing 
> for 10~15 minutes (the UAM heartbeat timeout).
> This is not a favorable user experience, especially when a user has a streaming 
> job and sometimes needs to restart it (for example, to update its config or 
> resources). In this case, containers created by the new job and the old job can 
> run at the same time.
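
A hypothetical sketch of the broadcast approach (the names below are illustrative, not the actual FederationInterceptorREST API): send the kill to every sub-cluster in parallel instead of relying on the UAM heartbeat timeout.

{code:java}
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class KillBroadcaster {
  // Illustrative stand-in for a per-sub-cluster client.
  interface SubClusterClient {
    void killApplication(String appId);
  }

  // Issue the kill to the home and all secondary sub-clusters concurrently,
  // then wait for every one to acknowledge.
  public static void broadcastKill(String appId, List<SubClusterClient> clients) {
    CompletableFuture<?>[] kills = clients.stream()
        .map(c -> CompletableFuture.runAsync(() -> c.killApplication(appId)))
        .toArray(CompletableFuture[]::new);
    CompletableFuture.allOf(kills).join();
  }
}
{code}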






[jira] [Commented] (YARN-9564) Create docker-to-squash tool for image conversion

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898398#comment-16898398
 ] 

Hadoop QA commented on YARN-9564:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 22s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange}  0m  
8s{color} | {color:orange} The patch generated 1090 new + 0 unchanged - 0 fixed 
= 1090 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-assemblies in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
51s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
42s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}101m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | YARN-9564 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976467/YARN-9564.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  pylint  |
| uname | Linux f24d1fdcd0ae 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e111789 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| pylint | v1.9.2 |
| pylint | 
https://builds.apache.org/job/PreCommit-YARN-Build/24445/artifact/out/diff-patch-pylint.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24445/testReport/ |
| asflicense | 

[jira] [Commented] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898395#comment-16898395
 ] 

Hadoop QA commented on YARN-9718:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 47s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
36s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-9718 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976468/YARN-9718.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 86dae23cebd2 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / e111789 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24446/testReport/ |
| Max. process+thread count | 739 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24446/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Yarn REST API, services 

[jira] [Assigned] (YARN-9330) Add support to query scheduler endpoint filtered via queue (/scheduler/queue=abc)

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9330:
-

Assignee: Prashant Golash

> Add support to query scheduler endpoint filtered via queue 
> (/scheduler/queue=abc)
> -
>
> Key: YARN-9330
> URL: https://issues.apache.org/jira/browse/YARN-9330
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: webapp
>Affects Versions: 3.1.2
>Reporter: Prashant Golash
>Assignee: Prashant Golash
>Priority: Minor
>  Labels: newbie, patch
> Attachments: YARN-9330.001.patch, YARN-9330.002.patch, 
> YARN-9330.003.patch, YARN-9330.004.patch
>
>
> Currently, the endpoint */ws/v1/cluster/scheduler* returns all the 
> queues as part of the REST contract.
> The intention of this JIRA is to allow passing an additional queue path parameter 
> so that only that queue is returned. For example:
> */ws/v1/cluster/scheduler/queue=testParentQueue*
> */ws/v1/cluster/scheduler/queue=testChildQueue*
> This will make it easy for REST clients to query just the desired queue 
> and parse the response.
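
A hypothetical JAX-RS sketch of the proposed endpoint shape (not the actual RMWebServices code; the real signature and return type would differ):

{code:java}
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/ws/v1/cluster")
public class SchedulerResource {
  // Matches e.g. /ws/v1/cluster/scheduler/queue=testParentQueue and returns
  // only that queue, instead of the whole scheduler tree.
  @GET
  @Path("/scheduler/queue={queue}")
  @Produces(MediaType.APPLICATION_JSON)
  public String getQueueInfo(@PathParam("queue") String queue) {
    // A real implementation would look the queue up in the scheduler and
    // serialize its capacities and metrics; here we only echo the name.
    return "{\"queue\":\"" + queue + "\"}";
  }
}
{code}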






[jira] [Assigned] (YARN-9377) docker builds having problems with main/native/container-executor/test/

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9377:
-

Assignee: lqjacklee

> docker builds having problems with main/native/container-executor/test/
> ---
>
> Key: YARN-9377
> URL: https://issues.apache.org/jira/browse/YARN-9377
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: build, test
>Affects Versions: 3.3.0
> Environment: docker
>Reporter: lqjacklee
>Assignee: lqjacklee
>Priority: Minor
> Attachments: HADOOP-16123-001.patch, HADOOP-16123-002.patch
>
>
> While building the source code, the following steps were performed: 
>  
> 1. run the docker daemon 
> 2. ./start-build-env.sh
> 3. sudo mvn clean install -DskipTests -Pnative 
> The build fails with: 
> [ERROR] Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT:protoc (compile-protoc) 
> on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
> 'protoc --version' did not return a version -> 
> [Help 1]
> However, protoc is installed and on the PATH: 
> liu@a65d187055f9:~/hadoop$ whereis protoc
> protoc: /opt/protobuf/bin/protoc
>  
> The PATH value: 
> /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/cmake/bin:/opt/protobuf/bin
>  
> liu@a65d187055f9:~/hadoop$ protoc --version
> libprotoc 2.5.0
>  
>  






[jira] [Assigned] (YARN-9256) Make ATSv2 compilation default with hbase.profile=2.0

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9256:
-

Assignee: Rohith Sharma K S

> Make ATSv2 compilation default with hbase.profile=2.0
> -
>
> Key: YARN-9256
> URL: https://issues.apache.org/jira/browse/YARN-9256
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Attachments: YARN-9256.01.patch
>
>
> By default, Hadoop compiles ATSv2 with the hbase.profile that corresponds to 
> hbase.version=1.4. Change the compilation to hbase.profile=2.0 by 
> default in trunk. 
> This JIRA is to discuss any concerns. 
> cc: [~vrushalic]
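> For reference, the HBase 2 profile can already be selected explicitly at build 
> time (property name as documented in BUILDING.txt; this issue would make it the 
> default):
> {code}
> mvn clean install -DskipTests -Dhbase.profile=2.0
> {code}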



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9310) Test submarine maven module build

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9310:
-

Assignee: Tan, Wangda

> Test submarine maven module build
> -
>
> Key: YARN-9310
> URL: https://issues.apache.org/jira/browse/YARN-9310
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Tan, Wangda
>Assignee: Tan, Wangda
>Priority: Major
> Attachments: YARN-9310.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-9189) Clarify FairScheduler submission logging

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898374#comment-16898374
 ] 

Wei-Chiu Chuang edited comment on YARN-9189 at 8/1/19 9:33 PM:
---

+1 but patch doesn't apply


was (Author: jojochuang):
+1

> Clarify FairScheduler submission logging
> 
>
> Key: YARN-9189
> URL: https://issues.apache.org/jira/browse/YARN-9189
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 3.2.0
>Reporter: Patrick Bayne
>Assignee: Patrick Bayne
>Priority: Minor
> Attachments: YARN-9189.1.patch, YARN-9189.2.patch, YARN-9189.3.patch, 
> YARN-9189.4.patch, YARN-9189.5.patch
>
>
> Logging was ambiguous for the FairScheduler. It was unclear whether the "total 
> number applications" referred to the global total or the queue's total. 
> Fixed the wording/spelling of the output logging. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9678) TestGpuResourceHandler / TestFpgaResourceHandler should be renamed

2019-08-01 Thread kevin su (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898375#comment-16898375
 ] 

kevin su commented on YARN-9678:


[~snemeth] I would like to do it; could you assign this to me?

> TestGpuResourceHandler / TestFpgaResourceHandler should be renamed
> --
>
> Key: YARN-9678
> URL: https://issues.apache.org/jira/browse/YARN-9678
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Priority: Major
>  Labels: newbie, newbie++
>
> Their respective production classes are GpuResourceHandlerImpl and 
> FpgaResourceHandlerImpl, so the "Impl" suffix is missing from the test case 
> class names.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9189) Clarify FairScheduler submission logging

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9189:
-

Assignee: Patrick Bayne

> Clarify FairScheduler submission logging
> 
>
> Key: YARN-9189
> URL: https://issues.apache.org/jira/browse/YARN-9189
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 3.2.0
>Reporter: Patrick Bayne
>Assignee: Patrick Bayne
>Priority: Minor
> Attachments: YARN-9189.1.patch, YARN-9189.2.patch, YARN-9189.3.patch, 
> YARN-9189.4.patch, YARN-9189.5.patch
>
>
> Logging was ambiguous for the FairScheduler. It was unclear whether the "total 
> number applications" referred to the global total or the queue's total. 
> Fixed the wording/spelling of the output logging. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9189) Clarify FairScheduler submission logging

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898374#comment-16898374
 ] 

Wei-Chiu Chuang commented on YARN-9189:
---

+1

> Clarify FairScheduler submission logging
> 
>
> Key: YARN-9189
> URL: https://issues.apache.org/jira/browse/YARN-9189
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 3.2.0
>Reporter: Patrick Bayne
>Assignee: Patrick Bayne
>Priority: Minor
> Attachments: YARN-9189.1.patch, YARN-9189.2.patch, YARN-9189.3.patch, 
> YARN-9189.4.patch, YARN-9189.5.patch
>
>
> Logging was ambiguous for the FairScheduler. It was unclear whether the "total 
> number applications" referred to the global total or the queue's total. 
> Fixed the wording/spelling of the output logging. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9384) FederationInterceptorREST should broadcast KillApplication to all sub clusters

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9384:
-

Assignee: Zhang

> FederationInterceptorREST should broadcast KillApplication to all sub clusters
> --
>
> Key: YARN-9384
> URL: https://issues.apache.org/jira/browse/YARN-9384
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation
>Reporter: Zhang
>Assignee: Zhang
>Priority: Major
>  Labels: federation
> Attachments: YARN-9384.2.patch, YARN-9384.patch
>
>
> Today a KillApplication request from the user client only goes to the home 
> cluster. As a result, the containers in secondary clusters continue processing 
> for 10~15 minutes (the UAM heartbeat timeout).
> This is not a favorable user experience, especially when a user has a streaming 
> job and sometimes needs to restart it (for example to update config or 
> resources). In this case, containers created by the new job and the old job can 
> run at the same time.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9376) too many ContainerIdComparator instances are not necessary

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9376:
-

Assignee: lindongdong

> too many ContainerIdComparator instances are not necessary
> --
>
> Key: YARN-9376
> URL: https://issues.apache.org/jira/browse/YARN-9376
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 3.1.1, 3.1.2
>Reporter: lindongdong
>Assignee: lindongdong
>Priority: Minor
> Attachments: YARN-9376.000.patch
>
>
> Each RMNodeImpl creates a new ContainerIdComparator instance, but that is 
> not necessary.
> Keeping a single static ContainerIdComparator instance would be enough.
> org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl#containersToClean
> {code:java}
> /* set of containers that need to be cleaned */
> private final Set<ContainerId> containersToClean = new TreeSet<ContainerId>(
> new ContainerIdComparator());
> {code}
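> A minimal sketch of the proposed change (assuming the comparator is stateless 
> and therefore safe to share):
> {code:java}
> /* one shared comparator instead of one instance per RMNodeImpl */
> private static final ContainerIdComparator CONTAINER_ID_COMPARATOR =
>     new ContainerIdComparator();
> 
> /* set of containers that need to be cleaned */
> private final Set<ContainerId> containersToClean =
>     new TreeSet<ContainerId>(CONTAINER_ID_COMPARATOR);
> {code}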



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9494) ApplicationHistoryServer endpoint access wrongly requested

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9494:
-

Assignee: Wang, Xinglong

> ApplicationHistoryServer endpoint access wrongly requested
> --
>
> Key: YARN-9494
> URL: https://issues.apache.org/jira/browse/YARN-9494
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Wang, Xinglong
>Assignee: Wang, Xinglong
>Priority: Minor
> Attachments: YARN-9494.001.patch, YARN-9494.002.patch
>
>
> With the following configuration, the resource manager redirects 
> https://resourcemanager.hadoop.com:50030/proxy/application_1553677175329_47053/
>  to 0.0.0.0:10200 when it can't find 
> application_1553677175329_47053 in the ApplicationManager.
> {code:java}
> yarn.timeline-service.enabled = false
> yarn.timeline-service.generic-application-history.enabled = true
> {code}
> However, in this case no timeline service is enabled, thus no 
> yarn.timeline-service.address is defined, and 0.0.0.0:10200 is used as the 
> timeline server access point.
> This combination of settings is valid: we have an in-house tool that analyzes 
> the generic-application-history files generated by the resource manager, while 
> the timeline service itself stays disabled.
> {code:java}
> HTTP ERROR 500
> Problem accessing /proxy/application_1553677175329_47053/. Reason:
> Call From x/10.22.59.23 to 0.0.0.0:10200 failed on connection 
> exception: java.net.ConnectException: Connection refused; For more details 
> see:  http://wiki.apache.org/hadoop/ConnectionRefused
> Caused by:
> java.net.ConnectException: Call From x/10.22.59.23 to 0.0.0.0:10200 
> failed on connection exception: java.net.ConnectException: Connection 
> refused; For more details see:  
> http://wiki.apache.org/hadoop/ConnectionRefused
>   at sun.reflect.GeneratedConstructorAccessor240.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:801)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:732)
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1558)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1498)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1398)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
>   at com.sun.proxy.$Proxy12.getApplicationReport(Unknown Source)
>   at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationHistoryProtocolPBClientImpl.getApplicationReport(ApplicationHistoryProtocolPBClientImpl.java:108)
>   at 
> org.apache.hadoop.yarn.server.webproxy.AppReportFetcher.getApplicationReport(AppReportFetcher.java:137)
>   at 
> org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet.getApplicationReport(WebAppProxyServlet.java:251)
>   at 
> org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet.getFetchedAppReport(WebAppProxyServlet.java:491)
>   at 
> org.apache.hadoop.yarn.server.webproxy.WebAppProxyServlet.doGet(WebAppProxyServlet.java:329)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:66)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:900)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebAppFilter.doFilter(RMWebAppFilter.java:178)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
>   at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
>   at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
>   at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
>   at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
>   at 
> 

[jira] [Assigned] (YARN-9455) SchedulerInvalidResoureRequestException has a typo in its class (and file) name

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9455:
-

Assignee: Anh

> SchedulerInvalidResoureRequestException has a typo in its class (and file) 
> name
> ---
>
> Key: YARN-9455
> URL: https://issues.apache.org/jira/browse/YARN-9455
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Anh
>Priority: Major
>  Labels: newbie
>
> The class name should be: SchedulerInvalidResourceRequestException



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9517) When aggregation is not enabled, can't see the container log

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9517:
-

Assignee: Shurong Mai

> When aggregation is not enabled, can't see the container log
> 
>
> Key: YARN-9517
> URL: https://issues.apache.org/jira/browse/YARN-9517
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.2.0, 2.3.0, 2.4.1, 2.5.2, 2.6.5, 2.8.5, 2.7.7
>Reporter: Shurong Mai
>Assignee: Shurong Mai
>Priority: Major
>  Labels: patch
> Attachments: YARN-9517-branch-2.8.5.001.patch, YARN-9517.patch
>
>
> yarn-site.xml
> {code:xml}
> <property>
>   <name>yarn.log-aggregation-enable</name>
>   <value>false</value>
> </property>
> {code}
>  
> When aggregation is not enabled, we click the "container log link" (in the web 
> page 
> "http://xx:19888/jobhistory/attempts/job_1556431770792_0001/m/SUCCESSFUL")
>  after a job has finished successfully.
> It jumps to a webpage displaying "Aggregation is not enabled. Try the 
> nodemanager at yy:48038", and the url is 
> "http://xx:19888/jobhistory/logs/yy:48038/container_1556431770792_0001_01_02/attempt_1556431770792_0001_m_00_0/hadoop"
> I also found this problem in all Hadoop 2.x.y and 3.x.y versions, and I have 
> submitted a simple patch that applies to these versions.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9521) RM failed to start due to system services

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9521:
-

Assignee: kyungwan nam

> RM failed to start due to system services
> -
>
> Key: YARN-9521
> URL: https://issues.apache.org/jira/browse/YARN-9521
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.2
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-9521.001.patch, YARN-9521.002.patch
>
>
> When starting the RM, listing the system services directory fails as follows.
> {code}
> 2019-04-30 17:18:25,441 INFO  client.SystemServiceManagerImpl 
> (SystemServiceManagerImpl.java:serviceInit(114)) - System Service Directory 
> is configured to /services
> 2019-04-30 17:18:25,467 INFO  client.SystemServiceManagerImpl 
> (SystemServiceManagerImpl.java:serviceInit(120)) - UserGroupInformation 
> initialized to yarn (auth:SIMPLE)
> 2019-04-30 17:18:25,467 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(267)) - Service ResourceManager failed in 
> state STARTED
> org.apache.hadoop.service.ServiceStateException: java.io.IOException: 
> Filesystem closed
> at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:203)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:869)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startActiveServices(ResourceManager.java:1228)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1269)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$1.run(ResourceManager.java:1265)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.transitionToActive(ResourceManager.java:1265)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1316)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1501)
> Caused by: java.io.IOException: Filesystem closed
> at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:473)
> at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1639)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1217)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1233)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$DirListingIterator.(DistributedFileSystem.java:1200)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1179)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem$26.doCall(DistributedFileSystem.java:1175)
> at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusIterator(DistributedFileSystem.java:1187)
> at 
> org.apache.hadoop.yarn.service.client.SystemServiceManagerImpl.list(SystemServiceManagerImpl.java:375)
> at 
> org.apache.hadoop.yarn.service.client.SystemServiceManagerImpl.scanForUserServices(SystemServiceManagerImpl.java:282)
> at 
> org.apache.hadoop.yarn.service.client.SystemServiceManagerImpl.serviceStart(SystemServiceManagerImpl.java:126)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
> ... 13 more
> {code}
> It looks like this is due to the use of the filesystem cache.
> The issue does not happen when I add "fs.hdfs.impl.disable.cache=true" to 
> yarn-site.
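> For reference, a minimal yarn-site.xml sketch of that workaround (it disables 
> the client-side FileSystem cache for hdfs:// URIs):
> {code:xml}
> <property>
>   <name>fs.hdfs.impl.disable.cache</name>
>   <value>true</value>
> </property>
> {code}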



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9562) Add Java changes for the new RuncContainerRuntime

2019-08-01 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898350#comment-16898350
 ] 

Eric Badger commented on YARN-9562:
---

Hey [~eyang], thanks for trying out the patch! Here are what I believe are the 
bare minimum configs necessary to get things up and running. For now, the C 
code in YARN-9561 uses docker configs from container-executor.cfg, so the only 
thing you will need to add there is {{feature.oci.enabled=1}}; then, of course, 
make sure that the mount lists are in line with the mounts you set in 
yarn-site.xml.

Additionally, I have uploaded the docker-to-squash tool to YARN-9564 which 
should help you create the squashFS images on HDFS. It will also set up the 
image-tag-to-hash file. See YARN-9564 for more details on that.

{noformat:title=Required Configs}
<property>
  <name>yarn.nodemanager.runtime.linux.allowed-runtimes</name>
  <value>docker,default,runc</value>
</property>

<property>
  <name>yarn.nodemanager.runtime.linux.runc.allowed-images</name>
  <value>$Name of image tag</value>
</property>

<property>
  <name>yarn.nodemanager.runtime.linux.runc.image-name</name>
  <value>$Name of image tag</value>
</property>

<property>
  <name>yarn.nodemanager.runtime.linux.runc.image-tag-to-manifest-plugin.image-toplevel-dir</name>
  <value>/runc-root</value>
</property>
{noformat}


{noformat:title=Doesn't need to be nscd, but you need some strategy to make 
sure that your username can be resolved in the container via its uid.}
<property>
  <name>yarn.nodemanager.runtime.linux.runc.default-rw-mounts</name>
  <value>/var/run/nscd:/var/run/nscd</value>
</property>
{noformat}

{noformat:title=At least 1 of the following 2 configs needs to be set. If you 
use the docker-to-squash tool from YARN-9564 then you should only need to set 
the hdfs hash file}
<property>
  <name>yarn.nodemanager.runtime.linux.runc.image-tag-to-manifest-plugin.local-hash-file</name>
  <value>/home/ebadger/image-tag-to-hash</value>
</property>

<property>
  <name>yarn.nodemanager.runtime.linux.runc.image-tag-to-manifest-plugin.hdfs-hash-file</name>
  <value>/runc-root/image-tag-to-hash</value>
</property>
{noformat}

> Add Java changes for the new RuncContainerRuntime
> -
>
> Key: YARN-9562
> URL: https://issues.apache.org/jira/browse/YARN-9562
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9562.001.patch, YARN-9562.002.patch
>
>
> This JIRA will be used to add the Java changes for the new 
> RuncContainerRuntime. This will work off of YARN-9560 to use much of the 
> existing DockerLinuxContainerRuntime code once it is moved up into an 
> abstract class that can be extended. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9564) Create docker-to-squash tool for image conversion

2019-08-01 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898345#comment-16898345
 ] 

Eric Badger commented on YARN-9564:
---

Once the tool completes, your hdfs-root should look something like this. There 
may be more or fewer layers depending on the image that you choose to build.

{noformat}
[ebadger@foo bin]$ hadoop fs -ls /runc-root/*
Found 1 items
-r--r--r--   1 ebadger supergroup   8038 2019-08-01 20:22 
/runc-root/config/599579330b2d6f01817b402be0cdc0af1a9b7b076f3743483c2c509b7f73c5ad
-r--r--r--   1 ebadger supergroup153 2019-08-01 20:22 
/runc-root/image-tag-to-hash
Found 10 items
-r--r--r--   1 ebadger supergroup 1143881728 2019-08-01 20:19 
/runc-root/layers/1b5ed3a410f4c2e8948816ceaae20aa2f2ba9aab682ca899f900fc4d43f6e98b.sqsh
-r--r--r--   1 ebadger supergroup   4096 2019-08-01 20:21 
/runc-root/layers/23c1e1af124ca00519aa2858e891a4f8c3a25e9350b39419b786b638f312f8d6.sqsh
-r--r--r--   1 ebadger supergroup   4096 2019-08-01 20:20 
/runc-root/layers/253aace22c99909557f14eebe73d5a56b3c155cc3ce1a941755d53fe10d53177.sqsh
-r--r--r--   1 ebadger supergroup  106979328 2019-08-01 20:20 
/runc-root/layers/50ede125b97f8bc1c4637b17c5d5fb93cd82088ad07d8143e07482cfbe025f6e.sqsh
-r--r--r--   1 ebadger supergroup   4096 2019-08-01 20:22 
/runc-root/layers/8227f11724713c930b1eed248657e725d200bf58cb2796cd1d2ed6404b49a5a9.sqsh
-r--r--r--   1 ebadger supergroup1748992 2019-08-01 20:19 
/runc-root/layers/ae4a0c3146659d5a83077c7143eb255c5115223ef346eb462124c9fe637d288a.sqsh
-r--r--r--   1 ebadger supergroup  28672 2019-08-01 20:19 
/runc-root/layers/b38724009ad1954978bb8c0943d6463bab47c7aab3f463085f7c0ba5fcf30145.sqsh
-r--r--r--   1 ebadger supergroup  360402944 2019-08-01 20:20 
/runc-root/layers/db1a08a14163561a7c7b78ecdc8a98211530b50a1a3eed9483bd556f688788f9.sqsh
-r--r--r--   1 ebadger supergroup   4096 2019-08-01 20:21 
/runc-root/layers/eb8f6b81108d2f3f5dd1de79805a34be5561ebb247aa8d4f87f096a9109335b9.sqsh
-r--r--r--   1 ebadger supergroup  114610176 2019-08-01 20:21 
/runc-root/layers/fb4e5c73119d8f54a37f12055ab9aa9d12e1e77c015742c86175a6a6d8b6e5fb.sqsh
Found 1 items
-r--r--r--   1 ebadger supergroup   2418 2019-08-01 20:22 
/runc-root/manifests/b71721ecf1e6cbbae071621b22f9003f6909b24bd883d804fef61ec2548bfda9
{noformat}

Your image-tag-to-hash file should look something like this:

{noformat}
[ebadger@foo bin]$ hadoop fs -cat /runc-root/image-tag-to-hash
busybox:latest:b71721ecf1e6cbbae071621b22f9003f6909b24bd883d804fef61ec2548bfda9#registry.hub.docker.com/library/busybox,busybox:latest
{noformat}

Note that the hashes used above do not actually reflect the exact hashes that 
busybox will get you. 

> Create docker-to-squash tool for image conversion
> -
>
> Key: YARN-9564
> URL: https://issues.apache.org/jira/browse/YARN-9564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9564.001.patch
>
>
> The new runc runtime uses docker images that are converted into multiple 
> squashfs images. Each layer of the docker image will get its own squashfs 
> image. We need a tool to help automate the creation of these squashfs images 
> when all we have is a docker image
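> As a rough sketch of what such a tool automates per layer (the commands and 
> the <layer-sha256> placeholder are illustrative, not the tool's exact 
> implementation):
> {noformat}
> # fetch the image so each layer is a local tarball named by its digest
> skopeo copy docker://registry.hub.docker.com/library/busybox:latest dir:./busybox
> # unpack one layer and squash it into its own image
> mkdir rootfs && tar -C rootfs -xf ./busybox/<layer-sha256>
> mksquashfs rootfs <layer-sha256>.sqsh
> # upload the per-layer squashfs image to HDFS
> hadoop fs -put <layer-sha256>.sqsh /runc-root/layers/
> {noformat}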



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9564) Create docker-to-squash tool for image conversion

2019-08-01 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898342#comment-16898342
 ] 

Eric Badger commented on YARN-9564:
---

{noformat}
[ebadger@foo bin]$ ./docker-to-squash.py -h
usage: docker-to-squash.py [-h] [--working-dir WORKING_DIR]
   [--skopeo-format SKOPEO_FORMAT]
   [--pull-format PULL_FORMAT] [-l LOG_LEVEL]
   [--hdfs-root HDFS_ROOT]
   [--image-tag-to-hash IMAGE_TAG_TO_HASH]
   [-r REPLICATION] [--hadoop-prefix HADOOP_PREFIX]
   [-f] [--check-magic-file] [--magic-file MAGIC_FILE]

   
{pull-build-push-update,pull-build,push-update,remove-image,remove-tag,add-tag,copy-update,query-tag,list-tags}
   ...

positional arguments:
  
{pull-build-push-update,pull-build,push-update,remove-image,remove-tag,add-tag,copy-update,query-tag,list-tags}
sub help
pull-build-push-update
Pull an image, build its squashfs layers, push it to
hdfs, and atomically update the image-tag-to-hash file
pull-build  Pull an image and build its squashfs layers
push-update Push the squashfs layers to hdfs and update the image-
tag-to-hash file
remove-imageRemove an image (manifest, config, layers) from hdfs
based on its tag or manifest hash
remove-tag  Remove an image to tag mapping in the image-tag-to-
hash file
add-tag Add an image to tag mapping in the image-tag-to-hash
file
copy-update Copy an image from hdfs in one cluster to another and
then update the image-tag-to-hash file
query-tag   Get the manifest, config, and layers associated with a
tag
list-tags   List all tags in image-tag-to-hash file

optional arguments:
  -h, --helpshow this help message and exit
  --working-dir WORKING_DIR
Name of working directory
  --skopeo-format SKOPEO_FORMAT
Output format for skopeo copy
  --pull-format PULL_FORMAT
Pull format for skopeo
  -l LOG_LEVEL, --log LOG_LEVEL
Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
  --hdfs-root HDFS_ROOT
The root directory in HDFS for all of the squashfs
images
  --image-tag-to-hash IMAGE_TAG_TO_HASH
image-tag-to-hash filepath or filename in hdfs
  -r REPLICATION, --replication REPLICATION
Replication factor for all files uploaded to HDFS
  --hadoop-prefix HADOOP_PREFIX
hadoop_prefix value for environment
  -f, --force   Force overwrites in HDFS
  --check-magic-fileCheck for a specific magic file in the image before
uploading
  --magic-file MAGIC_FILE
The magic file to check for in the image
{noformat}

{noformat:title=Building and pushing a new image to HDFS}
./docker-to-squash.py --log=DEBUG pull-build-push-update <image>,<tag>
{noformat}

{noformat:title=Example}
./docker-to-squash.py --log=DEBUG pull-build-push-update 
registry.hub.docker.com/library/busybox,busybox:latest
{noformat}

Note that the busybox image won't be enough to run the runC containers, since 
it won't have Java, some native libs, and any other Hadoop things that are 
needed to start a container. So you should point this to a Docker image that 
you can run DockerLinuxContainerRuntime with.
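For completeness, a hypothetical smoke test once everything above is in place 
(the YARN_CONTAINER_RUNTIME_* env var names assume the Docker runtime's 
convention carries over to runc; verify them against the final patch):

{noformat}
yarn jar $HADOOP_HOME/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-*.jar \
  -shell_command "id" \
  -shell_env YARN_CONTAINER_RUNTIME_TYPE=runc \
  -shell_env YARN_CONTAINER_RUNTIME_RUNC_IMAGE=<your image tag> \
  -jar $HADOOP_HOME/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-*.jar \
  -num_containers 1
{noformat}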

> Create docker-to-squash tool for image conversion
> -
>
> Key: YARN-9564
> URL: https://issues.apache.org/jira/browse/YARN-9564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9564.001.patch
>
>
> The new runc runtime uses docker images that are converted into multiple 
> squashfs images. Each layer of the docker image will get its own squashfs 
> image. We need a tool to help automate the creation of these squashfs images 
> when all we have is a docker image



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-08-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898340#comment-16898340
 ] 

Eric Yang commented on YARN-9718:
-

Patch 002 fixed checkstyle and whitespace issues.

> Yarn REST API, services endpoint remote command ejection
> 
>
> Key: YARN-9718
> URL: https://issues.apache.org/jira/browse/YARN-9718
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1, 3.1.2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9718.001.patch, YARN-9718.002.patch
>
>
> Email from Oskars Vegeris:
>  
> During internal infrastructure testing it was discovered that the Hadoop Yarn 
> REST endpoint /app/v1/services contains a command injection vulnerability.
>  
> The services endpoint's normal use-case is launching containers (e.g. 
> Docker images/apps); however, by providing an argument with special shell 
> characters, it is possible to execute arbitrary commands on the host server - 
> this would allow an attacker to escalate privileges and access. 
>  
> The command injection is possible in the parameter for JVM options - 
> "yarn.service.am.java.opts". It's possible to enter arbitrary shell commands 
> by using sub-shell syntax `cmd` or $(cmd). No shell character filtering is 
> performed. 
>  
> The "launch_command" that needs to be provided is meant for the container; 
> if it's not being run in privileged mode or with special options, the host OS 
> should not be accessible.
>  
> I've attached a minimal request sample with an injected 'ping' command. The 
> endpoint can also be found via the UI @ 
> [http://yarn-resource-manager:8088/ui2/#/yarn-services]
>  
> If no auth, or "simple auth" (username) is enabled, commands can be executed 
> on the host OS. I know commands can also be run via the "new-application" 
> feature; however, this is clearly not meant to be a way to touch the host OS.
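> For illustration only (the service name and payload are hypothetical), the 
> vulnerable property travels in the POST /app/v1/services body roughly like 
> this:
> {code}
> {
>   "name": "sleeper-service",
>   "version": "1.0",
>   "configuration": {
>     "properties": {
>       "yarn.service.am.java.opts": "-Xmx256m $(ping -c 1 attacker.example.com)"
>     }
>   }
> }
> {code}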



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-08-01 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9718:

Attachment: YARN-9718.002.patch

> Yarn REST API, services endpoint remote command ejection
> 
>
> Key: YARN-9718
> URL: https://issues.apache.org/jira/browse/YARN-9718
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1, 3.1.2
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9718.001.patch, YARN-9718.002.patch
>
>
> Email from Oskars Vegeris:
>  
> During internal infrastructure testing it was discovered that the Hadoop Yarn 
> REST endpoint /app/v1/services contains a command injection vulnerability.
>  
> The services endpoint's normal use-case is launching containers (e.g. 
> Docker images/apps); however, by providing an argument with special shell 
> characters, it is possible to execute arbitrary commands on the host server - 
> this would allow an attacker to escalate privileges and access. 
>  
> The command injection is possible in the parameter for JVM options - 
> "yarn.service.am.java.opts". It's possible to enter arbitrary shell commands 
> by using sub-shell syntax `cmd` or $(cmd). No shell character filtering is 
> performed. 
>  
> The "launch_command" that needs to be provided is meant for the container; 
> if it's not being run in privileged mode or with special options, the host OS 
> should not be accessible.
>  
> I've attached a minimal request sample with an injected 'ping' command. The 
> endpoint can also be found via the UI @ 
> [http://yarn-resource-manager:8088/ui2/#/yarn-services]
>  
> If no auth, or "simple auth" (username) is enabled, commands can be executed 
> on the host OS. I know commands can also be run via the "new-application" 
> feature; however, this is clearly not meant to be a way to touch the host OS.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9564) Create docker-to-squash tool for image conversion

2019-08-01 Thread Eric Badger (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated YARN-9564:
--
Attachment: YARN-9564.001.patch

> Create docker-to-squash tool for image conversion
> -
>
> Key: YARN-9564
> URL: https://issues.apache.org/jira/browse/YARN-9564
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: YARN-9564.001.patch
>
>
> The new runc runtime uses docker images that are converted into multiple 
> squashfs images. Each layer of the docker image will get its own squashfs 
> image. We need a tool to help automate the creation of these squashfs images 
> when all we have is a docker image



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898287#comment-16898287
 ] 

Hadoop QA commented on YARN-9718:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core:
 The patch generated 4 new + 14 unchanged - 0 fixed = 18 total (was 14) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 50s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m 
55s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-9718 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976456/YARN-9718.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0d946e98 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a7371a7 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/2/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/2/artifact/out/whitespace-eol.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-8959) TestContainerResizing fails randomly

2019-08-01 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898236#comment-16898236
 ] 

Szilard Nemeth commented on YARN-8959:
--

[~BilwaST]: Can I take this over?

> TestContainerResizing fails randomly
> 
>
> Key: YARN-8959
> URL: https://issues.apache.org/jira/browse/YARN-8959
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bilwa S T
>Priority: Minor
>
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.testSimpleDecreaseContainer
> {code}
> testSimpleDecreaseContainer(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing)
>   Time elapsed: 0.348 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1024> but was:<3072>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.checkUsedResource(TestContainerResizing.java:1011)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.testSimpleDecreaseContainer(TestContainerResizing.java:210)
> {code}
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.testIncreaseContainerUnreservedWhenContainerCompleted
> {code}
> testIncreaseContainerUnreservedWhenContainerCompleted(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing)
>   Time elapsed: 0.445 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1024> but was:<7168>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.checkUsedResource(TestContainerResizing.java:1011)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.testIncreaseContainerUnreservedWhenContainerCompleted(TestContainerResizing.java:729)
> {code}
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.testExcessiveReservationWhenDecreaseSameContainer
> {code}
> testExcessiveReservationWhenDecreaseSameContainer(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing)
>   Time elapsed: 0.321 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1024> but was:<2048>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.checkUsedResource(TestContainerResizing.java:1015)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.testExcessiveReservationWhenDecreaseSameContainer(TestContainerResizing.java:623)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5684) testDecreaseAfterIncreaseWithAllocationExpiration fails intermittently

2019-08-01 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-5684?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-5684:


Assignee: Szilard Nemeth

> testDecreaseAfterIncreaseWithAllocationExpiration fails intermittently 
> ---
>
> Key: YARN-5684
> URL: https://issues.apache.org/jira/browse/YARN-5684
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Szilard Nemeth
>Priority: Major
>
> Saw the following in a precommit:
> {code}
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
> testDecreaseAfterIncreaseWithAllocationExpiration(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer)
>   Time elapsed: 10.726 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer.testDecreaseAfterIncreaseWithAllocationExpiration(TestIncreaseAllocationExpirer.java:367)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7387) org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer fails intermittently

2019-08-01 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-7387:


Assignee: Szilard Nemeth

> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
>  fails intermittently
> ---
>
> Key: YARN-7387
> URL: https://issues.apache.org/jira/browse/YARN-7387
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Szilard Nemeth
>Priority: Major
>
> {code}
> Tests run: 4, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 52.481 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
> testDecreaseAfterIncreaseWithAllocationExpiration(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer)
>   Time elapsed: 13.292 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<3072> but was:<4096>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer.testDecreaseAfterIncreaseWithAllocationExpiration(TestIncreaseAllocationExpirer.java:459)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9690) Invalid AMRM token when distributed scheduling is enabled.

2019-08-01 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi reassigned YARN-9690:
---

Assignee: (was: Abhishek Modi)

> Invalid AMRM token when distributed scheduling is enabled.
> --
>
> Key: YARN-9690
> URL: https://issues.apache.org/jira/browse/YARN-9690
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: distributed-scheduling, yarn
>Affects Versions: 2.9.2, 3.1.2
> Environment: OS: Ubuntu 18.04
> JVM: 1.8.0_212-8u212-b03-0ubuntu1.18.04.1-b03
>Reporter: Babble Shack
>Priority: Major
> Attachments: applicationlog, distributed_log, ds_application.log, 
> image-2019-07-26-18-00-14-980.png, nodemanager-yarn-site.xml, 
> nodemanager.log, rm-yarn-site.xml, yarn-site.xml
>
>
> Applications fail to start due to an invalid AMRM token from the application attempt.
> I have tested this with 0/100% opportunistic maps and the same issue occurs 
> regardless.
> {code:xml}
> <configuration>
>   <property>
>     <name>mapreduceyarn.nodemanager.aux-services</name>
>     <value>mapreduce_shuffle</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.address</name>
>     <value>yarn-master-0.yarn-service.yarn:8032</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.scheduler.address</name>
>     <value>0.0.0.0:8049</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.opportunistic-container-allocation.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.opportunistic-containers-max-queue-length</name>
>     <value>10</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.distributed-scheduling.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.webapp.ui2.enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.resource-tracker.address</name>
>     <value>yarn-master-0.yarn-service.yarn:8031</value>
>   </property>
>   <property>
>     <name>yarn.log-aggregation-enable</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.aux-services</name>
>     <value>mapreduce_shuffle</value>
>   </property>
>   <property>
>     <name>yarn.nodemanager.resource.memory-mb</name>
>     <value>7168</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.minimum-allocation-mb</name>
>     <value>3584</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.maximum-allocation-mb</name>
>     <value>7168</value>
>   </property>
>   <property>
>     <name>yarn.app.mapreduce.am.resource.mb</name>
>     <value>7168</value>
>   </property>
>   <property>
>     <name>yarn.app.mapreduce.am.command-opts</name>
>     <value>-Xmx5734m</value>
>   </property>
>   <property>
>     <name>yarn.timeline-service.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.timeline-service.generic-application-history.enabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>yarn.timeline-service.bind-host</name>
>     <value>0.0.0.0</value>
>   </property>
> </configuration>
> {code}
> Relevant logs:
> {code:java}
> 2019-07-22 14:56:37,104 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: 100% of the 
> mappers will be scheduled using OPPORTUNISTIC containers
> 2019-07-22 14:56:37,117 INFO [main] org.apache.hadoop.yarn.client.RMProxy: 
> Connecting to ResourceManager at 
> yarn-master-0.yarn-service.yarn/10.244.1.134:8030
> 2019-07-22 14:56:37,150 WARN [main] org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.token.SecretManager$InvalidToken):
>  Invalid AMRMToken from appattempt_1563805140414_0002_02
> 2019-07-22 14:56:37,152 ERROR [main] 
> org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator: Exception while 
> registering
> org.apache.hadoop.security.token.SecretManager$InvalidToken: Invalid 
> AMRMToken from appattempt_1563805140414_0002_02
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>     at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>     at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>     at 
> org.apache.hadoop.yarn.ipc.RPCUtil.instantiateIOException(RPCUtil.java:80)
>     at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:119)
>     at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(ApplicationMasterProtocolPBClientImpl.java:109)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>     at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>     at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>     at 
> 

[jira] [Assigned] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-08-01 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned YARN-9718:
---

 Assignee: Eric Yang
Affects Version/s: 3.1.0
   3.2.0
   3.1.1
   3.1.2
 Target Version/s: 3.3.0, 3.2.1, 3.1.3

> Yarn REST API, services endpoint remote command ejection
> 
>
> Key: YARN-9718
> URL: https://issues.apache.org/jira/browse/YARN-9718
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.2, 3.1.1, 3.2.0, 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-9718.001.patch
>
>
> Email from Oskars Vegeris:
>  
> During internal infrastructure testing it was discovered that the Hadoop Yarn 
> REST endpoint /app/v1/services contains a command injection vulnerability.
>  
> The services endpoint's normal use-case is launching containers (e.g. 
> Docker images/apps); however, by providing an argument with special shell 
> characters, it is possible to execute arbitrary commands on the host server - 
> this would allow an attacker to escalate privileges and access. 
>  
> The command injection is possible in the parameter for JVM options - 
> "yarn.service.am.java.opts". It's possible to enter arbitrary shell commands 
> by using sub-shell syntax `cmd` or $(cmd). No shell character filtering is 
> performed. 
>  
> The "launch_command" that needs to be provided is meant for the container; 
> if it's not being run in privileged mode or with special options, the host OS 
> should not be accessible.
>  
> I've attached a minimal request sample with an injected 'ping' command. The 
> endpoint can also be found via the UI @ 
> [http://yarn-resource-manager:8088/ui2/#/yarn-services]
>  
> If no auth, or "simple auth" (username) is enabled, commands can be executed 
> on the host OS. I know commands can also be run via the "new-application" 
> feature; however, this is clearly not meant to be a way to touch the host OS.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9718) Yarn REST API, services endpoint remote command ejection

2019-08-01 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-9718:

Attachment: YARN-9718.001.patch

> Yarn REST API, services endpoint remote command ejection
> 
>
> Key: YARN-9718
> URL: https://issues.apache.org/jira/browse/YARN-9718
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
> Attachments: YARN-9718.001.patch
>
>
> Email from Oskars Vegeris:
>  
> During internal infrastructure testing it was discovered that the Hadoop Yarn 
> REST endpoint /app/v1/services contains a command injection vulnerability.
>  
> The services endpoint's normal use-case is launching containers (e.g. 
> Docker images/apps); however, by providing an argument with special shell 
> characters, it is possible to execute arbitrary commands on the host server - 
> this would allow an attacker to escalate privileges and access. 
>  
> The command injection is possible in the parameter for JVM options - 
> "yarn.service.am.java.opts". It's possible to enter arbitrary shell commands 
> by using sub-shell syntax `cmd` or $(cmd). No shell character filtering is 
> performed. 
>  
> The "launch_command" that needs to be provided is meant for the container; 
> if it's not being run in privileged mode or with special options, the host OS 
> should not be accessible.
>  
> I've attached a minimal request sample with an injected 'ping' command. The 
> endpoint can also be found via the UI @ 
> [http://yarn-resource-manager:8088/ui2/#/yarn-services]
>  
> If no auth, or "simple auth" (username) is enabled, commands can be executed 
> on the host OS. I know commands can also be run via the "new-application" 
> feature; however, this is clearly not meant to be a way to touch the host OS.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9718) Yarn REST API, services endpoint remote command injection

2019-08-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898220#comment-16898220
 ] 

Eric Yang commented on YARN-9718:
-

This issue has been classified as unexpected product behavior rather than a 
security hole. Opening it as a JIRA ticket to track the development progress.

> Yarn REST API, services endpoint remote command injection
> 
>
> Key: YARN-9718
> URL: https://issues.apache.org/jira/browse/YARN-9718
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Eric Yang
>Priority: Major
>
> Email from Oskars Vegeris:
>  
> During internal infrastructure testing it was discovered that the Hadoop Yarn 
> REST endpoint /app/v1/services contains a command injection vulnerability.
>  
> The services endpoint's normal use case is launching containers (e.g. Docker 
> images/apps); however, by providing an argument with special shell characters 
> it is possible to execute arbitrary commands on the host server, which would 
> allow an attacker to escalate privileges and access.
>  
> Command injection is possible via the JVM options parameter, 
> "yarn.service.am.java.opts". Arbitrary shell commands can be entered using 
> sub-shell syntax `cmd` or $(cmd); no shell character filtering is performed.
>  
> The "launch_command" that needs to be provided is meant for the container; if 
> the container is not run in privileged mode or with special options, the host 
> OS should not be accessible.
>  
> I've attached a minimal request sample with an injected 'ping' command. The 
> endpoint can also be found via the UI at 
> [http://yarn-resource-manager:8088/ui2/#/yarn-services]
>  
> If no auth or "simple auth" (username only) is enabled, commands can be 
> executed on the host OS. I know commands can also be run via the 
> "new-application" feature; however, that is clearly not meant to be a way to 
> touch the host OS.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9718) Yarn REST API, services endpoint remote command injection

2019-08-01 Thread Eric Yang (JIRA)
Eric Yang created YARN-9718:
---

 Summary: Yarn REST API, services endpoint remote command injection
 Key: YARN-9718
 URL: https://issues.apache.org/jira/browse/YARN-9718
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Eric Yang


Email from Oskars Vegeris:

 
During internal infrastructure testing it was discovered that the Hadoop Yarn 
REST endpoint /app/v1/services contains a command injection vulnerability.
 
The services endpoint's normal use case is launching containers (e.g. Docker 
images/apps); however, by providing an argument with special shell characters 
it is possible to execute arbitrary commands on the host server, which would 
allow an attacker to escalate privileges and access.
 
Command injection is possible via the JVM options parameter, 
"yarn.service.am.java.opts". Arbitrary shell commands can be entered using 
sub-shell syntax `cmd` or $(cmd); no shell character filtering is performed.
 
The "launch_command" that needs to be provided is meant for the container; if 
the container is not run in privileged mode or with special options, the host 
OS should not be accessible.
 
I've attached a minimal request sample with an injected 'ping' command. The 
endpoint can also be found via the UI at 
[http://yarn-resource-manager:8088/ui2/#/yarn-services]
 
If no auth or "simple auth" (username only) is enabled, commands can be 
executed on the host OS. I know commands can also be run via the 
"new-application" feature; however, that is clearly not meant to be a way to 
touch the host OS.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9583) Failed job which is submitted to unknown queue is shown to all users

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898216#comment-16898216
 ] 

Wei-Chiu Chuang commented on YARN-9583:
---

[~Prabhu Joseph] [~sunilg] [~wangda] can you help with this review?

> Failed job which is submitted to unknown queue is shown to all users
> ---
>
> Key: YARN-9583
> URL: https://issues.apache.org/jira/browse/YARN-9583
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Major
> Attachments: YARN-9583-screenshot.png, YARN-9583.001.patch, 
> YARN-9583.002.patch, YARN-9583.003.patch, YARN-9583.004.patch, 
> YARN-9583.005.patch
>
>
> In secure mode, a failed job that was submitted to an unknown queue is shown 
> to all users.
> I attached an RM UI screenshot.
> Reproduction scenario:
>    1. User foo submits a job to an unknown queue without a view-acl, and the 
> job fails immediately.
>    2. User bar can access the job of user foo which failed previously.
> According to the comments in QueueACLsManager.java that caused the problem, 
> this situation can happen when the RM is restarted after a queue is deleted.
> I think showing an app of a non-existent queue to all users after an RM 
> restart is the problem; it becomes a security hole.
> I fixed it a little bit.
> After the fix, both the owner of the job and the YARN admin can access a job 
> that was submitted to an unknown queue.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9583) Failed job which is submitted to unknown queue is shown to all users

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9583:
-

Assignee: KWON BYUNGCHANG

> Failed job which is submitted to unknown queue is shown to all users
> ---
>
> Key: YARN-9583
> URL: https://issues.apache.org/jira/browse/YARN-9583
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Major
> Attachments: YARN-9583-screenshot.png, YARN-9583.001.patch, 
> YARN-9583.002.patch, YARN-9583.003.patch, YARN-9583.004.patch, 
> YARN-9583.005.patch
>
>
> In secure mode, a failed job that was submitted to an unknown queue is shown 
> to all users.
> I attached an RM UI screenshot.
> Reproduction scenario:
>    1. User foo submits a job to an unknown queue without a view-acl, and the 
> job fails immediately.
>    2. User bar can access the job of user foo which failed previously.
> According to the comments in QueueACLsManager.java that caused the problem, 
> this situation can happen when the RM is restarted after a queue is deleted.
> I think showing an app of a non-existent queue to all users after an RM 
> restart is the problem; it becomes a security hole.
> I fixed it a little bit.
> After the fix, both the owner of the job and the YARN admin can access a job 
> that was submitted to an unknown queue.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9605) Add ZkConfiguredFailoverProxyProvider for RM HA

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9605:
-

Assignee: zhoukang

> Add ZkConfiguredFailoverProxyProvider for RM HA
> ---
>
> Key: YARN-9605
> URL: https://issues.apache.org/jira/browse/YARN-9605
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: zhoukang
>Assignee: zhoukang
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-9605.001.patch
>
>
> In this issue, I will track a new feature to support 
> ZkConfiguredFailoverProxyProvider for RM HA



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-9647) Docker launch fails when local-dirs or log-dirs is unhealthy.

2019-08-01 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned YARN-9647:
-

Assignee: KWON BYUNGCHANG

> Docker launch fails when local-dirs or log-dirs is unhealthy.
> -
>
> Key: YARN-9647
> URL: https://issues.apache.org/jira/browse/YARN-9647
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.1.2
>Reporter: KWON BYUNGCHANG
>Assignee: KWON BYUNGCHANG
>Priority: Major
> Attachments: YARN-9647.001.patch, YARN-9647.002.patch
>
>
> My /etc/hadoop/conf/container-executor.cfg:
> {code}
> [docker]
>docker.allowed.ro-mounts=/data1/hadoop/yarn/local,/data2/hadoop/yarn/local
>docker.allowed.rw-mounts=/data1/hadoop/yarn/local,/data2/hadoop/yarn/local
> {code}
> If /data2 is unhealthy, the Docker launch fails even though the container 
> could use /data1 as its local-dir and log-dir.
> The error message is below:
> {code}
> [2019-06-25 14:55:26.168]Exception from container-launch. Container id: 
> container_e50_1561100493387_5185_01_000597 Exit code: 29 Exception message: 
> Launch container failed Shell error output: Could not determine real path of 
> mount '/data2/hadoop/yarn/local' Could not determine real path of mount 
> '/data2/hadoop/yarn/local' Unable to find permitted docker mounts on disk 
> Error constructing docker command, docker error code=16, error message='Mount 
> access error' Shell output: main : command provided 4 main : run as user is 
> magnum main : requested yarn user is magnum Creating script paths... Creating 
> local dirs... [2019-06-25 14:55:26.189]Container exited with a non-zero exit 
> code 29. [2019-06-25 14:55:26.192]Container exited with a non-zero exit code 
> 29. 
> {code}
> The root cause is that normalize_mounts() in docker-util.c returns -1 because 
> it cannot resolve the real path of /data2/hadoop/yarn/local (note that /data2 
> has a disk fault at this point).
> However, the disks backing NM local dirs and log dirs can fail at any time; 
> the Docker launch should succeed as long as some healthy local dirs and log 
> dirs remain available.
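As a sketch of the direction suggested above (not the actual patch), 
normalize_mounts() could warn and skip a mount whose real path cannot be 
resolved instead of failing the whole launch. The helper name, signature, and 
error stream below are illustrative, not the real docker-util.c code:

{code:c}
#include <stdio.h>
#include <stdlib.h>

/* Illustrative sketch only: drop unresolvable mounts rather than aborting.
 * Assumes mounts is a NULL-terminated array of heap-allocated strings. */
static int normalize_mounts_tolerant(char **mounts, FILE *error_log) {
  int kept = 0;
  for (int i = 0; mounts[i] != NULL; i++) {
    char *real = realpath(mounts[i], NULL);
    if (real == NULL) {
      /* The backing disk may be faulty; warn and skip this mount. */
      fprintf(error_log, "Skipping mount '%s': could not determine real path\n",
              mounts[i]);
      free(mounts[i]);
      continue;
    }
    free(mounts[i]);
    mounts[kept++] = real; /* keep the resolved path */
  }
  mounts[kept] = NULL;
  return kept > 0 ? 0 : -1; /* fail only if no usable mounts remain */
}
{code}

With a change along these lines, a faulty /data2 would be dropped from the 
permitted mount list while /data1 keeps working, matching the expectation 
stated above.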



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9717) Add more logging to container-executor about issues with directory creation or permissions

2019-08-01 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-9717:
-
Description: 
During some downstream testing we bumped into some problems with the container 
executor where extra logging would be quite helpful when local files and 
directories could not be created (container-executor.c:1810).

The most important log line concerns the function create_container_directories 
in container-executor.c. Before the directory creation we currently have:

{code:java}
if (mkdirs(container_dir, perms) == 0) {
  result = 0;
}
{code}

We could add an else branch with the following log line for the case where 
creating the directory was not successful:

{code:java}
fprintf(LOGFILE, "Failed to create directory: %s, user: %s\n", container_dir,
        user);
{code}

This way, the container-executor at least prints the directory itself if we 
have any permission issue while trying to create a subdirectory or file under 
it.
If we want to be very precise, some logging inside the mkdirs function could 
be added as well.
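Put together, a minimal sketch of the proposed change, assuming the 
surrounding variables of create_container_directories (container_dir, perms, 
user, result, and LOGFILE), would be:

{code:c}
/* Sketch: only the else branch is new; the rest is the existing code. */
if (mkdirs(container_dir, perms) == 0) {
  result = 0;
} else {
  /* Log the directory and user so permission problems show up in the CE log. */
  fprintf(LOGFILE, "Failed to create directory: %s, user: %s\n",
          container_dir, user);
}
{code}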


> Add more logging to container-executor about issues with directory creation 
> or permissions
> --
>
> Key: YARN-9717
> URL: https://issues.apache.org/jira/browse/YARN-9717
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
>
> During some downstream testing we bumped into some problems with the 
> container executor where extra logging would be quite helpful when local 
> files and directories could not be created (container-executor.c:1810).
> The most important log line concerns the function 
> create_container_directories in container-executor.c. Before the directory 
> creation we currently have:
> {code:java}
> if (mkdirs(container_dir, perms) == 0) {
>   result = 0;
> }
> {code}
> We could add an else branch with the following log line for the case where 
> creating the directory was not successful:
> {code:java}
> fprintf(LOGFILE, "Failed to create directory: %s, user: %s\n", container_dir,
>         user);
> {code}
> This way, the container-executor at least prints the directory itself if we 
> have any permission issue while trying to create a subdirectory or file 
> under it.
> If we want to be very precise, some logging inside the mkdirs function could 
> be added as well.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9667) Container-executor.c duplicates messages to stdout

2019-08-01 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16898039#comment-16898039
 ] 

Szilard Nemeth commented on YARN-9667:
--

Hi [~adam.antal]!
As discussed offline with [~pbacsko], we will create a separate jira to cover 
the log message additions, since they aim to achieve different things than 
this jira.
Filed YARN-9717.

> Container-executor.c duplicates messages to stdout
> --
>
> Key: YARN-9667
> URL: https://issues.apache.org/jira/browse/YARN-9667
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, yarn
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Peter Bacsko
>Priority: Major
>
> When a container is killed by its AM, we get an error message similar to 
> this:
> {noformat}
> 2019-06-30 12:09:04,412 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor:
>  Shell execution returned exit code: 143. Privileged Execution Operation 
> Stderr:
> Stdout: main : command provided 1
> main : run as user is systest
> main : requested yarn user is systest
> Getting exit code file...
> Creating script paths...
> Writing pid file...
> Writing to tmp file 
> /yarn/nm/nmPrivate/application_1561921629886_0001/container_e84_1561921629886_0001_01_19/container_e84_1561921629886_0001_01_19.pid.tmp
> Writing to cgroup task files...
> Creating local dirs...
> Launching container...
> Getting exit code file...
> Creating script paths...
> {noformat}
> In container-executor.c the fork point is right after the "Creating script 
> paths..." part, yet in the stdout log we can clearly see that line has been 
> written twice. After consulting with [~pbacsko], it seems there is a missing 
> flush in container-executor.c before the fork, and that causes the 
> duplication.
> I suggest adding a flush there so that the output won't be duplicated: it's 
> a bit misleading that the child process appears to write out "Getting exit 
> code file" and "Creating script paths" even though it is clearly not doing 
> that.
> A more appealing solution could be to revisit the fprintf-fflush pairs in 
> the code and change them into a single call, so that the fflush calls cannot 
> be forgotten accidentally. (A missing flush can cause problems in every 
> place where fork is used.)
> Note: this issue probably affects every occurrence of fork(), not just the 
> one from {{launch_container_as_user}} in {{main.c}}.
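A minimal sketch of both ideas, assuming the stdio buffering behavior 
described above; the helper names are illustrative, not the actual 
container-executor functions:

{code:c}
#include <stdio.h>
#include <unistd.h>

/* Illustrative helper: print and flush in one call, so a flush cannot be
 * forgotten before a later fork(). */
static void log_and_flush(FILE *stream, const char *msg) {
  fprintf(stream, "%s\n", msg);
  fflush(stream);
}

/* Sketch of the fork site: flush buffered output before forking so the child
 * does not inherit and re-emit the parent's unflushed stdio buffer. */
static pid_t fork_after_flush(void) {
  fflush(stdout);
  fflush(stderr);
  return fork();
}
{code}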



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9717) Add more logging to container-executor about issues with directory creation or permissions

2019-08-01 Thread Szilard Nemeth (JIRA)
Szilard Nemeth created YARN-9717:


 Summary: Add more logging to container-executor about issues with 
directory creation or permissions
 Key: YARN-9717
 URL: https://issues.apache.org/jira/browse/YARN-9717
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Szilard Nemeth
Assignee: Szilard Nemeth






--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1655) Add implementations to FairScheduler to support increase/decrease container resource

2019-08-01 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-1655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897916#comment-16897916
 ] 

Hadoop QA commented on YARN-1655:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 58s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 31s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 11 new + 230 unchanged - 0 fixed = 241 total (was 230) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 47s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}140m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-1655 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976394/YARN-1655.004.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 831729f5d6ba 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 89b102f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/24442/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 

[jira] [Commented] (YARN-1655) Add implementations to FairScheduler to support increase/decrease container resource

2019-08-01 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-1655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16897819#comment-16897819
 ] 

Wilfred Spiegelenburg commented on YARN-1655:
-

Thank you for the feedback [~snemeth], sorry that it took this long.

I have updated the patch and fixed all the remarks.
All except for point 4 are straightforward, simple changes. To fix point 4 I 
did the following:
- made a new {{allocate}} method in MockRM that takes no arguments and calls 
the real allocate with _nulls_
- updated the calls in the test code to use the new method and added a comment 
on what it does (i.e. process outstanding requests)
- split the other {{allocate}} call in the test code into two steps: a 
separate allocation of the request and a call to {{allocate}} on the app 
master

That should clear point 4 up.

 [^YARN-1655.004.patch] 

> Add implementations to FairScheduler to support increase/decrease container 
> resource
> 
>
> Key: YARN-1655
> URL: https://issues.apache.org/jira/browse/YARN-1655
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-1655.001.patch, YARN-1655.002.patch, 
> YARN-1655.003.patch, YARN-1655.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-1655) Add implementations to FairScheduler to support increase/decrease container resource

2019-08-01 Thread Wilfred Spiegelenburg (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-1655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wilfred Spiegelenburg updated YARN-1655:

Attachment: YARN-1655.004.patch

> Add implementations to FairScheduler to support increase/decrease container 
> resource
> 
>
> Key: YARN-1655
> URL: https://issues.apache.org/jira/browse/YARN-1655
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, scheduler
>Reporter: Wangda Tan
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-1655.001.patch, YARN-1655.002.patch, 
> YARN-1655.003.patch, YARN-1655.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org