[jira] [Commented] (YARN-9509) Capped cpu usage with cgroup strict-resource-usage based on a multiplier

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902693#comment-16902693
 ] 

Hadoop QA commented on YARN-9509:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
1s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m 
34s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 17s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 219 unchanged - 0 fixed = 224 total (was 219) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
4s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 26s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Commented] (YARN-9009) Fix flaky test TestEntityGroupFSTimelineStore.testCleanLogs

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902675#comment-16902675
 ] 

Hadoop QA commented on YARN-9009:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 12s{color} 
| {color:red} https://github.com/apache/hadoop/pull/438 does not apply to 
trunk. Rebase required? Wrong Branch? See 
https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| GITHUB PR | https://github.com/apache/hadoop/pull/438 |
| JIRA Issue | YARN-9009 |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-438/4/console |
| versions | git=2.7.4 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Fix flaky test TestEntityGroupFSTimelineStore.testCleanLogs
> ---
>
> Key: YARN-9009
> URL: https://issues.apache.org/jira/browse/YARN-9009
> Project: Hadoop YARN
>  Issue Type: Bug
> Environment: Ubuntu 18.04
> java version "1.8.0_181"
> Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
> Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)
>  
> Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 
> 2018-06-17T13:33:14-05:00)
>Reporter: OrDTesters
>Assignee: OrDTesters
>Priority: Minor
> Fix For: 3.0.4, 3.1.2, 3.3.0, 3.2.1
>
> Attachments: YARN-9009-trunk-001.patch
>
>
> In TestEntityGroupFSTimelineStore, testCleanLogs fails when run after 
> testMoveToDone.
> testCleanLogs fails because testMoveToDone moves a file into the same 
> directory that testCleanLogs cleans, causing testCleanLogs to clean 3 files 
> instead of the 2 it expects.
> To fix the failure, we can delete the file after it has been moved by 
> testMoveToDone.
> Pull request link: [https://github.com/apache/hadoop/pull/438]
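
A minimal sketch of that cleanup idea (hypothetical helper and names, not the 
actual change in PR 438):
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: once testMoveToDone has verified the move, remove the
// moved file so the shared "done" directory is back in the state that
// testCleanLogs expects (2 files, not 3).
static void cleanUpMovedFile(FileSystem fs, Path doneFile) throws IOException {
  if (fs.exists(doneFile)) {
    fs.delete(doneFile, false); // non-recursive: it is a single file
  }
}
{code}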



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-9601) Potential NPE in ZookeeperFederationStateStore#getPoliciesConfigurations

2019-08-07 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved YARN-9601.
---
   Resolution: Fixed
Fix Version/s: 3.3.0

I reviewed and merged the PR. Closing this issue. Thanks [~hunhun]!

> Potential NPE in ZookeeperFederationStateStore#getPoliciesConfigurations
> 
>
> Key: YARN-9601
> URL: https://issues.apache.org/jira/browse/YARN-9601
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation, yarn
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
> Fix For: 3.3.0
>
>
> Potential NPE in ZookeeperFederationStateStore#getPoliciesConfigurations
> The code of ZookeeperFederationStateStore#getPoliciesConfigurations
> {code:java}
> for (String child : zkManager.getChildren(policiesZNode)) {
>   SubClusterPolicyConfiguration policy = getPolicy(child);
>   result.add(policy);
> }
> {code}
> The result of `getPolicy` may be null, so the policy should be checked.
> The new code:
> {code:java}
> for (String child : zkManager.getChildren(policiesZNode)) {
>   SubClusterPolicyConfiguration policy = getPolicy(child);
>   // policy may be null, so check it before adding
>   if (policy == null) {
> LOG.warn("Policy for queue: {} does not exist.", child);
> continue;
>   }
>   result.add(policy);
> }
> {code}
>  






[jira] [Commented] (YARN-9468) Fix inaccurate documentations in Placement Constraints

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902667#comment-16902667
 ] 

Hadoop QA commented on YARN-9468:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
50s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
28m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-717/4/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/717 |
| JIRA Issue | YARN-9468 |
| Optional Tests | dupname asflicense mvnsite |
| uname | Linux b2ebcede66ac 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 70b4617 |
| Max. process+thread count | 466 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-717/4/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> Fix inaccurate documentations in Placement Constraints
> --
>
> Key: YARN-9468
> URL: https://issues.apache.org/jira/browse/YARN-9468
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.2.0
>Reporter: hunshenshi
>Assignee: hunshenshi
>Priority: Major
>
> Document Placement Constraints
> *First* 
> {code:java}
> zk=3,NOTIN,NODE,zk:hbase=5,IN,RACK,zk:spark=7,CARDINALITY,NODE,hbase,1,3{code}
>  * place 5 containers with tag “hbase” with affinity to a rack on which 
> containers with tag “zk” are running (i.e., an “hbase” container 
> should *not* be placed at a rack where an “zk” container 
> is running, given that “zk” is the TargetTag of the second constraint);
> The word _*not*_ in brackets should be deleted.
>  
> *Second*
> {code:java}
> PlacementSpec => "" | KeyVal;PlacementSpec
> {code}
> The semicolon should be replaced by a colon, as shown corrected below.
>  
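
Applying that fix, the production would read:
{code:java}
PlacementSpec => "" | KeyVal:PlacementSpec
{code}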






[jira] [Commented] (YARN-9579) the property of sharedcache in mapred-default.xml

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902650#comment-16902650
 ] 

Hadoop QA commented on YARN-9579:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
21s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-848/5/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/848 |
| JIRA Issue | YARN-9579 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient xml |
| uname | Linux b8b717835a68 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 70b4617 |
| Default Java | 1.8.0_212 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-848/5/testReport/ |
| Max. process+thread count | 1695 (vs. ulimit of 5500) |
| modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core |
| Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-848/5/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |


This message was automatically generated.



> the property of sharedcache in mapred-default.xml
> -
>
> Key: YARN-9579
> URL: https://issues.apache.org/jira/browse/YARN-9579
> Project: Hadoop YARN

[jira] [Assigned] (YARN-9683) Remove reapDockerContainerNoPid left behind by YARN-9074

2019-08-07 Thread kevin su (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kevin su reassigned YARN-9683:
--

Assignee: kevin su

> Remove reapDockerContainerNoPid left behind by YARN-9074
> 
>
> Key: YARN-9683
> URL: https://issues.apache.org/jira/browse/YARN-9683
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Adam Antal
>Assignee: kevin su
>Priority: Trivial
>  Labels: newbie
>
> YARN-9074 touched ContainerCleanup.java but created a separate function 
> instead of using reapDockerContainerNoPid in ContainerCleanup.java.
> Since it has no usages, that private function can be safely removed.






[jira] [Commented] (YARN-9601) Potential NPE in ZookeeperFederationStateStore#getPoliciesConfigurations

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902641#comment-16902641
 ] 

Hadoop QA commented on YARN-9601:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  1m  
7s{color} | {color:blue} Used deprecated FindBugs config; considering switching 
to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
50s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-908/5/artifact/out/Dockerfile
 |
| GITHUB PR | https://github.com/apache/hadoop/pull/908 |
| JIRA Issue | YARN-9601 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle |
| uname | Linux 2672552541af 4.4.0-157-generic #185-Ubuntu SMP Tue Jul 23 
09:17:01 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 70b4617 |
| Default Java | 1.8.0_212 |
|  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-908/5/testReport/ |
| Max. process+thread count | 412 (vs. ulimit of 5500) |
| modules | C: 

[jira] [Commented] (YARN-7427) NullPointerException in ResourceInfo when queue has not used label

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902521#comment-16902521
 ] 

Hadoop QA commented on YARN-7427:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 57s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 52s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 95m 50s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}151m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesReservation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.0 Server=19.03.0 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-7427 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12895929/YARN-7427.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux a5931b5287fe 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 11f750e |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/24491/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24491/testReport/ |
| Max. 

[jira] [Commented] (YARN-7427) NullPointerException in ResourceInfo when queue has not used label

2019-08-07 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902454#comment-16902454
 ] 

Jonathan Hung commented on YARN-7427:
-

Nope, no objections. Thanks for following up on this!

> NullPointerException in ResourceInfo when queue has not used label
> --
>
> Key: YARN-7427
> URL: https://issues.apache.org/jira/browse/YARN-7427
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-7427.001.patch, YARN-7427.002.patch
>
>
> {noformat}Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:65)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:164)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:107)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:96)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
> at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
> at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
> at 
> org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$LI.__(Hamlet.java:7709)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:301)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
> at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
> at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
> at 
> org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$LI.__(Hamlet.java:7709)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:470)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
> at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
> at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
> at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
> at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
> at 
> org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.scheduler(RmController.java:86)
> ... 56 more{noformat}
> For example, configure: {noformat}
>   <property>
>     <name>yarn.scheduler.capacity.root.queues</name>
>     <value>default,a</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.accessible-node-labels</name>
>     <value>x</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.default.accessible-node-labels</name>
>     <value>x</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.default.accessible-node-labels.x.maximum-capacity</name>
>     <value>100</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.default.accessible-node-labels.x.capacity</name>
>     <value>100</value>
>   </property>{noformat}
> Then the above exception is thrown when refreshing the scheduler UI 
> (/cluster/scheduler).
> As a result, the queue block dropdowns cannot be minimized.
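
The trace points at ResourceInfo.toString(). A null-safe variant would look 
roughly like this (a sketch assuming a lazily populated resource field is 
dereferenced there; not the actual patch):
{code:java}
import org.apache.hadoop.yarn.api.records.Resource;

// Hypothetical guard (sketch, not the actual patch): render a placeholder
// when no Resource was ever recorded for a label this queue has not used.
static String safeToString(Resource resource) {
  return resource == null ? "<memory:0, vCores:0>" : resource.toString();
}
{code}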






[jira] [Commented] (YARN-7427) NullPointerException in ResourceInfo when queue has not used label

2019-08-07 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902424#comment-16902424
 ] 

Eric Payne commented on YARN-7427:
--

[~jhung], this appears to be fixed by YARN-9685. Any objections if I dup this 
ticket to that one?

> NullPointerException in ResourceInfo when queue has not used label
> --
>
> Key: YARN-7427
> URL: https://issues.apache.org/jira/browse/YARN-7427
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: YARN-7427.001.patch, YARN-7427.002.patch
>
>
> {noformat}Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo.toString(ResourceInfo.java:65)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:164)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:107)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:96)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
> at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
> at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
> at 
> org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$LI.__(Hamlet.java:7709)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:301)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
> at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
> at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
> at 
> org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$LI.__(Hamlet.java:7709)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:470)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
> at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
> at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
> at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
> at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:71)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
> at 
> org.apache.hadoop.yarn.webapp.Controller.render(Controller.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.RmController.scheduler(RmController.java:86)
> ... 56 more{noformat}
> For example, configure: {noformat}
>   <property>
>     <name>yarn.scheduler.capacity.root.queues</name>
>     <value>default,a</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.accessible-node-labels</name>
>     <value>x</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.default.accessible-node-labels</name>
>     <value>x</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.default.accessible-node-labels.x.maximum-capacity</name>
>     <value>100</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.default.accessible-node-labels.x.capacity</name>
>     <value>100</value>
>   </property>{noformat}
> Then the above exception is thrown when refreshing the scheduler UI 
> (/cluster/scheduler).
> As a result, the queue block dropdowns cannot be minimized.






[jira] [Commented] (YARN-9685) NPE when rendering the info table of leaf queue in non-accessible partitions

2019-08-07 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902422#comment-16902422
 ] 

Eric Payne commented on YARN-9685:
--

Thanks a lot, [~Tao Yang], for raising this issue and providing the patch. It 
has been an annoyance to me for a while but I haven't had a chance to address 
it.

+1. I'll commit tomorrow if no one has objections.

> NPE when rendering the info table of leaf queue in non-accessible partitions
> 
>
> Key: YARN-9685
> URL: https://issues.apache.org/jira/browse/YARN-9685
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.3.0
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Major
> Attachments: YARN-9685.001.patch
>
>
> I found incomplete queue info shown on the scheduler page and an NPE in the 
> RM log when rendering the info table of a leaf queue in non-accessible 
> partitions.
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:108)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:97)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
> at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
> at org.apache.hadoop.yarn.webapp.View.render(View.java:243)
> {noformat}
> The direct cause is that the PartitionQueueCapacitiesInfo of leaf queues in 
> non-accessible partitions is incomplete (some fields, such as 
> configuredMinResource/configuredMaxResource/effectiveMinResource/effectiveMaxResource,
>  are null), but some places in CapacitySchedulerPage don't account for that 
> (see the sketch below).
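
One way to harden the rendering (a sketch with a hypothetical helper, not the 
actual patch):
{code:java}
import org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.ResourceInfo;

// Hypothetical guard: capacity fields of a non-accessible partition are
// never populated, so fall back instead of dereferencing a null.
static String formatCapacity(ResourceInfo configuredMinResource) {
  return configuredMinResource == null ? "N/A" : configuredMinResource.toString();
}
{code}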






[jira] [Commented] (YARN-9442) container working directory has group read permissions

2019-08-07 Thread Jim Brennan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902405#comment-16902405
 ] 

Jim Brennan commented on YARN-9442:
---

[~ebadger], I've put up a new patch that applies to trunk.

 

> container working directory has group read permissions
> --
>
> Key: YARN-9442
> URL: https://issues.apache.org/jira/browse/YARN-9442
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.2.2
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Minor
> Attachments: YARN-9442.001.patch, YARN-9442.002.patch, 
> YARN-9442.003.patch
>
>
> Container working directories are currently created with permissions 0750, 
> owned by the user and with the group set to the node manager group.
> Is there any reason why these directories need group read permissions?
> I have been testing with group read permissions removed and so far I haven't 
> encountered any problems.
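
For illustration only (the real change lives in the container executor, and 
the final mode is the patch's decision): dropping group read from 0750 gives 
0710, e.g. via Java NIO:
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;

// Hypothetical sketch: create a container work dir as rwx--x--- (0710),
// i.e. 0750 with the group read bit removed.
static Path createWorkDir(Path base, String containerId) throws IOException {
  return Files.createDirectory(
      base.resolve(containerId),
      PosixFilePermissions.asFileAttribute(
          PosixFilePermissions.fromString("rwx--x---")));
}
{code}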






[jira] [Commented] (YARN-9442) container working directory has group read permissions

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902378#comment-16902378
 ] 

Hadoop QA commented on YARN-9442:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
8s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
38m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 53s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m  
7s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-9442 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976956/YARN-9442.003.patch |
| Optional Tests |  dupname  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 595d7d8b12b5 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 827dbb1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24490/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 5500) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/24490/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> container working directory has group read permissions
> --
>
> Key: YARN-9442
> URL: https://issues.apache.org/jira/browse/YARN-9442
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.2.2
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Minor
> Attachments: YARN-9442.001.patch, YARN-9442.002.patch, 
> YARN-9442.003.patch
>
>
> Container working directories are currently created with permissions 0750, 
> owned by the user and with the group set to the node manager group.
> Is there any reason why these directories need group read permissions?
> I have been testing with group read permissions 

[jira] [Updated] (YARN-9442) container working directory has group read permissions

2019-08-07 Thread Jim Brennan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated YARN-9442:
--
Attachment: YARN-9442.003.patch

> container working directory has group read permissions
> --
>
> Key: YARN-9442
> URL: https://issues.apache.org/jira/browse/YARN-9442
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.2.2
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Minor
> Attachments: YARN-9442.001.patch, YARN-9442.002.patch, 
> YARN-9442.003.patch
>
>
> Container working directories are currently created with permissions 0750, 
> owned by the user and with the group set to the node manager group.
> Is there any reason why these directories need group read permissions?
> I have been testing with group read permissions removed and so far I haven't 
> encountered any problems.






[jira] [Commented] (YARN-9719) Failed to restart yarn-service if it doesn’t exist in RM

2019-08-07 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16902199#comment-16902199
 ] 

Eric Yang commented on YARN-9719:
-

[~kyungwan nam] The failed test cases are related to this patch.  Could you 
take a look?  Thanks

> Failed to restart yarn-service if it doesn’t exist in RM
> 
>
> Key: YARN-9719
> URL: https://issues.apache.org/jira/browse/YARN-9719
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: kyungwan nam
>Assignee: kyungwan nam
>Priority: Major
> Attachments: YARN-9719.001.patch, YARN-9719.002.patch, 
> YARN-9719.003.patch
>
>
> Sometimes, restarting a yarn-service fails as follows.
> {code}
> {"diagnostics":"Application with id 'application_1562735362534_10461' doesn't 
> exist in RM. Please check that the job submission was successful.\n\tat 
> org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:382)\n\tat
>  
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:234)\n\tat
>  
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:561)\n\tat
>  
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)\n\tat
>  org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)\n\tat 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)\n\tat 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)\n\tat 
> java.security.AccessController.doPrivileged(Native Method)\n\tat 
> javax.security.auth.Subject.doAs(Subject.java:422)\n\tat 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)\n\tat
>  org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)\n"}
> {code}
> It seems to occur when restarting a yarn-service that was stopped long ago.
> By default, the RM keeps up to 1000 completed applications 
> (yarn.resourcemanager.max-completed-applications); see the sketch below.
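
For reference, that retention limit is configurable in yarn-site.xml; a 
cluster that restarts long-stopped services could raise it (10000 below is an 
arbitrary example value):
{code:xml}
<property>
  <!-- Default is 1000. Once a completed app is evicted from the RM,
       getApplicationReport() fails with the error shown above. -->
  <name>yarn.resourcemanager.max-completed-applications</name>
  <value>10000</value>
</property>
{code}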






[jira] [Updated] (YARN-9728)  ResourceManager REST API can produce an illegal xml response

2019-08-07 Thread Thomas (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas updated YARN-9728:
-
Description: 
When a Spark job throws an exception with a message containing a character out 
of the range supported by XML 1.0, the application fails and the stack trace 
is stored into the {{diagnostics}} field. So far, so good.

But an issue occurs when we try to get application information with the 
ResourceManager REST API: the XML response will contain the illegal XML 1.0 
character and will be invalid.



 *+Examples of illegal characters in XML 1.0:+* 
 * \u0001
 * \u0002
 * \u0003
 * \u0004

_For more information about supported characters:_
[https://www.w3.org/TR/xml/#charsets]



*+Example of an illegal response from the ResourceManager API:+* 
{code:xml}
<app>
  <id>application_1326821518301_0005</id>
  <user>user1</user>
  <name>job</name>
  <queue>a1</queue>
  <state>FINISHED</state>
  <finalStatus>FAILED</finalStatus>
  <progress>100.0</progress>
  <trackingUI>History</trackingUI>
  <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
  <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
  at com..main(JobWithSpecialCharMain.java:6)
  [...]</diagnostics>
</app>
{code}
 

*+Example of a job to reproduce:+*
{code:java}
public class JobWithSpecialCharMain {

 public static void main(String[] args) throws Exception {
  throw new Exception("\u0001");
 }

}
{code}

  was:
When a Spark job throws an exception with a message containing a character out 
of the range supported by XML 1.0, the application will fail and the stack 
trace will be stored into the {{diagnostics}} field. So far, so good.

But an issue occurs when we try to get application information with the 
ResourceManager REST API: the XML response will contain the illegal XML 1.0 
character and will be invalid.



 *+Examples of illegal characters in XML 1.0:+* 
 * \u0001
 * \u0002
 * \u0003
 * \u0004

_For more information about supported characters:_
[https://www.w3.org/TR/xml/#charsets]



*+Example of an illegal response from the ResourceManager API:+* 
{code:xml}
<app>
  <id>application_1326821518301_0005</id>
  <user>user1</user>
  <name>job</name>
  <queue>a1</queue>
  <state>FINISHED</state>
  <finalStatus>FAILED</finalStatus>
  <progress>100.0</progress>
  <trackingUI>History</trackingUI>
  <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
  <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
  at com..main(JobWithSpecialCharMain.java:6)
  [...]</diagnostics>
</app>
{code}
 

*+Example of a job to reproduce:+*
{code:java}
public class JobWithSpecialCharMain {

 public static void main(String[] args) throws Exception {
  throw new Exception("\u0001");
 }

}
{code}


>  ResourceManager REST API can produce an illegal xml response
> -
>
> Key: YARN-9728
> URL: https://issues.apache.org/jira/browse/YARN-9728
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, resourcemanager
>Affects Versions: 2.7.3
>Reporter: Thomas
>Priority: Major
>
> When a Spark job throws an exception with a message containing a character 
> out of the range supported by XML 1.0, the application fails and the stack 
> trace is stored into the {{diagnostics}} field. So far, so good.
> But an issue occurs when we try to get application information with the 
> ResourceManager REST API: the XML response will contain the illegal XML 1.0 
> character and will be invalid.
>  *+Examples of illegal characters in XML 1.0:+* 
>  * \u0001
>  * \u0002
>  * \u0003
>  * \u0004
> _For more information about supported characters:_
> [https://www.w3.org/TR/xml/#charsets]
> *+Example of an illegal response from the ResourceManager API:+* 
> {code:xml}
> <app>
>   <id>application_1326821518301_0005</id>
>   <user>user1</user>
>   <name>job</name>
>   <queue>a1</queue>
>   <state>FINISHED</state>
>   <finalStatus>FAILED</finalStatus>
>   <progress>100.0</progress>
>   <trackingUI>History</trackingUI>
>   <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
>   <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
>   at com..main(JobWithSpecialCharMain.java:6)
>   [...]</diagnostics>
> </app>
> {code}
>  
> *+Example of a job to reproduce:+*
> {code:java}
> public class JobWithSpecialCharMain {
>  public static void main(String[] args) throws Exception {
>   throw new Exception("\u0001");
>  }
> }
> {code}
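
A common remedy (a sketch only, not something YARN does today) is to strip 
characters outside the XML 1.0 Char production before emitting fields such as 
{{diagnostics}}:
{code:java}
// Keeps only the XML 1.0 Char production (https://www.w3.org/TR/xml/#charsets):
// #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF].
static String stripInvalidXml10Chars(String s) {
  StringBuilder out = new StringBuilder(s.length());
  for (int i = 0; i < s.length(); ) {
    int cp = s.codePointAt(i);
    boolean valid = cp == 0x9 || cp == 0xA || cp == 0xD
        || (cp >= 0x20 && cp <= 0xD7FF)
        || (cp >= 0xE000 && cp <= 0xFFFD)
        || cp >= 0x10000; // codePointAt never returns more than 0x10FFFF
    if (valid) {
      out.appendCodePoint(cp);
    }
    i += Character.charCount(cp);
  }
  return out.toString();
}
{code}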






[jira] [Updated] (YARN-9728)  ResourceManager REST API can produce an illegal xml response

2019-08-07 Thread Thomas (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas updated YARN-9728:
-
Description: 
When a Spark job throws an exception with a message containing a character out 
of the range supported by XML 1.0, the application will fail and the stack 
trace will be stored into the {{diagnostics}} field. So far, so good.

But an issue occurs when we try to get application information with the 
ResourceManager REST API: the XML response will contain the illegal XML 1.0 
character and will be invalid.



 *+Examples of illegal characters in XML 1.0:+* 
 * \u0001
 * \u0002
 * \u0003
 * \u0004

_For more information about supported characters:_
[https://www.w3.org/TR/xml/#charsets]



*+Example of an illegal response from the ResourceManager API:+* 
{code:xml}
<app>
  <id>application_1326821518301_0005</id>
  <user>user1</user>
  <name>job</name>
  <queue>a1</queue>
  <state>FINISHED</state>
  <finalStatus>FAILED</finalStatus>
  <progress>100.0</progress>
  <trackingUI>History</trackingUI>
  <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
  <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
  at com..main(JobWithSpecialCharMain.java:6)
  [...]</diagnostics>
</app>
{code}
 

*+Example of a job to reproduce:+*
{code:java}
public class JobWithSpecialCharMain {

 public static void main(String[] args) throws Exception {
  throw new Exception("\u0001");
 }

}
{code}

  was:
When a Spark job throws an exception with a message containing a character out 
of the range supported by XML 1.0, the application will fail and the stack 
trace will be stored into the "diagnostics" field. So far, so good.

But an issue occurs when we try to get application information with the 
ResourceManager REST API: the XML response will contain the illegal XML 1.0 
character and will be invalid.



 *+Examples of illegal characters in XML 1.0:+* 
 * \u0001
 * \u0002
 * \u0003
 * \u0004

_For more information about supported characters:_
[https://www.w3.org/TR/xml/#charsets]



*+Example of an illegal response from the ResourceManager API:+* 
{code:xml}
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<app>
  <id>application_1326821518301_0005</id>
  <user>user1</user>
  <name>job</name>
  <queue>a1</queue>
  <state>FINISHED</state>
  <finalStatus>FAILED</finalStatus>
  <progress>100.0</progress>
  <trackingUI>History</trackingUI>
  <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
  <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
  at com..main(JobWithSpecialCharMain.java:6)
  [...]</diagnostics>
</app>
{code}
 

*+Example of a job to reproduce:+*
{code:java}
public class JobWithSpecialCharMain {

 public static void main(String[] args) throws Exception {
  throw new Exception("\u0001");
 }

}
{code}


>  ResourceManager REST API can produce an illegal xml response
> -
>
> Key: YARN-9728
> URL: https://issues.apache.org/jira/browse/YARN-9728
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api, resourcemanager
>Affects Versions: 2.7.3
>Reporter: Thomas
>Priority: Major
>
> When a Spark job throws an exception whose message contains a character
> outside the range supported by XML 1.0, the application fails and the stack
> trace is stored in the {{diagnostics}} field. So far, so good.
> But the issue occurs when we try to get application information with the
> ResourceManager REST API: the XML response will contain the illegal XML 1.0
> character and will be invalid.
>  *+Examples of illegal characters in XML 1.0:+*
>  * \u0001
>  * \u0002
>  * \u0003
>  * \u0004
> _For more information about supported characters:_
> [https://www.w3.org/TR/xml/#charsets]
> *+Example of an illegal response from the ResourceManager API:+*
> {code:xml}
> <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
> <app>
>   <id>application_1326821518301_0005</id>
>   <user>user1</user>
>   <name>job</name>
>   <queue>a1</queue>
>   <state>FINISHED</state>
>   <finalStatus>FAILED</finalStatus>
>   <progress>100.0</progress>
>   <trackingUI>History</trackingUI>
>   <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
>   <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
>   at com..main(JobWithSpecialCharMain.java:6)
>   [...]</diagnostics>
> </app>
> {code}
>  
> *+Example of a job to reproduce:+*
> {code:java}
> public class JobWithSpecialCharMain {
>  public static void main(String[] args) throws Exception {
>   throw new Exception("\u0001");
>  }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9728)  ResourceManager REST API can produce an illegal xml response

2019-08-07 Thread Thomas (JIRA)
Thomas created YARN-9728:


 Summary:  ResourceManager REST API can produce an illegal xml 
response
 Key: YARN-9728
 URL: https://issues.apache.org/jira/browse/YARN-9728
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api, resourcemanager
Affects Versions: 2.7.3
Reporter: Thomas


When a Spark job throws an exception whose message contains a character
outside the range supported by XML 1.0, the application fails and the stack
trace is stored in the "diagnostics" field. So far, so good.

But the issue occurs when we try to get application information with the
ResourceManager REST API: the XML response will contain the illegal XML 1.0
character and will be invalid.



 *+Examples of illegal characters in XML 1.0:+*
 * \u0001
 * \u0002
 * \u0003
 * \u0004

_For more information about supported characters:_
[https://www.w3.org/TR/xml/#charsets]



*+Example of an illegal response from the ResourceManager API:+*
{code:xml}
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<app>
  <id>application_1326821518301_0005</id>
  <user>user1</user>
  <name>job</name>
  <queue>a1</queue>
  <state>FINISHED</state>
  <finalStatus>FAILED</finalStatus>
  <progress>100.0</progress>
  <trackingUI>History</trackingUI>
  <trackingUrl>http://host.domain.com:8088/proxy/application_1326821518301_0005/jobhistory/job/job_1326821518301_5_5</trackingUrl>
  <diagnostics>Exception in thread "main" java.lang.Exception: \u0001
  at com..main(JobWithSpecialCharMain.java:6)
  [...]</diagnostics>
</app>
{code}
 

*+Example of a job to reproduce:+*
{code:java}
public class JobWithSpecialCharMain {

 public static void main(String[] args) throws Exception {
  throw new Exception("\u0001");
 }

}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9727) Allowed Origin pattern is discouraged if regex contains *

2019-08-07 Thread Zoltan Siegl (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902153#comment-16902153
 ] 

Zoltan Siegl commented on YARN-9727:


There is an existing test case for the given problem, where the warning used
to be logged. Now that the warning is gone, no new test case is necessary.

> Allowed Origin pattern is discouraged if regex contains *
> -
>
> Key: YARN-9727
> URL: https://issues.apache.org/jira/browse/YARN-9727
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Minor
> Attachments: YARN-9727.001.patch
>
>
> Since HADOOP-14908, if the allowed-origins regex contains any * characters,
> an incorrect warning log is triggered: "Allowed Origin pattern
> 'regex:.*[.]example[.]com' is discouraged, use the 'regex:' prefix and use a
> Java regular expression instead."
>  
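
For reference, a minimal sketch of the intended check (illustrative only, not
the actual CrossOriginFilter code): a * should only be discouraged when the
pattern is not already declared with the regex: prefix.

{code:java}
// Minimal sketch, not the actual CrossOriginFilter code: only discourage
// '*' when the pattern is not already a "regex:" pattern and is not the
// plain wildcard origin.
public class AllowedOriginCheck {

  static boolean isDiscouraged(String allowedOrigin) {
    return allowedOrigin.contains("*")
        && !allowedOrigin.equals("*")
        && !allowedOrigin.startsWith("regex:");
  }

  public static void main(String[] args) {
    // Triggered the bogus warning before the fix; should not warn:
    System.out.println(isDiscouraged("regex:.*[.]example[.]com")); // false
    // A literal origin with a wildcard is still discouraged:
    System.out.println(isDiscouraged("http://*.example.com"));     // true
  }
}
{code}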



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9727) Allowed Origin pattern is discouraged if regex contains *

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902077#comment-16902077
 ] 

Hadoop QA commented on YARN-9727:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 45s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 12 unchanged - 0 fixed = 13 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
17s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
50s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | YARN-9727 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976929/YARN-9727.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 47c6185e4a0e 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9cd211a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/24489/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24489/testReport/ |
| Max. process+thread count | 1463 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console 

[jira] [Commented] (YARN-9442) container working directory has group read permissions

2019-08-07 Thread Jim Brennan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902072#comment-16902072
 ] 

Jim Brennan commented on YARN-9442:
---

Thanks [~ebadger].  The current patch no longer applies, so I will put up a new 
one (hopefully later today).

> container working directory has group read permissions
> --
>
> Key: YARN-9442
> URL: https://issues.apache.org/jira/browse/YARN-9442
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.2.2
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Minor
> Attachments: YARN-9442.001.patch, YARN-9442.002.patch
>
>
> Container working directories are currently created with permissions 0750, 
> owned by the user and with the group set to the node manager group.
> Is there any reason why these directories need group read permissions?
> I have been testing with group read permissions removed and so far I haven't 
> encountered any problems.
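
For illustration only (the real directories are created by the native
container-executor, and 0710 is an assumption on my part; the JIRA only says
"group read permissions removed"), the difference amounts to:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Illustration only: 0750 (current) vs. 0710 (group read removed, group
// execute kept for directory traversal). The real change would be in the
// native container-executor, not Java.
public class ContainerDirPerms {
  public static void main(String[] args) throws IOException {
    Set<PosixFilePermission> current =
        PosixFilePermissions.fromString("rwxr-x---");   // 0750
    Set<PosixFilePermission> proposed =
        PosixFilePermissions.fromString("rwx--x---");   // 0710 (assumed)
    Path dir = Files.createTempDirectory("container-work-dir");
    Files.setPosixFilePermissions(dir, proposed);
    System.out.println("current : " + PosixFilePermissions.toString(current));
    System.out.println("proposed: " + PosixFilePermissions.toString(
        Files.getPosixFilePermissions(dir)));
  }
}
{code}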



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8045) Reduce log output from container status calls

2019-08-07 Thread Shane Kumpf (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902067#comment-16902067
 ] 

Shane Kumpf commented on YARN-8045:
---

+1 on the 2.8 patch. Feel free to go ahead with committing it.

> Reduce log output from container status calls
> -
>
> Key: YARN-8045
> URL: https://issues.apache.org/jira/browse/YARN-8045
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Shane Kumpf
>Assignee: Craig Condit
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.0.4, 2.8.6, 2.9.3, 3.1.3
>
> Attachments: YARN-8045.001-branch-2.8.patch, YARN-8045.001.patch
>
>
> Each time a container's status is returned, a log entry is produced in the NM
> from {{ContainerManagerImpl}}. The container status includes the diagnostics
> field for the container. If the diagnostics field contains an exception, it
> can appear as if the exception is logged repeatedly every second. The
> diagnostics message can also span many lines, which puts pressure on the logs
> and makes them harder to read.
> For example:
> {code}
> 2018-03-17 22:01:11,632 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Getting container-status for container_e01_1521323860653_0001_01_05
> 2018-03-17 22:01:11,632 INFO 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Returning ContainerStatus: [ContainerId: 
> container_e01_1521323860653_0001_01_05, ExecutionType: GUARANTEED, State: 
> RUNNING, Capability: , Diagnostics: [2018-03-17 
> 22:01:00.675]Exception from container-launch.
> Container id: container_e01_1521323860653_0001_01_05
> Exit code: -1
> Exception message: 
> Shell ouput: 
> [2018-03-17 22:01:00.750]Diagnostic message from attempt :
> [2018-03-17 22:01:00.750]Container exited with a non-zero exit code -1.
> , ExitStatus: -1, IP: null, Host: null, ContainerSubState: SCHEDULED]
> {code}
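
A minimal sketch of one way to cut this down (illustrative, not necessarily
what the committed patch does): keep the per-call INFO line short and put the
full status, with its multi-line diagnostics, behind a DEBUG guard.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch only, not the actual YARN-8045 patch.
public class StatusLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(StatusLogging.class);

  static void logStatus(String containerId, String state, String fullStatus) {
    // One short line per call at INFO...
    LOG.info("Returning ContainerStatus for {} in state {}",
        containerId, state);
    // ...and the full, potentially multi-line status only at DEBUG.
    if (LOG.isDebugEnabled()) {
      LOG.debug("Full ContainerStatus: {}", fullStatus);
    }
  }
}
{code}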



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9134) No test coverage for redefining FPGA / GPU resource types in TestResourceUtils

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902044#comment-16902044
 ] 

Hadoop QA commented on YARN-9134:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
41s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 36s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 2 
new + 29 unchanged - 0 fixed = 31 total (was 29) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: 
The patch generated 0 new + 3 unchanged - 6 fixed = 3 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
30s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-9134 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976927/YARN-9134.003.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2c510fd42315 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9cd211a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/24488/artifact/out/diff-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/24488/testReport/ |
| Max. process+thread count | 341 (vs. ulimit of 5500) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 

[jira] [Commented] (YARN-9217) Nodemanager will fail to start if GPU is misconfigured on the node or GPU drivers missing

2019-08-07 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902031#comment-16902031
 ] 

Hadoop QA commented on YARN-9217:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 43s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 8 new + 16 unchanged - 2 fixed = 24 total (was 18) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 12s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 14s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
|  |  Possible null pointer dereference in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer.getFileNameFromFile(File)
 due to return value of called method  Dereferenced at 
GpuDiscoverer.java:org.apache.hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.GpuDiscoverer.getFileNameFromFile(File)
 due to return value of called method  Dereferenced at GpuDiscoverer.java:[line 
349] |
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.linux.resources.gpu.TestGpuResourceHandlerImpl
 |
|   | 
hadoop.yarn.server.nodemanager.containermanager.resourceplugin.gpu.TestGpuDiscoverer
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | YARN-9217 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12976925/YARN-9217.006.patch |
| Optional Tests |  dupname  asflicense 

[jira] [Commented] (YARN-9721) An easy method to exclude a nodemanager from the yarn cluster cleanly

2019-08-07 Thread Zac Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902027#comment-16902027
 ] 

Zac Zhou commented on YARN-9721:


Thanks, [~tangzhankun],

I prefer methods 1 and 2.

Method 2 can work the same as method 3 when the time period parameter is set
to 0.

Method 1 needs to add a member variable to RefreshNodesRequest and RMNode,
which would involve a bit more work.

I'm OK with either method.

> An easy method to exclude a nodemanager from the yarn cluster cleanly
> -
>
> Key: YARN-9721
> URL: https://issues.apache.org/jira/browse/YARN-9721
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zac Zhou
>Priority: Major
> Attachments: decommission nodes.png
>
>
> If we want to take a nodemanager server offline, nodes.exclude-path
>  and the "rmadmin -refreshNodes" command are used to decommission the server.
>  But this method cannot clean up the node completely. Nodemanager servers
> still appear under Decommissioned Nodes, as the attachment shows.
>   !decommission nodes.png!
> YARN-4311 enabled a removalTimer to clean up untracked nodes.
>  But the logic of the isUntrackedNode method is too restrictive: if
> include-path is not used, no servers can meet the criteria, and using an
> include file would introduce a potential maintenance risk.
> If the yarn cluster is installed in the cloud, nodemanager servers are
> created and deleted frequently. We need a way to exclude a nodemanager from
> the yarn cluster cleanly. Otherwise, the map of rmContext.getInactiveRMNodes()
> would keep growing, which would cause a memory issue in the RM.
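
For context, a sketch of the check being discussed (illustrative, not the
actual NodesListManager code) and why it never matches without an include
file:

{code:java}
import java.util.Set;

// Illustrative sketch, not the actual NodesListManager code: a node only
// counts as untracked when an include list is configured and the node
// appears in neither list -- so with no include file, nothing is ever
// cleaned up.
public class UntrackedNodeCheck {
  static boolean isUntrackedNode(String host, Set<String> includeHosts,
      Set<String> excludeHosts) {
    return !includeHosts.isEmpty()
        && !includeHosts.contains(host)
        && !excludeHosts.contains(host);
  }
}
{code}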



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9727) Allowed Origin pattern is discouraged if regex contains *

2019-08-07 Thread Zoltan Siegl (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Siegl updated YARN-9727:
---
Attachment: YARN-9727.001.patch

> Allowed Origin pattern is discouraged if regex contains *
> -
>
> Key: YARN-9727
> URL: https://issues.apache.org/jira/browse/YARN-9727
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Minor
> Attachments: YARN-9727.001.patch
>
>
> Since HADOOP-14908, if the allowed-origins regex contains any * characters,
> an incorrect warning log is triggered: "Allowed Origin pattern
> 'regex:.*[.]example[.]com' is discouraged, use the 'regex:' prefix and use a
> Java regular expression instead."
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Moved] (YARN-9727) Allowed Origin pattern is discouraged if regex contains *

2019-08-07 Thread Zoltan Siegl (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Siegl moved HADOOP-16497 to YARN-9727:
-

Key: YARN-9727  (was: HADOOP-16497)
Project: Hadoop YARN  (was: Hadoop Common)

> Allowed Origin pattern is discouraged if regex contains *
> -
>
> Key: YARN-9727
> URL: https://issues.apache.org/jira/browse/YARN-9727
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zoltan Siegl
>Assignee: Zoltan Siegl
>Priority: Minor
>
> Since HADOOP-14908, if the allowed-origins regex contains any * characters,
> an incorrect warning log is triggered: "Allowed Origin pattern
> 'regex:.*[.]example[.]com' is discouraged, use the 'regex:' prefix and use a
> Java regular expression instead."
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9134) No test coverage for redefining FPGA / GPU resource types in TestResourceUtils

2019-08-07 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9134:
---
Attachment: YARN-9134.003.patch

> No test coverage for redefining FPGA / GPU resource types in TestResourceUtils
> --
>
> Key: YARN-9134
> URL: https://issues.apache.org/jira/browse/YARN-9134
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9134.001.patch, YARN-9134.002.patch, 
> YARN-9134.003.patch
>
>
> The patch also includes some trivial code cleanup.
> Also, setupResourceTypes has been deprecated as it is dangerous to use; see
> the javadoc for details.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9134) No test coverage for redefining FPGA / GPU resource types in TestResourceUtils

2019-08-07 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16902007#comment-16902007
 ] 

Peter Bacsko commented on YARN-9134:


Rebased the patch.

> No test coverage for redefining FPGA / GPU resource types in TestResourceUtils
> --
>
> Key: YARN-9134
> URL: https://issues.apache.org/jira/browse/YARN-9134
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9134.001.patch, YARN-9134.002.patch, 
> YARN-9134.003.patch
>
>
> The patch also includes some trivial code cleanup.
> Also, setupResourceTypes has been deprecated as it is dangerous to use; see
> the javadoc for details.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9217) Nodemanager will fail to start if GPU is misconfigured on the node or GPU drivers missing

2019-08-07 Thread Peter Bacsko (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901986#comment-16901986
 ] 

Peter Bacsko commented on YARN-9217:


This patch needed yet another rebase due to renames.

> Nodemanager will fail to start if GPU is misconfigured on the node or GPU 
> drivers missing
> -
>
> Key: YARN-9217
> URL: https://issues.apache.org/jira/browse/YARN-9217
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9217.001.patch, YARN-9217.002.patch, 
> YARN-9217.003.patch, YARN-9217.004.patch, YARN-9217.005.patch, 
> YARN-9217.006.patch
>
>
> Nodemanager will not start:
> 1. If autodiscovery is enabled:
>  * If the nvidia-smi path is misconfigured or the file does not exist.
>  * If 0 GPUs are found.
>  * If the file exists but does not point to an nvidia-smi binary.
>  * If the binary is OK but an IOException occurs.
> 2. If the manually configured GPU devices are misconfigured:
>  * Any index:minor number format failure will cause a problem.
>  * 0 configured devices will cause a problem.
>  * NumberFormatException is not handled.
> It would be a better option to add warnings about the configuration, set 0
> available GPUs, and let the node keep working and run non-GPU jobs.
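
A minimal sketch of the proposed behaviour (class and method names are
illustrative, not the actual GpuDiscoverer API): catch discovery failures,
log a warning, and report zero GPUs instead of aborting NodeManager startup.

{code:java}
import java.io.IOException;
import java.util.Collections;
import java.util.List;

// Illustrative sketch, not the actual GpuDiscoverer code: treat any
// discovery failure as "0 GPUs available" with a warning instead of
// failing NodeManager startup.
public class GpuDiscoverySketch {

  List<String> discoverGpusOrEmpty() {
    try {
      List<String> gpus = runNvidiaSmi(); // may throw on bad config
      if (gpus.isEmpty()) {
        System.err.println("WARN: 0 GPUs found; node runs non-GPU jobs only.");
      }
      return gpus;
    } catch (IOException | NumberFormatException e) {
      System.err.println("WARN: GPU discovery failed (" + e
          + "); node runs non-GPU jobs only.");
      return Collections.emptyList();
    }
  }

  private List<String> runNvidiaSmi() throws IOException {
    // Placeholder for the real nvidia-smi invocation and parsing.
    return Collections.emptyList();
  }
}
{code}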



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9217) Nodemanager will fail to start if GPU is misconfigured on the node or GPU drivers missing

2019-08-07 Thread Peter Bacsko (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko updated YARN-9217:
---
Attachment: YARN-9217.006.patch

> Nodemanager will fail to start if GPU is misconfigured on the node or GPU 
> drivers missing
> -
>
> Key: YARN-9217
> URL: https://issues.apache.org/jira/browse/YARN-9217
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Peter Bacsko
>Priority: Major
> Attachments: YARN-9217.001.patch, YARN-9217.002.patch, 
> YARN-9217.003.patch, YARN-9217.004.patch, YARN-9217.005.patch, 
> YARN-9217.006.patch
>
>
> Nodemanager will not start:
> 1. If autodiscovery is enabled:
>  * If the nvidia-smi path is misconfigured or the file does not exist.
>  * If 0 GPUs are found.
>  * If the file exists but does not point to an nvidia-smi binary.
>  * If the binary is OK but an IOException occurs.
> 2. If the manually configured GPU devices are misconfigured:
>  * Any index:minor number format failure will cause a problem.
>  * 0 configured devices will cause a problem.
>  * NumberFormatException is not handled.
> It would be a better option to add warnings about the configuration, set 0
> available GPUs, and let the node keep working and run non-GPU jobs.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9725) [YARN UI2] Running Containers Logs from NM Local Dir are not shown in Applications - Logs Section

2019-08-07 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9725:

Description: 
[YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
Applications - Logs Section. It shows only the aggregated log files for that 
container and does not show the log files which are present under NM Local Dir. 
YARN UI V1 was showing the log files from NM local dir.

On analysis, we found that UI2 calls AHSWebServices
/containers/{containerid}/logs without nm.id, so AHSWebServices does not fetch
from the NodeManager web services; it fetches only from the aggregated app log
dir.

{color:#14892c}*UI2 Shows Only Aggregated Logs*{color}

!Running_Container_Logs.png|height=200!

{color:#14892c}*NM Local Dir Logs which are not shown*{color}

!NM_Local_Dir.png|height=200!

{color:#14892c}*UI1 shows local dir logs*{color}

!YARN_UI_V1.png|height=200!

{color:#14892c}*UI2 does not show log for Container_2*{color}

!Running_Container2_UI.png|height=200!

{color:#14892c}*Container_2 has logs under NM Local Dir*{color}

!Running_Container_Log_Dir.png|height=200!





  was:
[YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
Applications - Logs Section. It shows only the aggregated log files for that 
container and does not show the log files which are present under NM Local Dir. 
YARN UI V1 was showing the log files from NM local dir.

{color:#14892c}*UI2 Shows Only Aggregated Logs*{color}

!Running_Container_Logs.png|height=200!

{color:#14892c}*NM Local Dir Logs which are not shown*{color}

!NM_Local_Dir.png|height=200!

{color:#14892c}*UI1 shows local dir logs*{color}

!YARN_UI_V1.png|height=200!

{color:#14892c}*UI2 does not show log for Container_2*{color}

!Running_Container2_UI.png|height=200!

{color:#14892c}*Container_2 has logs under NM Local Dir*{color}

!Running_Container_Log_Dir.png|height=200!



On analysis, we found that UI2 calls AHSWebServices
/containers/{containerid}/logs without nm.id, so AHSWebServices does not fetch
from the NodeManager web services; it fetches only from the aggregated app log
dir.


> [YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
> Applications - Logs Section
> -
>
> Key: YARN-9725
> URL: https://issues.apache.org/jira/browse/YARN-9725
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Major
> Attachments: NM_Local_Dir.png, Running_Container2_UI.png, 
> Running_Container_Log_Dir.png, Running_Container_Logs.png, YARN_UI_V1.png
>
>
> [YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
> Applications - Logs Section. It shows only the aggregated log files for that 
> container and does not show the log files which are present under NM Local 
> Dir. YARN UI V1 was showing the log files from NM local dir.
> On analysis, we found that UI2 calls AHSWebServices
> /containers/{containerid}/logs without nm.id, so AHSWebServices does not
> fetch from the NodeManager web services; it fetches only from the aggregated
> app log dir.
> {color:#14892c}*UI2 Shows Only Aggregated Logs*{color}
> !Running_Container_Logs.png|height=200!
> {color:#14892c}*NM Local Dir Logs which are not shown*{color}
> !NM_Local_Dir.png|height=200!
> {color:#14892c}*UI1 shows local dir logs*{color}
> !YARN_UI_V1.png|height=200!
> {color:#14892c}*UI2 does not show log for Container_2*{color}
> !Running_Container2_UI.png|height=200!
> {color:#14892c}*Container_2 has logs under NM Local Dir*{color}
> !Running_Container_Log_Dir.png|height=200!
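
For illustration (the host names, port, and container ID below are
hypothetical placeholders), the two request shapes the analysis describes look
roughly like this:

{code}
# What UI2 sends today -- no nm.id, so only aggregated logs come back:
GET http://ahs-host:8188/ws/v1/applicationhistory/containers/<containerid>/logs

# With the NodeManager identified via nm.id, AHSWebServices can also fetch
# the running container's logs from the NM local log dirs:
GET http://ahs-host:8188/ws/v1/applicationhistory/containers/<containerid>/logs?nm.id=nm-host:8041
{code}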



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9726) [YARN UI2] Implement server side pagination for logs in application, service and nodes pages

2019-08-07 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-9726:
---
Description: 
Currently, in the application and service logs pages, logs are shown using
client-side paging. This makes the page slow to render if the log size is
very high. Similarly, no log paging is implemented in the nodes page.

Solution: show the last 4 bytes of the logs initially, then show the full
logs based on user input.

  was:Currently, in the application and service logs pages, logs are shown
using client-side paging. This makes the page slow to render if the log size
is very high. Similarly, no log paging is implemented in the nodes page.


> [YARN UI2] Implement server side pagination for logs in application, service 
> and nodes pages
> 
>
> Key: YARN-9726
> URL: https://issues.apache.org/jira/browse/YARN-9726
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
>
> Currently, in the application and service logs pages, logs are shown using
> client-side paging. This makes the page slow to render if the log size is
> very high. Similarly, no log paging is implemented in the nodes page.
> Solution: show the last 4 bytes of the logs initially, then show the full
> logs based on user input.
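
A minimal sketch of what a server-side page read could look like
(illustrative only; the offset/length API and the page size are assumptions,
not the planned implementation):

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;

// Illustrative sketch only: serve one [offset, offset+length) window of a
// log file per request, so the browser never renders the whole log at once.
public class PagedLogReader {

  static String readPage(String path, long offset, int length)
      throws IOException {
    try (RandomAccessFile f = new RandomAccessFile(path, "r")) {
      long start = Math.max(0, Math.min(offset, f.length()));
      int toRead = (int) Math.min(length, f.length() - start);
      byte[] buf = new byte[toRead];
      f.seek(start);
      f.readFully(buf);
      return new String(buf, StandardCharsets.UTF_8);
    }
  }
}
{code}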



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9726) [YARN UI2] Implement server side pagination for logs in application, service and nodes pages

2019-08-07 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-9726:
---
Summary: [YARN UI2] Implement server side pagination for logs in 
application, service and nodes pages  (was: [YARN UI2] Implement server side 
pagination for logs in application/service and nodes pages)

> [YARN UI2] Implement server side pagination for logs in application, service 
> and nodes pages
> 
>
> Key: YARN-9726
> URL: https://issues.apache.org/jira/browse/YARN-9726
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
>
> Currently, in the application and service logs pages, logs are shown using
> client-side paging. This makes the page slow to render if the log size is
> very high. Similarly, no log paging is implemented in the nodes page.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9726) [YARN UI2] Implement server side pagging in logs in application/service and nodes pages

2019-08-07 Thread Akhil PB (JIRA)
Akhil PB created YARN-9726:
--

 Summary: [YARN UI2] Implement server side pagging in logs in 
application/service and nodes pages
 Key: YARN-9726
 URL: https://issues.apache.org/jira/browse/YARN-9726
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Akhil PB
Assignee: Akhil PB


Currently, in the application and service logs pages, logs are shown using
client-side paging. This makes the page slow to render if the log size is
very high. Similarly, no log paging is implemented in the nodes page.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9726) [YARN UI2] Implement server side pagination for logs in application/service and nodes pages

2019-08-07 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-9726:
---
Summary: [YARN UI2] Implement server side pagination for logs in 
application/service and nodes pages  (was: [YARN UI2] Implement server side 
pagging in logs in application/service and nodes pages)

> [YARN UI2] Implement server side pagination for logs in application/service 
> and nodes pages
> ---
>
> Key: YARN-9726
> URL: https://issues.apache.org/jira/browse/YARN-9726
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
>
> Currently, in the application and service logs pages, logs are shown using
> client-side paging. This makes the page slow to render if the log size is
> very high. Similarly, no log paging is implemented in the nodes page.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9725) [YARN UI2] Running Containers Logs from NM Local Dir are not shown in Applications - Logs Section

2019-08-07 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9725:

Description: 
[YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
Applications - Logs Section. It shows only the aggregated log files for that 
container and does not show the log files which are present under NM Local Dir. 
YARN UI V1 was showing the log files from NM local dir.

{color:#14892c}*UI2 Shows Only Aggregated Logs*{color}

!Running_Container_Logs.png|height=200!

{color:#14892c}*NM Local Dir Logs which are not shown*{color}

!NM_Local_Dir.png|height=200!

{color:#14892c}*UI1 shows local dir logs*{color}

!YARN_UI_V1.png|height=200!

{color:#14892c}*UI2 does not show log for Container_2*{color}

!Running_Container2_UI.png|height=200!

{color:#14892c}*Container_2 has logs under NM Local Dir*{color}

!Running_Container_Log_Dir.png|height=200!

  was:
[YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
Applications - Logs Section. It shows only the aggregated log files for that 
container and does not show the log files which are present under NM Local Dir. 
YARN UI V1 was showing the log files from NM local dir.

*UI2 Shows Only Aggregated Logs*

!Running_Container_Logs.png|height=200!

*NM Local Dir Logs which are not shown*

!NM_Local_Dir.png|height=200!

*UI1 shows local dir logs*

!YARN_UI_V1.png|height=200!

*UI2 does not show log for Container_2*

!Running_Container2_UI.png|height=200!

*Container_2 has logs under NM Local Dir*

!Running_Container_Log_Dir.png|height=200!


> [YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
> Applications - Logs Section
> -
>
> Key: YARN-9725
> URL: https://issues.apache.org/jira/browse/YARN-9725
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Major
> Attachments: NM_Local_Dir.png, Running_Container2_UI.png, 
> Running_Container_Log_Dir.png, Running_Container_Logs.png, YARN_UI_V1.png
>
>
> [YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
> Applications - Logs Section. It shows only the aggregated log files for that 
> container and does not show the log files which are present under NM Local 
> Dir. YARN UI V1 was showing the log files from NM local dir.
> {color:#14892c}*UI2 Shows Only Aggregated Logs*{color}
> !Running_Container_Logs.png|height=200!
> {color:#14892c}*NM Local Dir Logs which are not shown*{color}
> !NM_Local_Dir.png|height=200!
> {color:#14892c}*UI1 shows local dir logs*{color}
> !YARN_UI_V1.png|height=200!
> {color:#14892c}*UI2 does not show log for Container_2*{color}
> !Running_Container2_UI.png|height=200!
> {color:#14892c}*Container_2 has logs under NM Local Dir*{color}
> !Running_Container_Log_Dir.png|height=200!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9725) [YARN UI2] Running Containers Logs from NM Local Dir are not shown in Applications - Logs Section

2019-08-07 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9725:

Summary: [YARN UI2] Running Containers Logs from NM Local Dir are not shown 
in Applications - Logs Section  (was: [YARN UI2] Running Containers Logs are 
not shown in Applications - Logs Section)

> [YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
> Applications - Logs Section
> -
>
> Key: YARN-9725
> URL: https://issues.apache.org/jira/browse/YARN-9725
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Major
> Attachments: NM_Local_Dir.png, Running_Container2_UI.png, 
> Running_Container_Log_Dir.png, Running_Container_Logs.png, YARN_UI_V1.png
>
>
> [YARN UI2] Running Containers Logs are not shown in Applications - Logs 
> Section. It shows only the aggregated log files for that container and does 
> not show the log files which are present under NM Local Dir. YARN UI V1 was 
> showing the log files from NM local dir.
> *UI2 Shows Only Aggregated Logs*
>  !Running_Container_Logs.png|height=200|width=300! 
> *NM Local Dir Logs which are not shown*
>  !NM_Local_Dir.png|height=200|width=300! 
> *UI1 shows local dir logs*
>  !YARN_UI_V1.png|height=200|width=300! 
> *UI2 does not show log for Container_2*
>  !Running_Container2_UI.png|height=200|width=300! 
> *Container_2 has logs under NM Local Dir*
>  !Running_Container_Log_Dir.png|height=200|width=300! 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9725) [YARN UI2] Running Containers Logs are not shown in Applications - Logs Section

2019-08-07 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9725:

Description: 
[YARN UI2] Running Containers Logs are not shown in Applications - Logs 
Section. It shows only the aggregated log files for that container and does not 
show the log files which are present under NM Local Dir. YARN UI V1 was showing 
the log files from NM local dir.

*UI2 Shows Only Aggregated Logs*

 !Running_Container_Logs.png|height=200|width=300! 

*NM Local Dir Logs which are not shown*

 !NM_Local_Dir.png|height=200|width=300! 

*UI1 shows local dir logs*

 !YARN_UI_V1.png|height=200|width=300! 

*UI2 does not show log for Container_2*

 !Running_Container2_UI.png|height=200|width=300! 

*Container_2 has logs under NM Local Dir*

 !Running_Container_Log_Dir.png|height=200|width=300! 







  was:
[YARN UI2] Running Containers Logs are not shown in Applications - Logs 
Section. It shows only the aggregated log files for that container and does not 
show the log files which are present under NM Local Dir. YARN UI V1 was showing 
the log files from NM local dir.






> [YARN UI2] Running Containers Logs are not shown in Applications - Logs 
> Section
> ---
>
> Key: YARN-9725
> URL: https://issues.apache.org/jira/browse/YARN-9725
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Major
> Attachments: NM_Local_Dir.png, Running_Container2_UI.png, 
> Running_Container_Log_Dir.png, Running_Container_Logs.png, YARN_UI_V1.png
>
>
> [YARN UI2] Running Containers Logs are not shown in Applications - Logs 
> Section. It shows only the aggregated log files for that container and does 
> not show the log files which are present under NM Local Dir. YARN UI V1 was 
> showing the log files from NM local dir.
> *UI2 Shows Only Aggregated Logs*
>  !Running_Container_Logs.png|height=200|width=300! 
> *NM Local Dir Logs which are not shown*
>  !NM_Local_Dir.png|height=200|width=300! 
> *UI1 shows local dir logs*
>  !YARN_UI_V1.png|height=200|width=300! 
> *UI2 does not show log for Container_2*
>  !Running_Container2_UI.png|height=200|width=300! 
> *Container_2 has logs under NM Local Dir*
>  !Running_Container_Log_Dir.png|height=200|width=300! 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9725) [YARN UI2] Running Containers Logs from NM Local Dir are not shown in Applications - Logs Section

2019-08-07 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9725:

Description: 
[YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
Applications - Logs Section. It shows only the aggregated log files for that 
container and does not show the log files which are present under NM Local Dir. 
YARN UI V1 was showing the log files from NM local dir.

*UI2 Shows Only Aggregated Logs*

!Running_Container_Logs.png|height=200!

*NM Local Dir Logs which are not shown*

!NM_Local_Dir.png|height=200!

*UI1 shows local dir logs*

!YARN_UI_V1.png|height=200!

*UI2 does not show log for Container_2*

!Running_Container2_UI.png|height=200!

*Container_2 has logs under NM Local Dir*

!Running_Container_Log_Dir.png|height=200!

  was:
[YARN UI2] Running Containers Logs are not shown in Applications - Logs 
Section. It shows only the aggregated log files for that container and does not 
show the log files which are present under NM Local Dir. YARN UI V1 was showing 
the log files from NM local dir.

*UI2 Shows Only Aggregated Logs*

 !Running_Container_Logs.png|height=200|width=300! 

*NM Local Dir Logs which are not shown*

 !NM_Local_Dir.png|height=200|width=300! 

*UI1 shows local dir logs*

 !YARN_UI_V1.png|height=200|width=300! 

*UI2 does not show log for Container_2*

 !Running_Container2_UI.png|height=200|width=300! 

*Container_2 has logs under NM Local Dir*

 !Running_Container_Log_Dir.png|height=200|width=300! 








> [YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
> Applications - Logs Section
> -
>
> Key: YARN-9725
> URL: https://issues.apache.org/jira/browse/YARN-9725
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Major
> Attachments: NM_Local_Dir.png, Running_Container2_UI.png, 
> Running_Container_Log_Dir.png, Running_Container_Logs.png, YARN_UI_V1.png
>
>
> [YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
> Applications - Logs Section. It shows only the aggregated log files for that 
> container and does not show the log files which are present under NM Local 
> Dir. YARN UI V1 was showing the log files from NM local dir.
> *UI2 Shows Only Aggregated Logs*
> !Running_Container_Logs.png|height=200!
> *NM Local Dir Logs which are not shown*
> !NM_Local_Dir.png|height=200!
> *UI1 shows local dir logs*
> !YARN_UI_V1.png|height=200!
> *UI2 does not show log for Container_2*
> !Running_Container2_UI.png|height=200!
> *Container_2 has logs under NM Local Dir*
> !Running_Container_Log_Dir.png|height=200!



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9725) [YARN UI2] Running Containers Logs are not shown in Applications - Logs Section

2019-08-07 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9725:

Attachment: Running_Container2_UI.png
Running_Container_Log_Dir.png

> [YARN UI2] Running Containers Logs are not shown in Applications - Logs 
> Section
> ---
>
> Key: YARN-9725
> URL: https://issues.apache.org/jira/browse/YARN-9725
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Major
> Attachments: NM_Local_Dir.png, Running_Container2_UI.png, 
> Running_Container_Log_Dir.png, Running_Container_Logs.png, YARN_UI_V1.png
>
>
> [YARN UI2] Running Containers Logs are not shown in Applications - Logs 
> Section. It shows only the aggregated log files for that container and does 
> not show the log files which are present under NM Local Dir. YARN UI V1 was 
> showing the log files from NM local dir.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9725) [YARN UI2] Running Containers Logs are not shown in Applications - Logs Section

2019-08-07 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9725:

Attachment: Running_Container_Logs.png
NM_Local_Dir.png
YARN_UI_V1.png

> [YARN UI2] Running Containers Logs are not shown in Applications - Logs 
> Section
> ---
>
> Key: YARN-9725
> URL: https://issues.apache.org/jira/browse/YARN-9725
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Major
> Attachments: NM_Local_Dir.png, Running_Container2_UI.png, 
> Running_Container_Log_Dir.png, Running_Container_Logs.png, YARN_UI_V1.png
>
>
> [YARN UI2] Running Containers Logs are not shown in Applications - Logs 
> Section. It shows only the aggregated log files for that container and does 
> not show the log files which are present under NM Local Dir. YARN UI V1 was 
> showing the log files from NM local dir.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-9725) [YARN UI2] Running Containers Logs are not shown in Applications - Logs Section

2019-08-07 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created YARN-9725:
---

 Summary: [YARN UI2] Running Containers Logs are not shown in 
Applications - Logs Section
 Key: YARN-9725
 URL: https://issues.apache.org/jira/browse/YARN-9725
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn-ui-v2
Affects Versions: 3.2.0
Reporter: Prabhu Joseph


[YARN UI2] Running Containers Logs are not shown in Applications - Logs 
Section. It shows only the aggregated log files for that container and does not 
show the log files which are present under NM Local Dir. YARN UI V1 was showing 
the log files from NM local dir.







--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5857) TestLogAggregationService.testFixedSizeThreadPool fails intermittently on trunk

2019-08-07 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901915#comment-16901915
 ] 

Adam Antal commented on YARN-5857:
--

I agree with the idea [~ajithshetty] wrote previously. Would you like to 
upload a patch for it?

> TestLogAggregationService.testFixedSizeThreadPool fails intermittently on 
> trunk
> ---
>
> Key: YARN-5857
> URL: https://issues.apache.org/jira/browse/YARN-5857
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Ajith S
>Priority: Minor
> Attachments: testFixedSizeThreadPool failure reproduction
>
>
> {noformat}
> testFixedSizeThreadPool(org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService)
>   Time elapsed: 0.11 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService.testFixedSizeThreadPool(TestLogAggregationService.java:1139)
> {noformat}
> Refer to https://builds.apache.org/job/PreCommit-YARN-Build/13829/testReport/



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5857) TestLogAggregationService.testFixedSizeThreadPool fails intermittently on trunk

2019-08-07 Thread Adam Antal (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901898#comment-16901898
 ] 

Adam Antal commented on YARN-5857:
--

This test still fails (see the last patch build on YARN-9559).

I was also able to reproduce this locally. Attached 
[^testFixedSizeThreadPool failure reproduction] with the reproduction.

> TestLogAggregationService.testFixedSizeThreadPool fails intermittently on 
> trunk
> ---
>
> Key: YARN-5857
> URL: https://issues.apache.org/jira/browse/YARN-5857
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Ajith S
>Priority: Minor
> Attachments: testFixedSizeThreadPool failure reproduction
>
>
> {noformat}
> testFixedSizeThreadPool(org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService)
>   Time elapsed: 0.11 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService.testFixedSizeThreadPool(TestLogAggregationService.java:1139)
> {noformat}
> Refer to https://builds.apache.org/job/PreCommit-YARN-Build/13829/testReport/



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5857) TestLogAggregationService.testFixedSizeThreadPool fails intermittently on trunk

2019-08-07 Thread Adam Antal (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-5857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Antal updated YARN-5857:
-
Attachment: testFixedSizeThreadPool failure reproduction

> TestLogAggregationService.testFixedSizeThreadPool fails intermittently on 
> trunk
> ---
>
> Key: YARN-5857
> URL: https://issues.apache.org/jira/browse/YARN-5857
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Ajith S
>Priority: Minor
> Attachments: testFixedSizeThreadPool failure reproduction
>
>
> {noformat}
> testFixedSizeThreadPool(org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService)
>   Time elapsed: 0.11 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService.testFixedSizeThreadPool(TestLogAggregationService.java:1139)
> {noformat}
> Refer to https://builds.apache.org/job/PreCommit-YARN-Build/13829/testReport/



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-9725) [YARN UI2] Running Containers Logs from NM Local Dir are not shown in Applications - Logs Section

2019-08-07 Thread Prabhu Joseph (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-9725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-9725:

Description: 
[YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
Applications - Logs Section. It shows only the aggregated log files for that 
container and does not show the log files present under the NM local dir. 
YARN UI V1 showed the log files from the NM local dir.

{color:#14892c}*UI2 Shows Only Aggregated Logs*{color}

!Running_Container_Logs.png|height=200!

{color:#14892c}*NM Local Dir Logs which are not shown*{color}

!NM_Local_Dir.png|height=200!

{color:#14892c}*UI1 shows local dir logs*{color}

!YARN_UI_V1.png|height=200!

{color:#14892c}*UI2 does not show log for Container_2*{color}

!Running_Container2_UI.png|height=200!

{color:#14892c}*Container_2 has logs under NM Local Dir*{color}

!Running_Container_Log_Dir.png|height=200!



On analysis, we found that UI2 calls the AHSWebServices endpoint 
/containers/{containerid}/logs without nm.id, so AHSWebServices does not 
fetch from the NodeManager web services; it fetches only from the aggregated 
app log dir.

  was:
[YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
Applications - Logs Section. It shows only the aggregated log files for that 
container and does not show the log files present under the NM local dir. 
YARN UI V1 showed the log files from the NM local dir.

{color:#14892c}*UI2 Shows Only Aggregated Logs*{color}

!Running_Container_Logs.png|height=200!

{color:#14892c}*NM Local Dir Logs which are not shown*{color}

!NM_Local_Dir.png|height=200!

{color:#14892c}*UI1 shows local dir logs*{color}

!YARN_UI_V1.png|height=200!

{color:#14892c}*UI2 does not show log for Container_2*{color}

!Running_Container2_UI.png|height=200!

{color:#14892c}*Container_2 has logs under NM Local Dir*{color}

!Running_Container_Log_Dir.png|height=200!


> [YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
> Applications - Logs Section
> -
>
> Key: YARN-9725
> URL: https://issues.apache.org/jira/browse/YARN-9725
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Affects Versions: 3.2.0
>Reporter: Prabhu Joseph
>Priority: Major
> Attachments: NM_Local_Dir.png, Running_Container2_UI.png, 
> Running_Container_Log_Dir.png, Running_Container_Logs.png, YARN_UI_V1.png
>
>
> [YARN UI2] Running Containers Logs from NM Local Dir are not shown in 
> Applications - Logs Section. It shows only the aggregated log files for that 
> container and does not show the log files present under the NM local dir. 
> YARN UI V1 showed the log files from the NM local dir.
> {color:#14892c}*UI2 Shows Only Aggregated Logs*{color}
> !Running_Container_Logs.png|height=200!
> {color:#14892c}*NM Local Dir Logs which are not shown*{color}
> !NM_Local_Dir.png|height=200!
> {color:#14892c}*UI1 shows local dir logs*{color}
> !YARN_UI_V1.png|height=200!
> {color:#14892c}*UI2 does not show log for Container_2*{color}
> !Running_Container2_UI.png|height=200!
> {color:#14892c}*Container_2 has logs under NM Local Dir*{color}
> !Running_Container_Log_Dir.png|height=200!
> On analysis, we found that UI2 calls the AHSWebServices endpoint 
> /containers/{containerid}/logs without nm.id, so AHSWebServices does not 
> fetch from the NodeManager web services; it fetches only from the aggregated 
> app log dir.
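
To illustrate the two call shapes described above, here is a minimal, 
hypothetical libcurl sketch (not code from this issue or its patch; the 
host, port, container id, and NM address are placeholders, and the URL 
layout assumes the standard /ws/v1/applicationhistory REST prefix):

{noformat}
/* Build with: cc fetch_logs.c -lcurl */
#include <curl/curl.h>

static void fetch(const char *url) {
    CURL *curl = curl_easy_init();
    if (curl == NULL) {
        return;
    }
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_perform(curl);   /* the response body is written to stdout */
    curl_easy_cleanup(curl);
}

int main(void) {
    curl_global_init(CURL_GLOBAL_DEFAULT);

    /* What UI2 does today: no nm.id, so AHSWebServices consults only the
     * aggregated app log dir. */
    fetch("http://ahs-host:8188/ws/v1/applicationhistory/containers/"
          "container_1234567890123_0001_01_000002/logs");

    /* With nm.id supplied, the logs of a still-running container can also
     * be fetched via the NodeManager web services. */
    fetch("http://ahs-host:8188/ws/v1/applicationhistory/containers/"
          "container_1234567890123_0001_01_000002/logs?nm.id=nm-host:45454");

    curl_global_cleanup();
    return 0;
}
{noformat}

Without the nm.id parameter, the first request can only return what has 
already been aggregated, which matches the behavior reported for UI2.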



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-9724) ERROR SparkContext: Error initializing SparkContext.

2019-08-07 Thread panlijie (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901801#comment-16901801
 ] 

panlijie commented on YARN-9724:


@[~ste...@apache.org] I confirm that 
org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.getClusterMetrics()
 is not implemented in 3.1.0 ("Code is not implemented"). Thank you!

> ERROR SparkContext: Error initializing SparkContext.
> 
>
> Key: YARN-9724
> URL: https://issues.apache.org/jira/browse/YARN-9724
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: federation, router, yarn
>Affects Versions: 3.0.0, 3.1.0
> Environment: Hadoop:3.1.0
> Spark:2.3.3
>Reporter: panlijie
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: spark.log
>
>
> We have some problems with hadoop-yarn-federation when we use Spark on 
> YARN federation.
> The following error was found:
> org.apache.commons.lang.NotImplementedException: Code is not implemented
> at 
> org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor.getClusterMetrics(FederationClientInterceptor.java:573)
>  at 
> org.apache.hadoop.yarn.server.router.clientrm.RouterClientRMService.getClusterMetrics(RouterClientRMService.java:230)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getClusterMetrics(ApplicationClientProtocolPBServiceImpl.java:248)
>  at 
> org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:569)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
>  at 
> org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:107)
>  at 
> org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:209)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
>  at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>  at com.sun.proxy.$Proxy16.getClusterMetrics(Unknown Source)
>  at 
> org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:487)
>  at 
> org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
>  at 
> org.apache.spark.deploy.yarn.Client$$anonfun$submitApplication$1.apply(Client.scala:155)
>  at org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
>  at org.apache.spark.deploy.yarn.Client.logInfo(Client.scala:59)
>  at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:154)
>  at 
> org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:57)
>  at 
> org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:164)
>  at org.apache.spark.SparkContext.<init>(SparkContext.scala:500)
>  at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2493)
>  at 
> org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:934)
>  at 
> org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:925)
>  at scala.Option.getOrElse(Option.scala:121)
>  at 
> org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:925)
>  at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
>  at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> 

[jira] [Commented] (YARN-9667) Container-executor.c duplicates messages to stdout

2019-08-07 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-9667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16901762#comment-16901762
 ] 

Szilard Nemeth commented on YARN-9667:
--

Thanks [~eyang]!

> Container-executor.c duplicates messages to stdout
> --
>
> Key: YARN-9667
> URL: https://issues.apache.org/jira/browse/YARN-9667
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, yarn
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Peter Bacsko
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-9667-001.patch
>
>
> When a container is killed by its AM, we get an error message like this:
> {noformat}
> 2019-06-30 12:09:04,412 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor:
>  Shell execution returned exit code: 143. Privileged Execution Operation 
> Stderr:
> Stdout: main : command provided 1
> main : run as user is systest
> main : requested yarn user is systest
> Getting exit code file...
> Creating script paths...
> Writing pid file...
> Writing to tmp file 
> /yarn/nm/nmPrivate/application_1561921629886_0001/container_e84_1561921629886_0001_01_19/container_e84_1561921629886_0001_01_19.pid.tmp
> Writing to cgroup task files...
> Creating local dirs...
> Launching container...
> Getting exit code file...
> Creating script paths...
> {noformat}
> In container-executor.c the fork point is right after the "Creating script 
> paths..." part, yet in the stdout log we can clearly see that part has been 
> written twice. After consulting with [~pbacsko], it seems there is a missing 
> flush in container-executor.c before the fork, and that causes the 
> duplication.
> I suggest adding a flush there so that the output won't be duplicated: it's 
> a bit misleading that the child process writes out "Getting exit code file" 
> and "Creating script paths" even though it is clearly not doing that.
> A more appealing solution would be to revisit the fprintf-fflush pairs in 
> the code and combine them into a single call, so that the fflush calls 
> could not be forgotten accidentally. (A missing flush can cause problems in 
> every place where fork is used.)
> Note: this issue probably affects every occurrence of fork(), not just the 
> one in {{launch_container_as_user}} in {{main.c}}.
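
To make the suspected buffering behavior concrete, here is a minimal, 
self-contained C sketch (illustrative only, not the actual 
container-executor.c code) showing how unflushed stdout data is duplicated 
across fork(), and how flushing before the fork avoids it:

{noformat}
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* When stdout is redirected to a file or pipe, stdio fully buffers it,
     * so this line may still sit in the buffer when fork() runs. */
    printf("Creating script paths...\n");

    /* The suggested fix: flush before forking so the child does not inherit
     * a copy of the pending buffer. Comment this line out and redirect
     * stdout to a file to see the line above printed twice. */
    fflush(stdout);

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                 /* child */
        printf("Launching container...\n");
        exit(0);                    /* exit() flushes the child's buffer */
    }
    waitpid(pid, NULL, 0);          /* parent */
    return 0;                       /* returning flushes the parent's buffer */
}
{noformat}

With the fflush removed and stdout fully buffered, the child inherits a copy 
of the parent's pending buffer and both processes eventually flush it, which 
matches the doubled "Getting exit code file..." / "Creating script paths..." 
lines in the log above.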



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org