[jira] [Commented] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2018-10-11 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647465#comment-16647465
 ] 

Rohith Sharma K S commented on YARN-5742:
-

We may need to rebase the patch for branch-2. I think let's not merge it to 
branch-2 for now. 

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>
> Attachments: YARN-5742-POC-v0.patch, YARN-5742.01.patch, 
> YARN-5742.02.patch, YARN-5742.03.patch, YARN-5742.04.patch, YARN-5742.v0.patch
>
>
> The ATSv1.5 daemon has a servlet to serve aggregated logs, but with only 
> ATSv2 enabled, logs for completed applications are not served from the CLI 
> and UI. The log serving story is completely broken in ATSv2.






[jira] [Updated] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Xun Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xun Liu updated YARN-8870:
--
Attachment: YARN-8870.003.patch

> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: Xun Liu
>Priority: Major
> Attachments: YARN-8870.001.patch, YARN-8870.002.patch, 
> YARN-8870.003.patch
>
>
> In order to reduce the difficulty of deploying the components that Hadoop 
> {Submarine} depends on (DNS, Docker, GPU, network, graphics card, operating 
> system kernel modifications, and so on), I developed this installation 
> script to deploy the Hadoop {Submarine} runtime environment. It provides 
> one-click installation and can also be used to install, uninstall, start, 
> and stop individual components step by step.
>  
> Design document: 
> [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing]
>  






[jira] [Commented] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2018-10-11 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647455#comment-16647455
 ] 

Vrushali C commented on YARN-5742:
--

Okay, sounds good. I tried the same patch for branch-2, but it does not apply 
as is to branch-2. I am getting the following compilation errors. 

{code}
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:33 min
[INFO] Finished at: 2018-10-11T21:28:15-07:00
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hadoop-yarn-server-common: Compilation failure: Compilation failure:
[ERROR] 
hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogWebService.java:[34,30]
 cannot find symbol
[ERROR]   symbol:   class JettyUtils
[ERROR]   location: package org.apache.hadoop.http
[ERROR] 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogWebService.java:[338,45]
 cannot find symbol
[ERROR]   symbol:   variable JettyUtils
[ERROR]   location: class org.apache.hadoop.yarn.server.webapp.LogWebService
[ERROR] 

hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogWebService.java:[473,16]
 cannot find symbol
[ERROR]   symbol:   method getStatusInfo()
[ERROR]   location: variable resp of type 
com.sun.jersey.api.client.ClientResponse
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-yarn-server-common
{code}


> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>
> Attachments: YARN-5742-POC-v0.patch, YARN-5742.01.patch, 
> YARN-5742.02.patch, YARN-5742.03.patch, YARN-5742.04.patch, YARN-5742.v0.patch
>
>
> The ATSv1.5 daemon has a servlet to serve aggregated logs, but with only 
> ATSv2 enabled, logs for completed applications are not served from the CLI 
> and UI. The log serving story is completely broken in ATSv2.






[jira] [Commented] (YARN-8834) Provide Java client for fetching Yarn specific entities from TimelineReader

2018-10-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647452#comment-16647452
 ] 

Hudson commented on YARN-8834:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15186 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15186/])
YARN-8834 Provide Java client for fetching Yarn specific entities from 
TimelineReader (vrushali: rev a3edfddcf7822ea13bdf4858672eb82cea5e0b5f)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineReaderClientImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestTimelineReaderClientImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/TimelineReaderClient.java


> Provide Java client for fetching Yarn specific entities from TimelineReader
> ---
>
> Key: YARN-8834
> URL: https://issues.apache.org/jira/browse/YARN-8834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>
> Attachments: YARN-8834.001.patch, YARN-8834.002.patch, 
> YARN-8834.003.patch, YARN-8834.004.patch, YARN-8834.005.patch, 
> YARN-8834.006.patch
>
>
> While reviewing YARN-8303, we felt that it is necessary to provide a 
> TimelineReaderClient which wraps all the REST calls, so that a user can just 
> provide application or container ids along with filters. Currently, fetching 
> entities from TimelineReader is possible only via REST calls, or somebody 
> needs to write a Java client to get entities.
> It would be good to provide a TimelineReaderClient which fetches entities 
> from TimelineReaderServer. This would be more useful.






[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-10-11 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647425#comment-16647425
 ] 

Vrushali C commented on YARN-3879:
--

Committed to branch-2 as part of

https://github.com/apache/hadoop/commit/7ed627af6b3503e2b5446e582c83678218996d72

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355, YARN-7055
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.3, 3.1.2
>
> Attachments: YARN-3879-YARN-7055.001.patch, YARN-3879.001.patch, 
> YARN-3879.002.patch, YARN-3879.003.patch, YARN-3879.004.patch, 
> YARN-3879.005.patch, YARN-3879.006.patch
>
>
> Reader version of YARN-3841






[jira] [Updated] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-10-11 Thread Vrushali C (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-3879:
-
Fix Version/s: 2.9.2
   2.10.0

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355, YARN-7055
> Fix For: 2.10.0, 3.2.0, 2.9.2, 3.0.3, 3.1.2
>
> Attachments: YARN-3879-YARN-7055.001.patch, YARN-3879.001.patch, 
> YARN-3879.002.patch, YARN-3879.003.patch, YARN-3879.004.patch, 
> YARN-3879.005.patch, YARN-3879.006.patch
>
>
> Reader version of YARN-3841






[jira] [Updated] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-10-11 Thread Vrushali C (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-3879:
-
Labels: YARN-5355 YARN-7055  (was: YARN-5355)

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355, YARN-7055
> Fix For: 3.2.0, 3.0.3, 3.1.2
>
> Attachments: YARN-3879-YARN-7055.001.patch, YARN-3879.001.patch, 
> YARN-3879.002.patch, YARN-3879.003.patch, YARN-3879.004.patch, 
> YARN-3879.005.patch, YARN-3879.006.patch
>
>
> Reader version of YARN-3841






[jira] [Commented] (YARN-8834) Provide Java client for fetching Yarn specific entities from TimelineReader

2018-10-11 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647415#comment-16647415
 ] 

Rohith Sharma K S commented on YARN-8834:
-

bq. Do we need some documentation examples for this? 
This JIRA focuses on YARN entities, which are consumed by the YARN client, so 
we do not need any doc update for this now. Once we expose generic entity 
APIs, we will need to update the docs. 
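
For reference, here is a rough usage sketch of the new client. The method 
names and signatures below are assumptions inferred from the files added in 
the commit, not a quote of the committed API; see TimelineReaderClient.java 
in hadoop-yarn-common for the actual one.

{code}
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
import org.apache.hadoop.yarn.client.api.TimelineReaderClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Hypothetical usage sketch -- method names/signatures are assumptions.
public class TimelineReaderClientSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new YarnConfiguration();
    TimelineReaderClient reader =
        TimelineReaderClient.createTimelineReaderClient();
    reader.init(conf);
    reader.start();
    try {
      ApplicationId appId =
          ApplicationId.fromString("application_1539295956141_0001");
      // One call instead of hand-building the REST URL and parsing JSON:
      // fetch the application entity with all fields and no extra filters.
      TimelineEntity app = reader.getApplicationEntity(appId, "ALL", null);
      // Fetch up to 10 container entities for the same application.
      List<TimelineEntity> containers =
          reader.getContainerEntities(appId, "ALL", null, 10, null);
      System.out.println(app.getId() + ": " + containers.size()
          + " containers");
    } finally {
      reader.stop();
    }
  }
}
{code}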

> Provide Java client for fetching Yarn specific entities from TimelineReader
> ---
>
> Key: YARN-8834
> URL: https://issues.apache.org/jira/browse/YARN-8834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>
> Attachments: YARN-8834.001.patch, YARN-8834.002.patch, 
> YARN-8834.003.patch, YARN-8834.004.patch, YARN-8834.005.patch, 
> YARN-8834.006.patch
>
>
> While reviewing YARN-8303, we felt that it is necessary to provide a 
> TimelineReaderClient which wraps all the REST calls, so that a user can just 
> provide application or container ids along with filters. Currently, fetching 
> entities from TimelineReader is possible only via REST calls, or somebody 
> needs to write a Java client to get entities.
> It would be good to provide a TimelineReaderClient which fetches entities 
> from TimelineReaderServer. This would be more useful.






[jira] [Comment Edited] (YARN-8834) Provide Java client for fetching Yarn specific entities from TimelineReader

2018-10-11 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647369#comment-16647369
 ] 

Vrushali C edited comment on YARN-8834 at 10/12/18 4:17 AM:


Committed to trunk as part of 
https://github.com/apache/hadoop/commit/a3edfddcf7822ea13bdf4858672eb82cea5e0b5f

Thanks [~abmodi] for the patch and [~rohithsharma] for the reviews! 

Do we need some documentation examples for this? 


was (Author: vrushalic):
Committed to trunk as part of 
https://github.com/apache/hadoop/commit/a3edfddcf7822ea13bdf4858672eb82cea5e0b5f

Do we need some documentation examples for this? 

> Provide Java client for fetching Yarn specific entities from TimelineReader
> ---
>
> Key: YARN-8834
> URL: https://issues.apache.org/jira/browse/YARN-8834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>
> Attachments: YARN-8834.001.patch, YARN-8834.002.patch, 
> YARN-8834.003.patch, YARN-8834.004.patch, YARN-8834.005.patch, 
> YARN-8834.006.patch
>
>
> While reviewing YARN-8303, we felt that it is necessary to provide a 
> TimelineReaderClient which wraps all the REST calls, so that a user can just 
> provide application or container ids along with filters. Currently, fetching 
> entities from TimelineReader is possible only via REST calls, or somebody 
> needs to write a Java client to get entities.
> It would be good to provide a TimelineReaderClient which fetches entities 
> from TimelineReaderServer. This would be more useful.






[jira] [Updated] (YARN-8871) Document behavior of YARN-5742

2018-10-11 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8871:

Summary: Document behavior of YARN-5742  (was: Documentation behavior of 
YARN-5742)

> Document behavior of YARN-5742
> --
>
> Key: YARN-8871
> URL: https://issues.apache.org/jira/browse/YARN-8871
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Priority: Major
>
> YARN-5742 allows serving aggregated logs of historical apps from timeline 
> service v2. Documentation updates are needed for that.






[jira] [Updated] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-10-11 Thread Vrushali C (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-3879:
-
Fix Version/s: 3.1.2
   3.0.3
   3.2.0

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
> Fix For: 3.2.0, 3.0.3, 3.1.2
>
> Attachments: YARN-3879-YARN-7055.001.patch, YARN-3879.001.patch, 
> YARN-3879.002.patch, YARN-3879.003.patch, YARN-3879.004.patch, 
> YARN-3879.005.patch, YARN-3879.006.patch
>
>
> Reader version of YARN-3841






[jira] [Updated] (YARN-8871) Documentation behavior of YARN-5742

2018-10-11 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8871:

Summary: Documentation behavior of YARN-5742  (was: Documentation updates 
for YARN-5742 Serve aggregated logs of historical apps from timeline service)

> Documentation behavior of YARN-5742
> ---
>
> Key: YARN-8871
> URL: https://issues.apache.org/jira/browse/YARN-8871
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Vrushali C
>Priority: Major
>
> YARN-5742 allows serving aggregated logs of historical apps from timeline 
> service v2. Documentation updates are needed for that.






[jira] [Updated] (YARN-8871) Documentation behavior of YARN-5742

2018-10-11 Thread Rohith Sharma K S (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-8871:

Issue Type: Sub-task  (was: Task)
Parent: YARN-7055

> Documentation behavior of YARN-5742
> ---
>
> Key: YARN-8871
> URL: https://issues.apache.org/jira/browse/YARN-8871
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Priority: Major
>
> YARN-5742 allows serving aggregated logs of historical apps from timeline 
> service v2. Documentation updates are needed for that.






[jira] [Commented] (YARN-8864) NM incorrectly logs container user as the user who sent a stop container request in its audit log

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647411#comment-16647411
 ] 

Hadoop QA commented on YARN-8864:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 1 new + 215 unchanged - 5 fixed = 216 total (was 220) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m  
1s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8864 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943581/YARN-8864.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2e224d7f60dd 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fb18cc5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22164/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22164/testReport/ |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Commented] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2018-10-11 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647410#comment-16647410
 ] 

Rohith Sharma K S commented on YARN-5742:
-

bq. Should this go to branch-2 too? 
IIRC, a bunch of ATSv2 JIRAs have not gone into branch-2. We may need to 
revisit the YARN-7055 subtasks to port them into branch-2. Let's leave it as 
is for now.

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>
> Attachments: YARN-5742-POC-v0.patch, YARN-5742.01.patch, 
> YARN-5742.02.patch, YARN-5742.03.patch, YARN-5742.04.patch, YARN-5742.v0.patch
>
>
> The ATSv1.5 daemon has a servlet to serve aggregated logs, but with only 
> ATSv2 enabled, logs for completed applications are not served from the CLI 
> and UI. The log serving story is completely broken in ATSv2.






[jira] [Commented] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2018-10-11 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647409#comment-16647409
 ] 

Rohith Sharma K S commented on YARN-5742:
-

Thanks [~vrushalic], [~abmodi] and [~sunilg] for the review and for committing 
the patch.

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>
> Attachments: YARN-5742-POC-v0.patch, YARN-5742.01.patch, 
> YARN-5742.02.patch, YARN-5742.03.patch, YARN-5742.04.patch, YARN-5742.v0.patch
>
>
> The ATSv1.5 daemon has a servlet to serve aggregated logs, but with only 
> ATSv2 enabled, logs for completed applications are not served from the CLI 
> and UI. The log serving story is completely broken in ATSv2.






[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-10-11 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647395#comment-16647395
 ] 

Vrushali C commented on YARN-3879:
--

Patch 006 LGTM.
Committing shortly. 

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3879-YARN-7055.001.patch, YARN-3879.001.patch, 
> YARN-3879.002.patch, YARN-3879.003.patch, YARN-3879.004.patch, 
> YARN-3879.005.patch, YARN-3879.006.patch
>
>
> Reader version of YARN-3841






[jira] [Commented] (YARN-8834) Provide Java client for fetching Yarn specific entities from TimelineReader

2018-10-11 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647392#comment-16647392
 ] 

Vrushali C commented on YARN-8834:
--

It looks like the patch does not apply to branch-2.

{code}
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineReaderClientImpl.java:[227,13]
 cannot find symbol
[ERROR]   symbol:   method getStatusInfo()
[ERROR]   location: variable resp of type 
com.sun.jersey.api.client.ClientResponse
[ERROR]
{code}


> Provide Java client for fetching Yarn specific entities from TimelineReader
> ---
>
> Key: YARN-8834
> URL: https://issues.apache.org/jira/browse/YARN-8834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>
> Attachments: YARN-8834.001.patch, YARN-8834.002.patch, 
> YARN-8834.003.patch, YARN-8834.004.patch, YARN-8834.005.patch, 
> YARN-8834.006.patch
>
>
> While reviewing YARN-8303, we felt that it is necessary to provide a 
> TimelineReaderClient which wraps all the REST calls, so that a user can just 
> provide application or container ids along with filters. Currently, fetching 
> entities from TimelineReader is possible only via REST calls, or somebody 
> needs to write a Java client to get entities.
> It would be good to provide a TimelineReaderClient which fetches entities 
> from TimelineReaderServer. This would be more useful.






[jira] [Commented] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647377#comment-16647377
 ] 

Hadoop QA commented on YARN-8872:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} 
hadoop-mapreduce-project/hadoop-mapreduce-client: The patch generated 6 new + 
179 unchanged - 6 fixed = 185 total (was 185) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
45s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
29s{color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core
 |
|  |  Inconsistent synchronization of 
org.apache.hadoop.mapreduce.counters.FileSystemCounterGroup.map; locked 46% of 
time  Unsynchronized access at FileSystemCounterGroup.java:46% of time  
Unsynchronized access at FileSystemCounterGroup.java:[line 281] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | 

[jira] [Updated] (YARN-8834) Provide Java client for fetching Yarn specific entities from TimelineReader

2018-10-11 Thread Vrushali C (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-8834:
-
Fix Version/s: (was: 3.0.4)

> Provide Java client for fetching Yarn specific entities from TimelineReader
> ---
>
> Key: YARN-8834
> URL: https://issues.apache.org/jira/browse/YARN-8834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>
> Attachments: YARN-8834.001.patch, YARN-8834.002.patch, 
> YARN-8834.003.patch, YARN-8834.004.patch, YARN-8834.005.patch, 
> YARN-8834.006.patch
>
>
> While reviewing YARN-8303, we felt that it is necessary to provide a 
> TimelineReaderClient which wraps all the REST calls, so that a user can just 
> provide application or container ids along with filters. Currently, fetching 
> entities from TimelineReader is possible only via REST calls, or somebody 
> needs to write a Java client to get entities.
> It would be good to provide a TimelineReaderClient which fetches entities 
> from TimelineReaderServer. This would be more useful.






[jira] [Updated] (YARN-8834) Provide Java client for fetching Yarn specific entities from TimelineReader

2018-10-11 Thread Vrushali C (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-8834:
-
Hadoop Flags: Reviewed

> Provide Java client for fetching Yarn specific entities from TimelineReader
> ---
>
> Key: YARN-8834
> URL: https://issues.apache.org/jira/browse/YARN-8834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.0.4, 3.1.2, 3.3.0
>
> Attachments: YARN-8834.001.patch, YARN-8834.002.patch, 
> YARN-8834.003.patch, YARN-8834.004.patch, YARN-8834.005.patch, 
> YARN-8834.006.patch
>
>
> While reviewing YARN-8303, we felt that it is necessary to provide a 
> TimelineReaderClient which wraps all the REST calls, so that a user can just 
> provide application or container ids along with filters. Currently, fetching 
> entities from TimelineReader is possible only via REST calls, or somebody 
> needs to write a Java client to get entities.
> It would be good to provide a TimelineReaderClient which fetches entities 
> from TimelineReaderServer. This would be more useful.






[jira] [Commented] (YARN-8834) Provide Java client for fetching Yarn specific entities from TimelineReader

2018-10-11 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647369#comment-16647369
 ] 

Vrushali C commented on YARN-8834:
--

Committed to trunk as part of 
https://github.com/apache/hadoop/commit/a3edfddcf7822ea13bdf4858672eb82cea5e0b5f

Do we need some documentation examples for this? 

> Provide Java client for fetching Yarn specific entities from TimelineReader
> ---
>
> Key: YARN-8834
> URL: https://issues.apache.org/jira/browse/YARN-8834
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Critical
> Attachments: YARN-8834.001.patch, YARN-8834.002.patch, 
> YARN-8834.003.patch, YARN-8834.004.patch, YARN-8834.005.patch, 
> YARN-8834.006.patch
>
>
> While reviewing YARN-8303, we felt that it is necessary to provide a 
> TimelineReaderClient which wraps all the REST calls, so that a user can just 
> provide application or container ids along with filters. Currently, fetching 
> entities from TimelineReader is possible only via REST calls, or somebody 
> needs to write a Java client to get entities.
> It would be good to provide a TimelineReaderClient which fetches entities 
> from TimelineReaderServer. This would be more useful.






[jira] [Commented] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647361#comment-16647361
 ] 

Hadoop QA commented on YARN-8870:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
2s{color} | {color:red} The patch generated 273 new + 0 unchanged - 0 fixed = 
273 total (was 0) {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m 15s{color} | {color:orange} The patch generated 114 new + 106 unchanged - 0 
fixed = 220 total (was 106) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 41s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-submarine in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
28s{color} | {color:red} The patch generated 8 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8870 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943580/YARN-8870.002.patch |
| Optional Tests |  dupname  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 6e4598b56993 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 604c27c |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-YARN-Build/22162/artifact/out/diff-patch-shellcheck.txt
 |
| shelldocs | 
https://builds.apache.org/job/PreCommit-YARN-Build/22162/artifact/out/diff-patch-shelldocs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/22162/artifact/out/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22162/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/22162/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 336 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine 
U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22162/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: 

[jira] [Commented] (YARN-8869) YARN Service Client might not work correctly with RM REST API for Kerberos authentication

2018-10-11 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647307#comment-16647307
 ] 

Sunil Govindan commented on YARN-8869:
--

Thanks [~eyang]. If there are no objections, I'll get this in today.

> YARN Service Client might not work correctly with RM REST API for Kerberos 
> authentication
> -
>
> Key: YARN-8869
> URL: https://issues.apache.org/jira/browse/YARN-8869
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-8869.001.patch, YARN-8869.002.patch
>
>
> ApiServiceClient uses WebResource instead of Builder to pass the Kerberos 
> authorization header. This may not work sometimes, because 
> WebResource.header() returns a new Builder instance, so in some conditions 
> the header ends up bound to a Builder that the request is never issued on. 
> This article explains the details: 
> https://juristr.com/blog/2015/05/jersey-webresource-ignores-headers/
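
For context, a minimal Jersey 1.x sketch of the pitfall described above. This 
is illustrative only, not the ApiServiceClient code, and the token value is a 
stand-in:

{code}
import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.WebResource;

public class HeaderPitfallSketch {
  public static void main(String[] args) {
    Client client = Client.create();
    WebResource resource =
        client.resource("http://rm-host:8088/app/v1/services");
    String token = "...";  // SPNEGO token, elided

    // Broken: header() returns a new Builder; `resource` itself is
    // unchanged, so this request goes out without the Authorization header.
    resource.header("Authorization", "Negotiate " + token);
    ClientResponse lost = resource.get(ClientResponse.class);

    // Correct: issue the request on the Builder that carries the header.
    WebResource.Builder builder =
        resource.header("Authorization", "Negotiate " + token);
    ClientResponse ok = builder.get(ClientResponse.class);

    System.out.println(lost.getStatus() + " vs " + ok.getStatus());
  }
}
{code}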






[jira] [Updated] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Xun Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xun Liu updated YARN-8870:
--
Attachment: YARN-8870.002.patch

> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: Xun Liu
>Priority: Major
> Attachments: YARN-8870.001.patch, YARN-8870.002.patch
>
>
> In order to reduce the difficulty of deploying the components that Hadoop 
> {Submarine} depends on (DNS, Docker, GPU, network, graphics card, operating 
> system kernel modifications, and so on), I developed this installation 
> script to deploy the Hadoop {Submarine} runtime environment. It provides 
> one-click installation and can also be used to install, uninstall, start, 
> and stop individual components step by step.
>  
> Design document: 
> [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8869) YARN Service Client might not work correctly with RM REST API for Kerberos authentication

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647293#comment-16647293
 ] 

Hadoop QA commented on YARN-8869:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-api:
 The patch generated 9 new + 4 unchanged - 0 fixed = 13 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 39s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8869 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943567/YARN-8869.002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8d70e875bb11 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 604c27c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/22161/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-api.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22161/testReport/ |
| Max. process+thread count | 547 (vs. ulimit of 1) |
| modules | C: 

[jira] [Updated] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory

2018-10-11 Thread Misha Dmitriev (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated YARN-8872:
-
Attachment: YARN-8872.01.patch

> Optimize collections used by Yarn JHS to reduce its memory
> --
>
> Key: YARN-8872
> URL: https://issues.apache.org/jira/browse/YARN-8872
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
>Priority: Major
> Attachments: YARN-8872.01.patch, jhs-bad-collections.png
>
>
> Using jxray (www.jxray.com), we analyzed a heap dump of a JHS running with a 
> big heap in a large cluster, handling large MapReduce jobs. The heap is 
> large (over 32GB) and 21.4% of it is wasted due to various suboptimal Java 
> collections, mostly maps and lists that are either empty or contain only one 
> element. In such under-populated collections, a considerable amount of 
> memory is still used by just the internal implementation objects. See the 
> attached excerpt from the jxray report for the details. If certain 
> collections are almost always empty, they should be initialized lazily. If 
> others almost always have just 1 or 2 elements, they should be initialized 
> with the appropriate initial capacity of 1 or 2 (the default capacity is 16 
> for HashMap and 10 for ArrayList).
> Based on the attached report, we should do the following (see the sketch 
> after this list):
>  # {{FileSystemCounterGroup.map}} - initialize lazily
>  # {{CompletedTask.attempts}} - initialize with capacity 2, given most tasks 
> only have one or two attempts
>  # {{JobHistoryParser$TaskInfo.attemptsMap}} - initialize with capacity
>  # {{CompletedTaskAttempt.diagnostics}} - initialize with capacity 1 since it 
> contains one diagnostic message most of the time
>  # {{CompletedTask.reportDiagnostics}} - switch to ArrayList (no reason to 
> use the more wasteful LinkedList here) and initialize with capacity 1.
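
A brief illustrative sketch of fixes 1, 2 and 5 above, using simplified 
stand-in fields rather than the actual JHS classes:

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class UnderPopulatedCollectionsSketch {
  // Fix 1: lazy initialization -- no HashMap is allocated until first use.
  // Note: if the field is accessed from multiple threads, the lazy getter
  // must be synchronized; the findbugs -1 in the QA run above flags exactly
  // that kind of inconsistent synchronization.
  private Map<String, Long> counters;

  synchronized Map<String, Long> getCounters() {
    if (counters == null) {
      counters = new HashMap<>(2);  // sized small once it is actually needed
    }
    return counters;
  }

  // Fix 2: right-sized eager init -- most tasks have 1-2 attempts, so
  // capacity 2 wastes far less than HashMap's default capacity of 16.
  private final Map<String, Object> attempts = new HashMap<>(2);

  // Fix 5: ArrayList with capacity 1 instead of LinkedList -- one diagnostic
  // message is the common case, and ArrayList has no per-node overhead.
  private final List<String> reportDiagnostics = new ArrayList<>(1);
}
{code}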






[jira] [Commented] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2018-10-11 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647290#comment-16647290
 ] 

Sunil Govindan commented on YARN-5742:
--

I pulled this to branch-3.2 as well; it had earlier gone only to trunk, 
branch-3.1 and branch-3.0. 

Thanks.

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>
> Attachments: YARN-5742-POC-v0.patch, YARN-5742.01.patch, 
> YARN-5742.02.patch, YARN-5742.03.patch, YARN-5742.04.patch, YARN-5742.v0.patch
>
>
> The ATSv1.5 daemon has a servlet to serve aggregated logs, but with only 
> ATSv2 enabled, logs for completed applications are not served from the CLI 
> and UI. The log serving story is completely broken in ATSv2.






[jira] [Updated] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2018-10-11 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-5742:
-
Fix Version/s: 3.3.0

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>
> Attachments: YARN-5742-POC-v0.patch, YARN-5742.01.patch, 
> YARN-5742.02.patch, YARN-5742.03.patch, YARN-5742.04.patch, YARN-5742.v0.patch
>
>
> The ATSv1.5 daemon has a servlet to serve aggregated logs, but with only 
> ATSv2 enabled, logs for completed applications are not served from the CLI 
> and UI. The log serving story is completely broken in ATSv2.






[jira] [Updated] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory

2018-10-11 Thread Misha Dmitriev (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated YARN-8872:
-
Description: 
We analyzed, using jxray (www.jxray.com), a heap dump of the JHS running with a 
big heap in a large cluster, handling large MapReduce jobs. The heap is large 
(over 32GB) and 21.4% of it is wasted due to various suboptimal Java 
collections, mostly maps and lists that are either empty or contain only one 
element. In such under-populated collections, a considerable amount of memory 
is still used by just the internal implementation objects. See the attached 
excerpt from the jxray report for the details. If certain collections are 
almost always empty, they should be initialized lazily. If others almost always 
have just 1 or 2 elements, they should be initialized with the appropriate 
initial capacity of 1 or 2 (the default capacity is 16 for HashMap and 10 for 
ArrayList).

Based on the attached report, we should do the following:
 # {{FileSystemCounterGroup.map}} - initialize lazily
 # {{CompletedTask.attempts}} - initialize with capacity 2, given most tasks 
only have one or two attempts
 # {{JobHistoryParser$TaskInfo.attemptsMap}} - initialize with capacity 2
 # {{CompletedTaskAttempt.diagnostics}} - initialize with capacity 1 since it 
contains one diagnostic message most of the time
 # {{CompletedTask.reportDiagnostics}} - switch to ArrayList (no reason to use 
the more wasteful LinkedList here) and initialize with capacity 1.

  was:
We analyzed, using jxray (www.jxray.com), a heap dump of the JHS running with a 
big heap in a large cluster, handling large MapReduce jobs. The heap is large 
(over 32GB) and 21.4% of it is wasted due to various suboptimal Java 
collections, mostly maps and lists that are either empty or contain only one 
element. In such under-populated collections, a considerable amount of memory 
is still used by just the internal implementation objects. See the attached 
excerpt from the jxray report for the details. If certain collections are 
almost always empty, they should be initialized lazily. If others almost always 
have just 1 or 2 elements, they should be initialized with the appropriate 
initial capacity, which is much smaller than e.g. the default 16 for HashMap 
and 10 for ArrayList.

Based on the attached report, we should do the following:
 # {{FileSystemCounterGroup.map}} - initialize lazily
 # {{CompletedTask.attempts}} - initialize with capacity 2, given most tasks 
only have one or two attempts
 # {{JobHistoryParser$TaskInfo.attemptsMap}} - initialize with capacity 2
 # {{CompletedTaskAttempt.diagnostics}} - initialize with capacity 1 since it 
contains one diagnostic message most of the time
 # {{CompletedTask.reportDiagnostics}} - switch to ArrayList (no reason to use 
the more wasteful LinkedList here) and initialize with capacity 1.


> Optimize collections used by Yarn JHS to reduce its memory
> --
>
> Key: YARN-8872
> URL: https://issues.apache.org/jira/browse/YARN-8872
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
>Priority: Major
> Attachments: jhs-bad-collections.png
>
>
> We analyzed, using jxray (www.jxray.com), a heap dump of the JHS running with 
> a big heap in a large cluster, handling large MapReduce jobs. The heap is 
> large (over 32GB) and 21.4% of it is wasted due to various suboptimal Java 
> collections, mostly maps and lists that are either empty or contain only one 
> element. In such under-populated collections, a considerable amount of memory 
> is still used by just the internal implementation objects. See the attached 
> excerpt from the jxray report for the details. If certain collections are 
> almost always empty, they should be initialized lazily. If others almost 
> always have just 1 or 2 elements, they should be initialized with the 
> appropriate initial capacity of 1 or 2 (the default capacity is 16 for 
> HashMap and 10 for ArrayList).
> Based on the attached report, we should do the following:
>  # {{FileSystemCounterGroup.map}} - initialize lazily
>  # {{CompletedTask.attempts}} - initialize with capacity 2, given most tasks 
> only have one or two attempts
>  # {{JobHistoryParser$TaskInfo.attemptsMap}} - initialize with capacity 2
>  # {{CompletedTaskAttempt.diagnostics}} - initialize with capacity 1 since it 
> contains one diagnostic message most of the time
>  # {{CompletedTask.reportDiagnostics}} - switch to ArrayList (no reason to 
> use the more wasteful LinkedList here) and initialize with capacity 1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For 

[jira] [Updated] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory

2018-10-11 Thread Misha Dmitriev (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated YARN-8872:
-
Description: 
We analyzed, using jxray (www.jxray.com), a heap dump of the JHS running with a 
big heap in a large cluster, handling large MapReduce jobs. The heap is large 
(over 32GB) and 21.4% of it is wasted due to various suboptimal Java 
collections, mostly maps and lists that are either empty or contain only one 
element. In such under-populated collections, a considerable amount of memory 
is still used by just the internal implementation objects. See the attached 
excerpt from the jxray report for the details. If certain collections are 
almost always empty, they should be initialized lazily. If others almost always 
have just 1 or 2 elements, they should be initialized with the appropriate 
initial capacity, which is much smaller than e.g. the default 16 for HashMap 
and 10 for ArrayList.

Based on the attached report, we should do the following:
 # {{FileSystemCounterGroup.map}} - initialize lazily
 # {{CompletedTask.attempts}} - initialize with capacity 2, given most tasks 
only have one or two attempts
 # {{JobHistoryParser$TaskInfo.attemptsMap}} - initialize with capacity 2
 # {{CompletedTaskAttempt.diagnostics}} - initialize with capacity 1 since it 
contains one diagnostic message most of the time
 # {{CompletedTask.reportDiagnostics}} - switch to ArrayList (no reason to use 
the more wasteful LinkedList here) and initialize with capacity 1.

  was:
We analyzed, using jxray (www.jxray.com), a heap dump of the JHS running with a 
big heap in a large cluster, handling large MapReduce jobs. The heap is large 
(over 32GB) and 21.4% of it is wasted due to various suboptimal Java 
collections, mostly maps and lists that are either empty or contain only one 
element. In such under-populated collections, a considerable amount of memory 
is still used by just the internal implementation objects. See the attached 
excerpt from the jxray report for the details. If certain collections are 
almost always empty, they should be initialized lazily. If others almost always 
have just 1 or 2 elements, they should be initialized with the appropriate 
initial capacity, which is much smaller than e.g. the default 16 for HashMap 
and 10 for ArrayList.

Based on the attached report, we should do the following:
 # {{FileSystemCounterGroup.map}} - initialize lazily
 # {{CompletedTask.attempts}} - initialize with capacity 2, given most tasks 
only have one or two attempts
 # {{JobHistoryParser$TaskInfo.attemptsMap}} - initialize with capacity 2
 # {{CompletedTaskAttempt.diagnostics}} - initialize with capacity 1 since it 
contains one diagnostic message most of the time.
 # {{CompletedTask.reportDiagnostics}} - switch to ArrayList (no reason to use 
the more wasteful LinkedList here) and initialize with capacity 1.


> Optimize collections used by Yarn JHS to reduce its memory
> --
>
> Key: YARN-8872
> URL: https://issues.apache.org/jira/browse/YARN-8872
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
>Priority: Major
> Attachments: jhs-bad-collections.png
>
>
> We analyzed, using jxray (www.jxray.com), a heap dump of the JHS running with 
> a big heap in a large cluster, handling large MapReduce jobs. The heap is 
> large (over 32GB) and 21.4% of it is wasted due to various suboptimal Java 
> collections, mostly maps and lists that are either empty or contain only one 
> element. In such under-populated collections, a considerable amount of memory 
> is still used by just the internal implementation objects. See the attached 
> excerpt from the jxray report for the details. If certain collections are 
> almost always empty, they should be initialized lazily. If others almost 
> always have just 1 or 2 elements, they should be initialized with the 
> appropriate initial capacity, which is much smaller than e.g. the default 16 
> for HashMap and 10 for ArrayList.
> Based on the attached report, we should do the following:
>  # {{FileSystemCounterGroup.map}} - initialize lazily
>  # {{CompletedTask.attempts}} - initialize with capacity 2, given most tasks 
> only have one or two attempts
>  # {{JobHistoryParser$TaskInfo.attemptsMap}} - initialize with capacity 2
>  # {{CompletedTaskAttempt.diagnostics}} - initialize with capacity 1 since it 
> contains one diagnostic message most of the time
>  # {{CompletedTask.reportDiagnostics}} - switch to ArrayList (no reason to 
> use the more wasteful LinkedList here) and initialize with capacity 1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: 

[jira] [Created] (YARN-8872) Optimize collections used by Yarn JHS to reduce its memory

2018-10-11 Thread Misha Dmitriev (JIRA)
Misha Dmitriev created YARN-8872:


 Summary: Optimize collections used by Yarn JHS to reduce its memory
 Key: YARN-8872
 URL: https://issues.apache.org/jira/browse/YARN-8872
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: yarn
Reporter: Misha Dmitriev
Assignee: Misha Dmitriev
 Attachments: jhs-bad-collections.png

We analyzed, using jxray (www.jxray.com), a heap dump of the JHS running with a 
big heap in a large cluster, handling large MapReduce jobs. The heap is large 
(over 32GB) and 21.4% of it is wasted due to various suboptimal Java 
collections, mostly maps and lists that are either empty or contain only one 
element. In such under-populated collections, a considerable amount of memory 
is still used by just the internal implementation objects. See the attached 
excerpt from the jxray report for the details. If certain collections are 
almost always empty, they should be initialized lazily. If others almost always 
have just 1 or 2 elements, they should be initialized with the appropriate 
initial capacity, which is much smaller than e.g. the default 16 for HashMap 
and 10 for ArrayList.

Based on the attached report, we should do the following (a rough sketch 
follows the list):
 # {{FileSystemCounterGroup.map}} - initialize lazily
 # {{CompletedTask.attempts}} - initialize with capacity 2, given most tasks 
only have one or two attempts
 # {{JobHistoryParser$TaskInfo.attemptsMap}} - initialize with capacity 2
 # {{CompletedTaskAttempt.diagnostics}} - initialize with capacity 1 since it 
contains one diagnostic message most of the time.
 # {{CompletedTask.reportDiagnostics}} - switch to ArrayList (no reason to use 
the more wasteful LinkedList here) and initialize with capacity 1.
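
To make the proposal concrete, here is a minimal Java sketch of the two 
patterns above (lazy initialization and explicit small capacities). The field 
names mirror the items listed, but the types are simplified assumptions for 
illustration, not the actual JHS classes:

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class CompletedTaskSketch {
  // most tasks have only one or two attempts, so size for 2 instead of
  // relying on HashMap's default capacity of 16
  private final Map<String, Object> attempts = new HashMap<>(2);

  // usually holds a single diagnostic message: ArrayList with capacity 1
  // instead of the more wasteful LinkedList
  private final List<String> reportDiagnostics = new ArrayList<>(1);

  // almost always empty, so create it lazily on first use
  private Map<String, Long> counterMap;

  void incrementCounter(String name, long value) {
    if (counterMap == null) {
      counterMap = new HashMap<>(2);  // allocated only when actually needed
    }
    counterMap.merge(name, value, Long::sum);
  }
}
{code}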



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2018-10-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647236#comment-16647236
 ] 

Hudson commented on YARN-5742:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15183 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15183/])
YARN-5742 Serve aggregated logs of historical apps from timeline (vrushali: rev 
8d1981806feb8278966c02a9eff42d72541bb35e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/AHSWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/YarnWebServiceParams.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogWebServiceUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/TimelineReaderServer.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/webapp/LogWebService.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/webapp/TestLogWebService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/webapp/TestAHSWebServices.java


> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2
>
> Attachments: YARN-5742-POC-v0.patch, YARN-5742.01.patch, 
> YARN-5742.02.patch, YARN-5742.03.patch, YARN-5742.04.patch, YARN-5742.v0.patch
>
>
> ATSv1.5 daemon has servlet to serve aggregated logs. But enabling only ATSv2, 
> does not serve logs from CLI and UI for completed application. Log serving 
> story has completely broken in ATSv2.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8569) Create an interface to provide cluster information to application

2018-10-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647030#comment-16647030
 ] 

Eric Yang edited comment on YARN-8569 at 10/12/18 12:43 AM:


[~leftnoteasy] Patch 12 includes the set_user fix, and adds the sysfs path name 
under nmPrivate/[appId]/sysfs.  In the current implementation, the sync makes n 
REST API calls to node managers, where n is less than the total number of 
bare-metal hosts hosting containers.  This reduces the network traffic required 
to keep the information in sync.  With the introduced sysfs prefix in the 
nmPrivate directory, it paves the way to add more than just app.json to the 
sysfs directory and prevents path traversal attacks.  

When information is populated into multiple files, there is a higher chance of 
a race condition, where state is changed in some files but not all.  A 
multiple-file population mechanism will require more thought to keep the 
information transactional.  The first version does not add container id or 
arbitrary filename support, in order to reduce transaction commits and ensure 
the information propagation is idempotent from the container's point of view.


was (Author: eyang):
[~leftnoteasy] . Patch 12 includes set_user fix, and added sysfs path name to 
nmPrivate/[appId]/sysfs.  In the current implementation, the sync will make n 
rest api call to node managers where n is less than total number of bare metal 
host that hosting containers.  This reduced required network traffic to keep 
information in sync.  With the introduced sysfs prefix in nmPrivate directory, 
it will pave ways to add more than just app.json to sysfs directory and prevent 
path traversal attack.  

When information is populated into other files, then there is higher chance of 
race condition, where state is changed in some files but not all files.  
Multiple files population mechanism will require more thoughts to keep the 
information transactional.  The first version does not add add container id, or 
arbitrary filename support to reduce the transaction commits and ensure the 
information propagation is idempotent from container point of view.

> Create an interface to provide cluster information to application
> -
>
> Key: YARN-8569
> URL: https://issues.apache.org/jira/browse/YARN-8569
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8569 YARN sysfs interface to provide cluster 
> information to application.pdf, YARN-8569.001.patch, YARN-8569.002.patch, 
> YARN-8569.003.patch, YARN-8569.004.patch, YARN-8569.005.patch, 
> YARN-8569.006.patch, YARN-8569.007.patch, YARN-8569.008.patch, 
> YARN-8569.009.patch, YARN-8569.010.patch, YARN-8569.011.patch, 
> YARN-8569.012.patch
>
>
> Some programs require container hostnames to be known for the application to 
> run.  For example, distributed TensorFlow requires a launch_command that 
> looks like:
> {code}
> # On ps0.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=ps --task_index=0
> # On ps1.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=ps --task_index=1
> # On worker0.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=worker --task_index=0
> # On worker1.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=worker --task_index=1
> {code}
> This is a bit cumbersome to orchestrate via Distributed Shell or the YARN 
> services launch_command.  In addition, the dynamic parameters do not work 
> with the YARN flex command.  This is the classic pain point for application 
> developers attempting to automate system environment settings as parameters 
> to the end user application.
> It would be great if YARN Docker integration could provide a simple option to 
> expose hostnames of the yarn service via a mounted file.  The file content 
> gets updated when a flex command is performed.  This allows application 
> developers to consume system environment settings via a standard interface.  
> It is like /proc/devices for Linux, but for Hadoop.  This may involve 
> updating a file in the distributed cache, and allowing mounting of the file 
> via container-executor.
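
As a rough illustration of how an application inside a container might consume 
such an interface, here is a minimal Java sketch. The mount point 
/hadoop/yarn/sysfs/app.json and the JSON layout are assumptions for 
illustration; only the NodeManager-side nmPrivate/[appId]/sysfs layout is 
discussed above:

{code}
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SysfsReaderSketch {
  public static void main(String[] args) throws Exception {
    // Assumed mount point inside the container (hypothetical path).
    Path appJson = Paths.get("/hadoop/yarn/sysfs/app.json");
    // The file content gets refreshed when a flex command changes the
    // component membership, so the application can simply re-read it.
    String spec = new String(Files.readAllBytes(appJson),
        StandardCharsets.UTF_8);
    System.out.println(spec);  // parse with any JSON library to get hostnames
  }
}
{code}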



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (YARN-8448) AM HTTPS Support

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647227#comment-16647227
 ] 

Hadoop QA commented on YARN-8448:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m  4s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  
9s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 17m 52s{color} | 
{color:red} root generated 1 new + 9 unchanged - 0 fixed = 10 total (was 9) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
52s{color} | {color:green} root generated 0 new + 1317 unchanged - 10 fixed = 
1317 total (was 1327) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 48s{color} | {color:orange} root: The patch generated 9 new + 595 unchanged 
- 8 fixed = 604 total (was 603) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 29s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
49s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
18s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 34s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-yarn-server-web-proxy in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 53s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
41s{color} | {color:red} The 

[jira] [Updated] (YARN-8869) YARN Service Client might not work correctly with RM REST API for Kerberos authentication

2018-10-11 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8869:

Attachment: YARN-8869.002.patch

> YARN Service Client might not work correctly with RM REST API for Kerberos 
> authentication
> -
>
> Key: YARN-8869
> URL: https://issues.apache.org/jira/browse/YARN-8869
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-8869.001.patch, YARN-8869.002.patch
>
>
> ApiServiceClient uses WebResource instead of Builder to pass Kerberos 
> authorization header.  This may not work in some conditions, because 
> WebResource.header() returns a new Builder instance rather than modifying the 
> WebResource, so a header set this way can be silently dropped.  This article 
> explains the details: 
> https://juristr.com/blog/2015/05/jersey-webresource-ignores-headers/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8869) YARN Service Client might not work correctly with RM REST API for Kerberos authentication

2018-10-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647228#comment-16647228
 ] 

Eric Yang commented on YARN-8869:
-

Patch 002 added the test case and refactored the code to make it unit-testable.

> YARN Service Client might not work correctly with RM REST API for Kerberos 
> authentication
> -
>
> Key: YARN-8869
> URL: https://issues.apache.org/jira/browse/YARN-8869
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-8869.001.patch, YARN-8869.002.patch
>
>
> ApiServiceClient uses WebResource instead of Builder to pass Kerberos 
> authorization header.  This may not work in some conditions, because 
> WebResource.header() returns a new Builder instance rather than modifying the 
> WebResource, so a header set this way can be silently dropped.  This article 
> explains the details: 
> https://juristr.com/blog/2015/05/jersey-webresource-ignores-headers/
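
For context, a minimal sketch of the pattern the fix moves toward, assuming 
Jersey 1.x; the method below is illustrative, not the actual ApiServiceClient 
code. The key point is to chain the request on the Builder that header() 
returns instead of calling further methods on the WebResource:

{code}
import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.WebResource;

public class NegotiateHeaderSketch {
  // Issues a GET with the Kerberos/SPNEGO token attached.
  static ClientResponse get(Client client, String url, String token) {
    WebResource resource = client.resource(url);
    // header() returns a new WebResource.Builder; the request must be made
    // from that Builder, otherwise the Authorization header is lost.
    WebResource.Builder builder =
        resource.header("Authorization", "Negotiate " + token);
    return builder.get(ClientResponse.class);
  }
}
{code}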



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8869) YARN Service Client might not work correctly with RM REST API for Kerberos authentication

2018-10-11 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8869:

Attachment: (was: YARN-8869.001.patch)

> YARN Service Client might not work correctly with RM REST API for Kerberos 
> authentication
> -
>
> Key: YARN-8869
> URL: https://issues.apache.org/jira/browse/YARN-8869
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-8869.001.patch
>
>
> ApiServiceClient uses WebResource instead of Builder to pass Kerberos 
> authorization header.  This may not work in some conditions, because 
> WebResource.header() returns a new Builder instance rather than modifying the 
> WebResource, so a header set this way can be silently dropped.  This article 
> explains the details: 
> https://juristr.com/blog/2015/05/jersey-webresource-ignores-headers/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8869) YARN Service Client might not work correctly with RM REST API for Kerberos authentication

2018-10-11 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8869:

Attachment: YARN-8869.001.patch

> YARN Service Client might not work correctly with RM REST API for Kerberos 
> authentication
> -
>
> Key: YARN-8869
> URL: https://issues.apache.org/jira/browse/YARN-8869
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-8869.001.patch
>
>
> ApiServiceClient uses WebResource instead of Builder to pass Kerberos 
> authorization header.  This may not work in some conditions, because 
> WebResource.header() returns a new Builder instance rather than modifying the 
> WebResource, so a header set this way can be silently dropped.  This article 
> explains the details: 
> https://juristr.com/blog/2015/05/jersey-webresource-ignores-headers/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8569) Create an interface to provide cluster information to application

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647222#comment-16647222
 ] 

Hadoop QA commented on YARN-8569:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 44s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 146 unchanged - 1 fixed = 146 total (was 147) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m  
5s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m  
6s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} 

[jira] [Commented] (YARN-8448) AM HTTPS Support

2018-10-11 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647209#comment-16647209
 ] 

Haibo Chen commented on YARN-8448:
--

Thanks [~rkanter] for the patch update! Posting some of my comments while I am 
still finishing up looking at the C code changes and the ProxyCA code.

1) The YARN-specific secret keys should probably be moved to yarn-modules 
(hadoop-yarn-server-common seems a good place), instead of being added to 
hadoop-common.

2) In KeyStoreTestUtil.bytesToKeyStore(), should we use a try clause for the 
input stream (see the sketch after this list)?

3) In YarnConfiguration and yarn-default.xml, can we rephrase the comments of 
the new configuration? "Sets the policy the RM should use when enforcing HTTPS 
...". => "Specifies what RM does to enforce HTTPS..."

For 'LENIENT': RM would always generate the key/trust store regardless of what 
URL the AM sends to RM, if the policy is LENIENT or STRICT. In fact, that 
happens before the AM is even launched. It is probably more accurate to say 
something along the lines of "RM will generate and provide to AMs a keystore 
and truststore, which AMs are free to use to set up HTTPS in their tracking web 
server. The RM webproxy will always connect users to AMs even if they use 
HTTP."

Similarly for 'STRICT': "RM will always generate and provide a keystore and 
truststore for AMs. AMs are free to use the keystore and truststore to set up 
HTTPS in their tracking web server. However, the RM webproxy will block users 
from accessing any AM web server that runs over HTTP."

4)  How about we move "KEYSTORE_FILE_LOCATION", "KEYSTORE_PASSWORD", 
TRUSTSTORE_FILE_LOCATION and TRUSTSTORE_PASSWORD to ApplicationConstants?

5) In DefaultLinuxContainerRuntime and DockerLinuxContainerRuntime, can we do 
null-checking for both keystore and truststore to be more defensive?

6) testLaunchContainerCopyFiles(boolean https) has a lot of if-statements, 
which I think justifies having two different methods, each calling some utility 
methods. Can you try to break it into two? Likewise for 
testContainerLaunch(boolean https).
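
Regarding 2), a minimal sketch of the try-with-resources shape I have in mind; 
the signature is illustrative and may differ from the actual KeyStoreTestUtil 
helper:

{code}
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.security.KeyStore;

public final class KeyStoreLoadSketch {
  // Load a KeyStore from raw bytes, closing the stream even if load() throws.
  static KeyStore bytesToKeyStore(byte[] bytes, String password)
      throws Exception {
    KeyStore keyStore = KeyStore.getInstance(KeyStore.getDefaultType());
    try (InputStream in = new ByteArrayInputStream(bytes)) {
      keyStore.load(in, password.toCharArray());
    }
    return keyStore;
  }
}
{code}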

 


 

> AM HTTPS Support
> 
>
> Key: YARN-8448
> URL: https://issues.apache.org/jira/browse/YARN-8448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: YARN-8448.001.patch, YARN-8448.002.patch, 
> YARN-8448.003.patch, YARN-8448.004.patch, YARN-8448.005.patch, 
> YARN-8448.006.patch, YARN-8448.007.patch, YARN-8448.008.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8448) AM HTTPS Support

2018-10-11 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647210#comment-16647210
 ] 

Haibo Chen commented on YARN-8448:
--

I'll continue the review tomorrow and post remaining comments, if any.

> AM HTTPS Support
> 
>
> Key: YARN-8448
> URL: https://issues.apache.org/jira/browse/YARN-8448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: YARN-8448.001.patch, YARN-8448.002.patch, 
> YARN-8448.003.patch, YARN-8448.004.patch, YARN-8448.005.patch, 
> YARN-8448.006.patch, YARN-8448.007.patch, YARN-8448.008.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8842) Update QueueMetrics with custom resource values

2018-10-11 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647197#comment-16647197
 ] 

Wilfred Spiegelenburg commented on YARN-8842:
-

Thank you for the updated patch.
I am OK with the patch as it is now (v9), +1 (non-binding).

There are two minor checkstyle issues which could be fixed on checkin:
* Class Queue should be declared as final. [FinalClass]
* Name 'conf' must match pattern '^[A-Z][A-Z0-9]*(_[A-Z0-9]+)*$'. [ConstantName]

> Update QueueMetrics with custom resource values 
> 
>
> Key: YARN-8842
> URL: https://issues.apache.org/jira/browse/YARN-8842
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8842.001.patch, YARN-8842.002.patch, 
> YARN-8842.003.patch, YARN-8842.004.patch, YARN-8842.005.patch, 
> YARN-8842.006.patch, YARN-8842.007.patch, YARN-8842.008.patch, 
> YARN-8842.009.patch
>
>
> This is the 2nd dependent jira of YARN-8059.
> As updating the metrics is an independent step from handling preemption, this 
> jira only deals with the queue metrics update of custom resources.
> The following metrics should be updated (a rough sketch follows the list): 
> * allocated resources
> * available resources
> * pending resources
> * reserved resources
> * aggregate seconds preempted
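
A rough sketch of the per-custom-resource bookkeeping this implies; the class 
and method names are invented for illustration and are not the actual 
QueueMetrics API:

{code}
import java.util.HashMap;
import java.util.Map;

class CustomResourceMetricsSketch {
  // one running total per custom resource type, e.g. "gpu" -> 4
  private final Map<String, Long> allocated = new HashMap<>();

  // called when a container carrying custom resources is allocated
  void incrAllocated(Map<String, Long> containerResources) {
    containerResources.forEach(
        (name, value) -> allocated.merge(name, value, Long::sum));
  }

  // called when the container is released, keeping the totals in step
  void decrAllocated(Map<String, Long> containerResources) {
    containerResources.forEach(
        (name, value) -> allocated.merge(name, -value, Long::sum));
  }
}
{code}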



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2018-10-11 Thread Vrushali C (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5742:
-
Fix Version/s: 3.1.2
   3.0.3
   3.2.0

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2
>
> Attachments: YARN-5742-POC-v0.patch, YARN-5742.01.patch, 
> YARN-5742.02.patch, YARN-5742.03.patch, YARN-5742.04.patch, YARN-5742.v0.patch
>
>
> ATSv1.5 daemon has servlet to serve aggregated logs. But enabling only ATSv2, 
> does not serve logs from CLI and UI for completed application. Log serving 
> story has completely broken in ATSv2.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2018-10-11 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647176#comment-16647176
 ] 

Vrushali C edited comment on YARN-5742 at 10/11/18 11:27 PM:
-

Committed to trunk as part of 
https://github.com/apache/hadoop/commit/8d1981806feb8278966c02a9eff42d72541bb35e

Should this go to branch-2 too? 
[~rohithsharma] do you want to check the fix versions that I have set too?


was (Author: vrushalic):
Committed to trunk as part of 
https://github.com/apache/hadoop/commit/8d1981806feb8278966c02a9eff42d72541bb35e

Should this go to branch-2 too? 
[~rohithsharma] do you want to check the release versions that I have set too

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2
>
> Attachments: YARN-5742-POC-v0.patch, YARN-5742.01.patch, 
> YARN-5742.02.patch, YARN-5742.03.patch, YARN-5742.04.patch, YARN-5742.v0.patch
>
>
> ATSv1.5 daemon has servlet to serve aggregated logs. But enabling only ATSv2, 
> does not serve logs from CLI and UI for completed application. Log serving 
> story has completely broken in ATSv2.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2018-10-11 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647176#comment-16647176
 ] 

Vrushali C commented on YARN-5742:
--

Committed to trunk as part of 
https://github.com/apache/hadoop/commit/8d1981806feb8278966c02a9eff42d72541bb35e

Should this go to branch-2 too? 
[~rohithsharma] do you want to check the release versions that I have set too?

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>Priority: Critical
> Fix For: 3.2.0, 3.0.3, 3.1.2
>
> Attachments: YARN-5742-POC-v0.patch, YARN-5742.01.patch, 
> YARN-5742.02.patch, YARN-5742.03.patch, YARN-5742.04.patch, YARN-5742.v0.patch
>
>
> ATSv1.5 daemon has servlet to serve aggregated logs. But enabling only ATSv2, 
> does not serve logs from CLI and UI for completed application. Log serving 
> story has completely broken in ATSv2.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8871) Documentation updates for YARN-5742 Serve aggregated logs of historical apps from timeline service

2018-10-11 Thread Vrushali C (JIRA)
Vrushali C created YARN-8871:


 Summary: Documentation updates for YARN-5742 Serve aggregated logs 
of historical apps from timeline service
 Key: YARN-8871
 URL: https://issues.apache.org/jira/browse/YARN-8871
 Project: Hadoop YARN
  Issue Type: Task
Reporter: Vrushali C



YARN-5742 allows serving aggregated logs of historical apps from timeline 
service v2. We need documentation updates for that. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8448) AM HTTPS Support

2018-10-11 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647168#comment-16647168
 ] 

Robert Kanter commented on YARN-8448:
-

I see, thanks for the details [~jlowe] - it's been a while since I've done much 
C coding.  I'll be sure to fix this in the next update to the patch.

> AM HTTPS Support
> 
>
> Key: YARN-8448
> URL: https://issues.apache.org/jira/browse/YARN-8448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: YARN-8448.001.patch, YARN-8448.002.patch, 
> YARN-8448.003.patch, YARN-8448.004.patch, YARN-8448.005.patch, 
> YARN-8448.006.patch, YARN-8448.007.patch, YARN-8448.008.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2018-10-11 Thread Vrushali C (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647166#comment-16647166
 ] 

Vrushali C commented on YARN-5742:
--

+1, committing shortly.

We discussed this today in the weekly call for ATS. Will file a JIRA for 
documentation updates. 

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5742-POC-v0.patch, YARN-5742.01.patch, 
> YARN-5742.02.patch, YARN-5742.03.patch, YARN-5742.04.patch, YARN-5742.v0.patch
>
>
> ATSv1.5 daemon has servlet to serve aggregated logs. But enabling only ATSv2, 
> does not serve logs from CLI and UI for completed application. Log serving 
> story has completely broken in ATSv2.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8842) Update QueueMetrics with custom resource values

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647160#comment-16647160
 ] 

Hadoop QA commented on YARN-8842:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 31s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 15 new + 185 unchanged - 2 fixed = 200 total (was 187) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
21s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 49s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 11s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8842 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943495/YARN-8842.009.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux afb0a586ff68 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2addebb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (YARN-8448) AM HTTPS Support

2018-10-11 Thread Jason Lowe (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647131#comment-16647131
 ] 

Jason Lowe commented on YARN-8448:
--

bq. The cc warning isn't a problem.

IMHO the warning should be fixed.  If it's a function argument and not intended 
to be a format string, then the code should be calling fwrite instead of 
fprintf.  That should fix the warning and prevent a potential crash if someone 
in the future accidentally passes an argument that contains a format directive, 
thinking the contents will not be interpreted.


> AM HTTPS Support
> 
>
> Key: YARN-8448
> URL: https://issues.apache.org/jira/browse/YARN-8448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: YARN-8448.001.patch, YARN-8448.002.patch, 
> YARN-8448.003.patch, YARN-8448.004.patch, YARN-8448.005.patch, 
> YARN-8448.006.patch, YARN-8448.007.patch, YARN-8448.008.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8582) Documentation for AM HTTPS Support

2018-10-11 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647100#comment-16647100
 ] 

Robert Kanter commented on YARN-8582:
-

The 002 patch:
 * Rebased on the latest trunk
 * Renamed OFF, OPTIONAL, REQUIRED to NONE, LENIENT, and STRICT as per YARN-8448

> Documentation for AM HTTPS Support
> --
>
> Key: YARN-8582
> URL: https://issues.apache.org/jira/browse/YARN-8582
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: docs
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: YARN-8582.001.patch, YARN-8582.002.patch
>
>
> Documentation for YARN-6586.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8582) Documentation for AM HTTPS Support

2018-10-11 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-8582:

Attachment: YARN-8582.002.patch

> Documentation for AM HTTPS Support
> --
>
> Key: YARN-8582
> URL: https://issues.apache.org/jira/browse/YARN-8582
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: docs
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: YARN-8582.001.patch, YARN-8582.002.patch
>
>
> Documentation for YARN-6586.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3854) Add localization support for docker images

2018-10-11 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647081#comment-16647081
 ] 

Eric Badger commented on YARN-3854:
---

The proposal in pdf 002 looks good to me

> Add localization support for docker images
> --
>
> Key: YARN-3854
> URL: https://issues.apache.org/jira/browse/YARN-3854
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
> Attachments: Localization Support For Docker Images.pdf, Localization 
> Support For Docker Images_002.pdf, YARN-3854-branch-2.8.001.patch, 
> YARN-3854_Localization_support_for_Docker_image_v1.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v2.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v3.pdf
>
>
> We need the ability to localize docker images when those images aren't 
> already available locally. There are various approaches that could be used 
> here with different trade-offs/issues: image archives on HDFS + docker load, 
> docker pull during the localization phase, or (automatic) docker pull during 
> the run/launch phase. 
> We also need the ability to clean-up old/stale, unused images. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8869) YARN Service Client might not work correctly with RM REST API for Kerberos authentication

2018-10-11 Thread Vinod Kumar Vavilapalli (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated YARN-8869:
--
Issue Type: Bug  (was: Improvement)
   Summary: YARN Service Client might not work correctly with RM REST API 
for Kerberos authentication  (was: YARN client might not work correctly with RM 
Rest API for Kerberos authentication)

> YARN Service Client might not work correctly with RM REST API for Kerberos 
> authentication
> -
>
> Key: YARN-8869
> URL: https://issues.apache.org/jira/browse/YARN-8869
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.2.0, 3.1.1
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-8869.001.patch
>
>
> ApiServiceClient uses WebResource instead of Builder to pass the Kerberos 
> authorization header.  This may not work sometimes, because 
> WebResource.header() can bind the header to a new Builder instance under 
> some conditions.  This article explains the details: 
> https://juristr.com/blog/2015/05/jersey-webresource-ignores-headers/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8569) Create an interface to provide cluster information to application

2018-10-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16647030#comment-16647030
 ] 

Eric Yang commented on YARN-8569:
-

[~leftnoteasy], patch 12 includes the set_user fix, and adds the sysfs path 
name to nmPrivate/[appId]/sysfs.  In the current implementation, the sync will 
make n REST API calls to node managers, where n is less than the total number 
of bare metal hosts hosting containers.  This reduces the network traffic 
required to keep the information in sync.  With the introduced sysfs prefix in 
the nmPrivate directory, it paves the way to add more than just app.json to 
the sysfs directory and prevents path traversal attacks.

When information is populated into multiple files, there is a higher chance of 
a race condition, where state is changed in some files but not all.  A 
multiple-file population mechanism will require more thought to keep the 
information transactional.  The first version does not add container id or 
arbitrary filename support, in order to reduce transaction commits and ensure 
that the information propagation is idempotent from the container's point of 
view.
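
A minimal sketch (not from the patch) of how a containerized application could 
consume the synced content; the mount path /hadoop/yarn/sysfs/app.json and the 
class name are assumptions for illustration:
{code:java}
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class SysfsReader {
  public static void main(String[] args) throws Exception {
    // Read the cluster spec that YARN keeps in sync inside the container.
    String json = new String(
        Files.readAllBytes(Paths.get("/hadoop/yarn/sysfs/app.json")),
        StandardCharsets.UTF_8);
    // Parse with any JSON library to get component hostnames etc., instead
    // of receiving them as launch_command parameters.
    System.out.println(json);
  }
}
{code}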

> Create an interface to provide cluster information to application
> -
>
> Key: YARN-8569
> URL: https://issues.apache.org/jira/browse/YARN-8569
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8569 YARN sysfs interface to provide cluster 
> information to application.pdf, YARN-8569.001.patch, YARN-8569.002.patch, 
> YARN-8569.003.patch, YARN-8569.004.patch, YARN-8569.005.patch, 
> YARN-8569.006.patch, YARN-8569.007.patch, YARN-8569.008.patch, 
> YARN-8569.009.patch, YARN-8569.010.patch, YARN-8569.011.patch, 
> YARN-8569.012.patch
>
>
> Some programs require container hostnames to be known for the application to 
> run.  For example, distributed TensorFlow requires a launch_command that 
> looks like:
> {code}
> # On ps0.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=ps --task_index=0
> # On ps1.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=ps --task_index=1
> # On worker0.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=worker --task_index=0
> # On worker1.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=worker --task_index=1
> {code}
> This is a bit cumbersome to orchestrate via Distributed Shell or the YARN 
> services launch_command.  In addition, the dynamic parameters do not work 
> with the YARN flex command.  This is the classic pain point for application 
> developers attempting to automate system environment settings as parameters 
> to the end-user application.
> It would be great if the YARN Docker integration could provide a simple 
> option to expose the hostnames of the YARN service via a mounted file.  The 
> file content gets updated when a flex command is performed.  This allows 
> application developers to consume system environment settings via a standard 
> interface.  It is like /proc/devices for Linux, but for Hadoop.  This may 
> involve updating a file in the distributed cache, and allowing mounting of 
> the file via container-executor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3854) Add localization support for docker images

2018-10-11 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-3854:

Attachment: Localization Support For Docker Images_002.pdf

> Add localization support for docker images
> --
>
> Key: YARN-3854
> URL: https://issues.apache.org/jira/browse/YARN-3854
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
> Attachments: Localization Support For Docker Images.pdf, Localization 
> Support For Docker Images_002.pdf, YARN-3854-branch-2.8.001.patch, 
> YARN-3854_Localization_support_for_Docker_image_v1.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v2.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v3.pdf
>
>
> We need the ability to localize docker images when those images aren't 
> already available locally. There are various approaches that could be used 
> here with different trade-offs/issues: image archives on HDFS + docker load, 
> docker pull during the localization phase, or (automatic) docker pull during 
> the run/launch phase. 
> We also need the ability to clean up old/stale, unused images. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8569) Create an interface to provide cluster information to application

2018-10-11 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8569:

Attachment: (was: YARN-8569.012.patch)

> Create an interface to provide cluster information to application
> -
>
> Key: YARN-8569
> URL: https://issues.apache.org/jira/browse/YARN-8569
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8569 YARN sysfs interface to provide cluster 
> information to application.pdf, YARN-8569.001.patch, YARN-8569.002.patch, 
> YARN-8569.003.patch, YARN-8569.004.patch, YARN-8569.005.patch, 
> YARN-8569.006.patch, YARN-8569.007.patch, YARN-8569.008.patch, 
> YARN-8569.009.patch, YARN-8569.010.patch, YARN-8569.011.patch, 
> YARN-8569.012.patch
>
>
> Some programs require container hostnames to be known for the application to 
> run.  For example, distributed TensorFlow requires a launch_command that 
> looks like:
> {code}
> # On ps0.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=ps --task_index=0
> # On ps1.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=ps --task_index=1
> # On worker0.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=worker --task_index=0
> # On worker1.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=worker --task_index=1
> {code}
> This is a bit cumbersome to orchestrate via Distributed Shell or the YARN 
> services launch_command.  In addition, the dynamic parameters do not work 
> with the YARN flex command.  This is the classic pain point for application 
> developers attempting to automate system environment settings as parameters 
> to the end-user application.
> It would be great if the YARN Docker integration could provide a simple 
> option to expose the hostnames of the YARN service via a mounted file.  The 
> file content gets updated when a flex command is performed.  This allows 
> application developers to consume system environment settings via a standard 
> interface.  It is like /proc/devices for Linux, but for Hadoop.  This may 
> involve updating a file in the distributed cache, and allowing mounting of 
> the file via container-executor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8569) Create an interface to provide cluster information to application

2018-10-11 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8569:

Attachment: YARN-8569.012.patch

> Create an interface to provide cluster information to application
> -
>
> Key: YARN-8569
> URL: https://issues.apache.org/jira/browse/YARN-8569
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8569 YARN sysfs interface to provide cluster 
> information to application.pdf, YARN-8569.001.patch, YARN-8569.002.patch, 
> YARN-8569.003.patch, YARN-8569.004.patch, YARN-8569.005.patch, 
> YARN-8569.006.patch, YARN-8569.007.patch, YARN-8569.008.patch, 
> YARN-8569.009.patch, YARN-8569.010.patch, YARN-8569.011.patch, 
> YARN-8569.012.patch
>
>
> Some programs require container hostnames to be known for the application to 
> run.  For example, distributed TensorFlow requires a launch_command that 
> looks like:
> {code}
> # On ps0.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=ps --task_index=0
> # On ps1.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=ps --task_index=1
> # On worker0.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=worker --task_index=0
> # On worker1.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=worker --task_index=1
> {code}
> This is a bit cumbersome to orchestrate via Distributed Shell or the YARN 
> services launch_command.  In addition, the dynamic parameters do not work 
> with the YARN flex command.  This is the classic pain point for application 
> developers attempting to automate system environment settings as parameters 
> to the end-user application.
> It would be great if the YARN Docker integration could provide a simple 
> option to expose the hostnames of the YARN service via a mounted file.  The 
> file content gets updated when a flex command is performed.  This allows 
> application developers to consume system environment settings via a standard 
> interface.  It is like /proc/devices for Linux, but for Hadoop.  This may 
> involve updating a file in the distributed cache, and allowing mounting of 
> the file via container-executor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8569) Create an interface to provide cluster information to application

2018-10-11 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8569:

Attachment: YARN-8569.012.patch

> Create an interface to provide cluster information to application
> -
>
> Key: YARN-8569
> URL: https://issues.apache.org/jira/browse/YARN-8569
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8569 YARN sysfs interface to provide cluster 
> information to application.pdf, YARN-8569.001.patch, YARN-8569.002.patch, 
> YARN-8569.003.patch, YARN-8569.004.patch, YARN-8569.005.patch, 
> YARN-8569.006.patch, YARN-8569.007.patch, YARN-8569.008.patch, 
> YARN-8569.009.patch, YARN-8569.010.patch, YARN-8569.011.patch, 
> YARN-8569.012.patch
>
>
> Some programs require container hostnames to be known for the application to 
> run.  For example, distributed TensorFlow requires a launch_command that 
> looks like:
> {code}
> # On ps0.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=ps --task_index=0
> # On ps1.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=ps --task_index=1
> # On worker0.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=worker --task_index=0
> # On worker1.example.com:
> $ python trainer.py \
>  --ps_hosts=ps0.example.com:,ps1.example.com: \
>  --worker_hosts=worker0.example.com:,worker1.example.com: \
>  --job_name=worker --task_index=1
> {code}
> This is a bit cumbersome to orchestrate via Distributed Shell or the YARN 
> services launch_command.  In addition, the dynamic parameters do not work 
> with the YARN flex command.  This is the classic pain point for application 
> developers attempting to automate system environment settings as parameters 
> to the end-user application.
> It would be great if the YARN Docker integration could provide a simple 
> option to expose the hostnames of the YARN service via a mounted file.  The 
> file content gets updated when a flex command is performed.  This allows 
> application developers to consume system environment settings via a standard 
> interface.  It is like /proc/devices for Linux, but for Hadoop.  This may 
> involve updating a file in the distributed cache, and allowing mounting of 
> the file via container-executor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8860) Federation client intercepter class contains unwanted character

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646927#comment-16646927
 ] 

Hadoop QA commented on YARN-8860:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
3s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
31s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8860 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943476/YARN-8860.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0929560e8903 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 96d28b4 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22157/testReport/ |
| Max. process+thread count | 716 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22157/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |



[jira] [Commented] (YARN-8861) executorLock is misleading in ContainerLaunch

2018-10-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646883#comment-16646883
 ] 

Hudson commented on YARN-8861:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15180 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15180/])
YARN-8861. executorLock is misleading in ContainerLaunch. Contributed by 
(jlowe: rev e787d65a08f5d5245d2313fc34f2dde518bfaa5b)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java


> executorLock is misleading in ContainerLaunch
> -
>
> Key: YARN-8861
> URL: https://issues.apache.org/jira/browse/YARN-8861
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Trivial
>  Labels: docker
> Fix For: 3.2.0
>
> Attachments: YARN-8861.001.patch
>
>
> The name of the variable {{executorLock}} is confusing. Rename it to 
> {{launchLock}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8777) Container Executor C binary change to execute interactive docker command

2018-10-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646886#comment-16646886
 ] 

Hudson commented on YARN-8777:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15180 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15180/])
YARN-8777. Container Executor C binary change to execute interactive (billie: 
rev 96d28b4750f1088f2a10c83991b5a8149c97ef86)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/utils/docker-util.h
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test_docker_util.cc
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/util.h


> Container Executor C binary change to execute interactive docker command
> 
>
> Key: YARN-8777
> URL: https://issues.apache.org/jira/browse/YARN-8777
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Fix For: 3.3.0
>
> Attachments: YARN-8777.001.patch, YARN-8777.002.patch, 
> YARN-8777.003.patch, YARN-8777.004.patch, YARN-8777.005.patch, 
> YARN-8777.006.patch, YARN-8777.007.patch, YARN-8777.008.patch
>
>
> Since Container Executor provides Container execution using the native 
> container-executor binary, we also need to make changes to accept the new 
> “dockerExec” method, which invokes the corresponding native function to 
> execute the docker exec command against the running container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8710) Service AM should set a finite limit on NM container max retries

2018-10-11 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646884#comment-16646884
 ] 

Hudson commented on YARN-8710:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15180 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15180/])
YARN-8710. Service AM should set a finite limit on NM container max (billie: 
rev aeeb0389a58bf6bb4857bc0f246a63a4bd018d51)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/YarnServiceConf.java


> Service AM should set a finite limit on NM container max retries 
> -
>
> Key: YARN-8710
> URL: https://issues.apache.org/jira/browse/YARN-8710
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8710.1.patch, YARN-8710.2.patch
>
>
> Container retries are currently set to a default of -1 in 
> AbstractProviderService.buildContainerRetry. If this is not overridden via 
> the service spec with a finite value for 
> yarn.service.container-failure.retry.max, this causes infinite NM retries 
> for the container under the ALWAYS/ON_FAILURE restart policies. Ideally it 
> should try a finite number of times on the same NM, and subsequently the 
> Service AM can retry on another node.
> We can set this to a default value of 3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7274) Ability to disable elasticity at leaf queue level

2018-10-11 Thread Eric Payne (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-7274:
-
Fix Version/s: 3.0.4
   2.8.5
   2.9.2
   2.10.0

> Ability to disable elasticity at leaf queue level
> -
>
> Key: YARN-7274
> URL: https://issues.apache.org/jira/browse/YARN-7274
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Scott Brokaw
>Assignee: Zian Chen
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 2.9.2, 2.8.5, 3.0.4
>
> Attachments: YARN-7274.2.patch, YARN-7274.wip.1.patch
>
>
> The 
> [documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html]
>  defines yarn.scheduler.capacity..maximum-capacity as "Maximum 
> queue capacity in percentage (%) as a float. This limits the elasticity for 
> applications in the queue. Defaults to -1 which disables it."
> However, setting this value to -1 sets maximum capacity to 100% but I thought 
> (perhaps incorrectly) that the intention of the -1 setting is that it would 
> disable elasticity.  This is confirmed looking at the code:
> {code:java}
> public static final float MAXIMUM_CAPACITY_VALUE = 100;
> public static final float DEFAULT_MAXIMUM_CAPACITY_VALUE = -1.0f;
> ..
> maxCapacity = (maxCapacity == DEFAULT_MAXIMUM_CAPACITY_VALUE) ? 
> MAXIMUM_CAPACITY_VALUE : maxCapacity;
> {code}
> The sum of yarn.scheduler.capacity..capacity for all queues, at 
> each level, must be equal to 100 but for 
> yarn.scheduler.capacity..maximum-capacity this value is actually 
> a percentage of the entire cluster not just the parent queue.  Yet it can not 
> be set lower then the leaf queue's capacity setting. This seems to make it 
> impossible to disable elasticity at a leaf queue level.
> This improvement is proposing that YARN have the ability to have elasticity 
> disabled at a leaf queue level even if a parent queue permits elasticity by 
> having a yarn.scheduler.capacity..maximum-capacity greater than 
> its yarn.scheduler.capacity..capacity



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8775) TestDiskFailures.testLocalDirsFailures sometimes can fail on concurrent File modifications

2018-10-11 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646855#comment-16646855
 ] 

Haibo Chen commented on YARN-8775:
--

I see.  That is indeed an issue.

Thinking about this a bit, a sub-optimal way to disable the periodic disk 
check would be to set DISK_HEALTH_CHECK_INTERVAL to a sufficiently large 
number (say 1 day). We would still enable the disk checker, but the periodic 
check would be postponed beyond the test lifetime and hence never executed.

We can then make checkDirs() public and call it explicitly from the unit 
tests, as in the sketch below.
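
A rough sketch of that approach, assuming checkDirs() is made public (the 
interval key already exists as yarn.nodemanager.disk-health-checker.interval-ms):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService;

public class DiskCheckSketch {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    // Postpone the periodic check far beyond the test lifetime (1 day), so
    // only explicit calls run it.
    conf.setLong(YarnConfiguration.NM_DISK_HEALTH_CHECK_INTERVAL_MS,
        24L * 60 * 60 * 1000);
    LocalDirsHandlerService dirsHandler = new LocalDirsHandlerService();
    dirsHandler.init(conf);
    dirsHandler.start();
    // ... make a local dir fail here, then trigger the check deterministically:
    dirsHandler.checkDirs(); // assumes checkDirs() has been made public
  }
}
{code}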

> TestDiskFailures.testLocalDirsFailures sometimes can fail on concurrent File 
> modifications
> --
>
> Key: YARN-8775
> URL: https://issues.apache.org/jira/browse/YARN-8775
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, yarn
>Affects Versions: 3.0.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Major
> Attachments: YARN-8775.001.patch, YARN-8775.002.patch
>
>
> The test can fail sometimes when file operations were done during the check 
> done by the thread in _LocalDirsHandlerService._
> {code:java}
> java.lang.AssertionError: NodeManager could not identify disk failure.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:239)
>   at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:202)
>   at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testLocalDirsFailures(TestDiskFailures.java:99)
> Stderr
> 2018-09-13 08:21:49,822 INFO [main] server.TestDiskFailures 
> (TestDiskFailures.java:prepareDirToFail(277)) - Prepared 
> /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1
>  to fail.
> 2018-09-13 08:21:49,823 INFO [main] server.TestDiskFailures 
> (TestDiskFailures.java:prepareDirToFail(277)) - Prepared 
> /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3
>  to fail.
> 2018-09-13 08:21:49,823 WARN [DiskHealthMonitor-Timer] 
> nodemanager.DirectoryCollection (DirectoryCollection.java:checkDirs(283)) - 
> Directory 
> /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1
>  error, Not a directory: 
> /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1,
>  removing from list of valid directories
> 2018-09-13 08:21:49,824 WARN [DiskHealthMonitor-Timer] 
> localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:initializeLogDir(1329)) - Could not 
> initialize log dir 
> /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3
> java.io.FileNotFoundException: Destination exists and is not a directory: 
> /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:515)
> at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:496)
> at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1081)
> at 
> org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:178)
> at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:205)
> at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:747)
> at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:743)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:743)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.initializeLogDir(ResourceLocalizationService.java:1324)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.initializeLogDirs(ResourceLocalizationService.java:1318)
> at 
> 

[jira] [Commented] (YARN-8842) Update QueueMetrics with custom resource values

2018-10-11 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646831#comment-16646831
 ] 

Szilard Nemeth commented on YARN-8842:
--

Based on our offline discussion with [~wilfreds], here are the changes included 
in patch009:

- Removed getMaxAllocationUtilization from QueueMetricsForCustomResources and 
its related testcases, and moved the code to YARN-8059 patch002
- Removed the unnecessary check of 
{{queueMetricsForCustomResources.isThereAnyAllocatedResource}} in 
{{QueueMetrics#getAllocatedResources}}, as {{Resource.newInstance}} already 
handles an empty map.
- Fixed all the checkstyle issues, except the "'xxx' hides a field. 
[HiddenField]" kind of issues. Those could only disappear if the parameter had 
a different name than the field, which does not make sense to me.

> Update QueueMetrics with custom resource values 
> 
>
> Key: YARN-8842
> URL: https://issues.apache.org/jira/browse/YARN-8842
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8842.001.patch, YARN-8842.002.patch, 
> YARN-8842.003.patch, YARN-8842.004.patch, YARN-8842.005.patch, 
> YARN-8842.006.patch, YARN-8842.007.patch, YARN-8842.008.patch, 
> YARN-8842.009.patch
>
>
> This is the 2nd dependent jira of YARN-8059.
> As updating the metrics is an independent step from handling preemption, this 
> jira only deals with the queue metrics update of custom resources.
> The following metrics should be updated: 
> * allocated resources
> * available resources
> * pending resources
> * reserved resources
> * aggregate seconds preempted



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8842) Update QueueMetrics with custom resource values

2018-10-11 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8842:
-
Attachment: YARN-8842.009.patch

> Update QueueMetrics with custom resource values 
> 
>
> Key: YARN-8842
> URL: https://issues.apache.org/jira/browse/YARN-8842
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8842.001.patch, YARN-8842.002.patch, 
> YARN-8842.003.patch, YARN-8842.004.patch, YARN-8842.005.patch, 
> YARN-8842.006.patch, YARN-8842.007.patch, YARN-8842.008.patch, 
> YARN-8842.009.patch
>
>
> This is the 2nd dependent jira of YARN-8059.
> As updating the metrics is an independent step from handling preemption, this 
> jira only deals with the queue metrics update of custom resources.
> The following metrics should be updated: 
> * allocated resources
> * available resources
> * pending resources
> * reserved resources
> * aggregate seconds preempted



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8448) AM HTTPS Support

2018-10-11 Thread Robert Kanter (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated YARN-8448:

Attachment: YARN-8448.008.patch

> AM HTTPS Support
> 
>
> Key: YARN-8448
> URL: https://issues.apache.org/jira/browse/YARN-8448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: YARN-8448.001.patch, YARN-8448.002.patch, 
> YARN-8448.003.patch, YARN-8448.004.patch, YARN-8448.005.patch, 
> YARN-8448.006.patch, YARN-8448.007.patch, YARN-8448.008.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8448) AM HTTPS Support

2018-10-11 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646811#comment-16646811
 ] 

Robert Kanter commented on YARN-8448:
-

- The cc warning isn't a problem.  It's complaining that I'm using a variable 
in fprintf, but that variable is always hardcoded in the calling function, so 
it's effectively not a variable.
- {{TestIncreaseAllocationExpirer}} failure is unrelated
- I'm not sure why cetest failed (it doesn't give any details), and it passes 
on my machine

The 008 patch:
- Rebased on latest trunk
- Fixes the relevant checkstyle warnings

> AM HTTPS Support
> 
>
> Key: YARN-8448
> URL: https://issues.apache.org/jira/browse/YARN-8448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Attachments: YARN-8448.001.patch, YARN-8448.002.patch, 
> YARN-8448.003.patch, YARN-8448.004.patch, YARN-8448.005.patch, 
> YARN-8448.006.patch, YARN-8448.007.patch, YARN-8448.008.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8856) TestTimelineReaderWebServicesHBaseStorage tests failing with NoClassDefFoundError

2018-10-11 Thread Sushil Ks (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646798#comment-16646798
 ] 

Sushil Ks commented on YARN-8856:
-

Hi [~rohithsharma], as discussed in our weekly ATSv2 call, I will be mocking 
the metrics in order to skip loading them while running the HBase tests.

> TestTimelineReaderWebServicesHBaseStorage tests failing with 
> NoClassDefFoundError
> -
>
> Key: YARN-8856
> URL: https://issues.apache.org/jira/browse/YARN-8856
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Sushil Ks
>Priority: Major
>
> TestTimelineReaderWebServicesHBaseStorage has been failing in nightly builds 
> with NoClassDefFoundError in the tests.  Sample error and stacktrace to 
> follow.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-8856) TestTimelineReaderWebServicesHBaseStorage tests failing with NoClassDefFoundError

2018-10-11 Thread Sushil Ks (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushil Ks reassigned YARN-8856:
---

Assignee: Sushil Ks

> TestTimelineReaderWebServicesHBaseStorage tests failing with 
> NoClassDefFoundError
> -
>
> Key: YARN-8856
> URL: https://issues.apache.org/jira/browse/YARN-8856
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jason Lowe
>Assignee: Sushil Ks
>Priority: Major
>
> TestTimelineReaderWebServicesHBaseStorage has been failing in nightly builds 
> with NoClassDefFoundError in the tests.  Sample error and stacktrace to 
> follow.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8869) YARN client might not work correctly with RM Rest API for Kerberos authentication

2018-10-11 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646779#comment-16646779
 ] 

Eric Yang commented on YARN-8869:
-

This issue seems to happen in certain environments, but not all.  It may be 
challenging to write a unit test for this, but I will investigate.
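
For reference, a minimal sketch of the pitfall and the fix described in the 
linked article, using the Jersey 1.x client API (the URL and token are 
placeholders):
{code:java}
import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.WebResource;

public class HeaderSketch {
  public static void main(String[] args) {
    WebResource resource = Client.create()
        .resource("http://rm-host:8088/app/v1/services");

    // Broken pattern: header() returns a new Builder, so the header is lost
    // when the request is later issued from the bare WebResource.
    resource.header("Authorization", "Negotiate <token>");
    ClientResponse lost = resource.get(ClientResponse.class);

    // Working pattern: keep the Builder and issue the request from it.
    ClientResponse ok = resource.getRequestBuilder()
        .header("Authorization", "Negotiate <token>")
        .get(ClientResponse.class);
  }
}
{code}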

> YARN client might not work correctly with RM Rest API for Kerberos 
> authentication
> -
>
> Key: YARN-8869
> URL: https://issues.apache.org/jira/browse/YARN-8869
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 3.1.0, 3.2.0, 3.1.1
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Attachments: YARN-8869.001.patch
>
>
> ApiServiceClient uses WebResource instead of Builder to pass the Kerberos 
> authorization header.  This may not work sometimes, because 
> WebResource.header() can bind the header to a new Builder instance under 
> some conditions.  This article explains the details: 
> https://juristr.com/blog/2015/05/jersey-webresource-ignores-headers/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646728#comment-16646728
 ] 

Hadoop QA commented on YARN-8870:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m 16s{color} | {color:orange} The patch generated 114 new + 106 unchanged - 0 
fixed = 220 total (was 106) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 46 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
27s{color} | {color:green} hadoop-yarn-submarine in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
30s{color} | {color:red} The patch generated 9 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8870 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943460/YARN-8870.001.patch |
| Optional Tests |  dupname  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux bf623eb6182c 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ee816f1 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| shelldocs | 
https://builds.apache.org/job/PreCommit-YARN-Build/22156/artifact/out/diff-patch-shelldocs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/22156/artifact/out/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/22156/artifact/out/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22156/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/22156/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 306 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine 
U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22156/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: 

[jira] [Commented] (YARN-8710) Service AM should set a finite limit on NM container max retries

2018-10-11 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646711#comment-16646711
 ] 

Billie Rinaldi commented on YARN-8710:
--

+1 for patch 2. Thanks, [~suma.shivaprasad]!

> Service AM should set a finite limit on NM container max retries 
> -
>
> Key: YARN-8710
> URL: https://issues.apache.org/jira/browse/YARN-8710
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-native-services
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: YARN-8710.1.patch, YARN-8710.2.patch
>
>
> Container retries are currently set to a default of -1 in 
> AbstractProviderService.buildContainerRetry. If this is not overridden via 
> the service spec with a finite value for 
> yarn.service.container-failure.retry.max, this causes infinite NM retries 
> for the container under the ALWAYS/ON_FAILURE restart policies. Ideally it 
> should try a finite number of times on the same NM, and subsequently the 
> Service AM can retry on another node.
> We can set this to a default value of 3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8860) Federation client intercepter class contains unwanted character

2018-10-11 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-8860:

Attachment: YARN-8860.001.patch

> Federation client intercepter class contains unwanted character
> ---
>
> Key: YARN-8860
> URL: https://issues.apache.org/jira/browse/YARN-8860
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.1
>Reporter: Rakesh Shah
>Assignee: Abhishek Modi
>Priority: Minor
> Attachments: YARN-8860.001.patch
>
>
> {{*The FederationClientIntercepter class contains some unwanted characters 
> inside the summary part of the methods.*}}
>  
>  
> {noformat}
> - * The Client submits an application to the Router. 鈥?The Router selects one
> - * SubCluster to forward the request. 鈥?The Router inserts a tuple into
> - * StateStore with the selected SubCluster (e.g. SC1) and the appId. 鈥?The
> - * State Store replies with the selected SubCluster (e.g. SC1). 鈥?The Router
> + * The Client submits an application to the Router. 閳ワ拷 The Router selects 
> one
> + * SubCluster to forward the request. 閳ワ拷 The Router inserts a tuple into
> + * StateStore with the selected SubCluster (e.g. SC1) and the appId. 閳ワ拷 The
> + * State Store replies with the selected SubCluster (e.g. SC1). 閳ワ拷 The 
> Router{noformat}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8777) Container Executor C binary change to execute interactive docker command

2018-10-11 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646670#comment-16646670
 ] 

Billie Rinaldi commented on YARN-8777:
--

+1 for patch 8. Thanks for the patch [~eyang] and for the reviews, [~ebadger] 
and [~Zian Chen]!

> Container Executor C binary change to execute interactive docker command
> 
>
> Key: YARN-8777
> URL: https://issues.apache.org/jira/browse/YARN-8777
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zian Chen
>Assignee: Eric Yang
>Priority: Major
>  Labels: Docker
> Attachments: YARN-8777.001.patch, YARN-8777.002.patch, 
> YARN-8777.003.patch, YARN-8777.004.patch, YARN-8777.005.patch, 
> YARN-8777.006.patch, YARN-8777.007.patch, YARN-8777.008.patch
>
>
> Since Container Executor provides Container execution using the native 
> container-executor binary, we also need to make changes to accept the new 
> “dockerExec” method, which invokes the corresponding native function to 
> execute the docker exec command against the running container.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8861) executorLock is misleading in ContainerLaunch

2018-10-11 Thread Jason Lowe (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646664#comment-16646664
 ] 

Jason Lowe commented on YARN-8861:
--

Thanks for the patch!  +1 lgtm.  Committing this.

> executorLock is misleading in ContainerLaunch
> -
>
> Key: YARN-8861
> URL: https://issues.apache.org/jira/browse/YARN-8861
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Trivial
>  Labels: docker
> Attachments: YARN-8861.001.patch
>
>
> The name of the variable {{executorLock}} is confusing. Rename it to 
> {{launchLock}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7274) Ability to disable elasticity at leaf queue level

2018-10-11 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646624#comment-16646624
 ] 

Eric Payne commented on YARN-7274:
--

This backports cleanly all the way back to 2.8. I will do that backport. Once 
I'm done, I will update the fix versions.

> Ability to disable elasticity at leaf queue level
> -
>
> Key: YARN-7274
> URL: https://issues.apache.org/jira/browse/YARN-7274
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Scott Brokaw
>Assignee: Zian Chen
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: YARN-7274.2.patch, YARN-7274.wip.1.patch
>
>
> The 
> [documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html]
>  defines yarn.scheduler.capacity..maximum-capacity as "Maximum 
> queue capacity in percentage (%) as a float. This limits the elasticity for 
> applications in the queue. Defaults to -1 which disables it."
> However, setting this value to -1 sets maximum capacity to 100% but I thought 
> (perhaps incorrectly) that the intention of the -1 setting is that it would 
> disable elasticity.  This is confirmed looking at the code:
> {code:java}
> public static final float MAXIMUM_CAPACITY_VALUE = 100;
> public static final float DEFAULT_MAXIMUM_CAPACITY_VALUE = -1.0f;
> ..
> maxCapacity = (maxCapacity == DEFAULT_MAXIMUM_CAPACITY_VALUE) ? 
> MAXIMUM_CAPACITY_VALUE : maxCapacity;
> {code}
> The sum of yarn.scheduler.capacity..capacity for all queues, at 
> each level, must be equal to 100 but for 
> yarn.scheduler.capacity..maximum-capacity this value is actually 
> a percentage of the entire cluster not just the parent queue.  Yet it can not 
> be set lower then the leaf queue's capacity setting. This seems to make it 
> impossible to disable elasticity at a leaf queue level.
> This improvement is proposing that YARN have the ability to have elasticity 
> disabled at a leaf queue level even if a parent queue permits elasticity by 
> having a yarn.scheduler.capacity..maximum-capacity greater than 
> its yarn.scheduler.capacity..capacity



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8865) RMStateStore contains large number of expired RMDelegationToken

2018-10-11 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646622#comment-16646622
 ] 

Wilfred Spiegelenburg commented on YARN-8865:
-

I am not sure what happened in the environment, or even whether the two 
cleanup intervals were set differently (the key and the token have their own 
intervals). I just have the ZooKeeper DB to work with, and no logs from that 
time frame.

The ADTSM method {{addPersistedDelegationToken}} has a safeguard already: the 
secret manager cannot be running at the time we restore. That removes a lot of 
the problem. On the other side (specifically for the NN), HDFS uses its own 
version of {{addPersistedDelegationToken}}: it has its own implementation in 
DelegationTokenSecretManager (defined in 
org.apache.hadoop.hdfs.security.token.delegation). The HDFS side should thus 
not be affected by the change.
The other three users are the YARN RM, the YARN ATS and the MR JHS. Based on 
what I can see, none of them have an issue.

If the change is still considered too risky, I think the option to still add 
the tokens with a null password is the best solution.
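
For illustration, a sketch of the restore-time guard being discussed, assuming 
it lands in AbstractDelegationTokenSecretManager (this is a fragment of that 
class, not the final patch; Time is org.apache.hadoop.util.Time):
{code:java}
// Sketch only: skip already-expired tokens while restoring state.
public synchronized void addPersistedDelegationToken(
    TokenIdent identifier, long renewDate) throws IOException {
  if (running) {
    // Existing safeguard: tokens can only be restored before startup.
    throw new IOException(
        "Can't add persisted delegation token to a running SecretManager.");
  }
  if (renewDate < Time.now()) {
    // Expired: it can never be renewed or used, so do not restore it.
    LOG.info("Skipping expired delegation token " + identifier);
    return;
  }
  // ... existing restore logic (recover the key, register the token) ...
}
{code}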


> RMStateStore contains large number of expired RMDelegationToken
> ---
>
> Key: YARN-8865
> URL: https://issues.apache.org/jira/browse/YARN-8865
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8865.001.patch, YARN-8865.002.patch
>
>
> When the RM state store is restored expired delegation tokens are restored 
> and added to the system. These expired tokens do not get cleaned up or 
> removed. The exact reason why the tokens are still in the store is not clear. 
> We have seen as many as 250,000 tokens in the store some of which were 2 
> years old.
> This has two side effects:
> * for the zookeeper store this leads to a jute buffer exhaustion issue and 
> prevents the RM from becoming active.
> * restore takes longer than needed and heap usage is higher than it should be
> We should not restore already expired tokens since they cannot be renewed or 
> used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Xun Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646296#comment-16646296
 ] 

Xun Liu edited comment on YARN-8870 at 10/11/18 3:22 PM:
-

[~sunilg], OK. I added the design document: 
https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing
 


was (Author: liuxun323):
[~sunilg],OK. I added a manual to the attachment. [^InstallationScriptEN.md] 
(English version of the document) and [^InstallationScriptCN.md] (Chinese 
version of the document)

> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: Xun Liu
>Priority: Major
> Attachments: YARN-8870.001.patch
>
>
> In order to reduce the difficulty of deploying the Hadoop {Submarine} runtime 
> environment (DNS, Docker, GPU, network, graphics card, operating system kernel 
> modifications, and other components), I developed this installation script. It 
> provides one-click installation and can also be used to install, uninstall, 
> start, and stop individual components step by step.
>  
> Design document: 
> [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Xun Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xun Liu updated YARN-8870:
--
Description: 
In order to reduce the difficulty of deploying the Hadoop {Submarine} runtime 
environment (DNS, Docker, GPU, network, graphics card, operating system kernel 
modifications, and other components), I developed this installation script. It 
provides one-click installation and can also be used to install, uninstall, 
start, and stop individual components step by step.

Design document: 
[https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing]

  was:In order to reduce the deployment difficulty of Hadoop {Submarine} DNS, 
Docker, GPU, Network, graphics card, operating system kernel modification and 
other components, I specially developed this installation script to deploy 
Hadoop {Submarine} runtime environment, providing one-click installation 
Scripts, which can also be used to install, uninstall, start, and stop 
individual components step by step.


> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: Xun Liu
>Priority: Major
> Attachments: YARN-8870.001.patch
>
>
> In order to reduce the difficulty of deploying the Hadoop {Submarine} runtime 
> environment (DNS, Docker, GPU, network, graphics card, operating system kernel 
> modifications, and other components), I developed this installation script. It 
> provides one-click installation and can also be used to install, uninstall, 
> start, and stop individual components step by step.
>  
> Design document: 
> [https://docs.google.com/document/d/1muCTGFuUXUvM4JaDYjKqX5liQEg-AsNgkxfLMIFxYHU/edit?usp=sharing]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Xun Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xun Liu updated YARN-8870:
--
Attachment: (was: YARN-8870.001.patch)

> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: Xun Liu
>Priority: Major
> Attachments: YARN-8870.001.patch
>
>
> In order to reduce the difficulty of deploying the Hadoop {Submarine} runtime 
> environment (DNS, Docker, GPU, network, graphics card, operating system kernel 
> modifications, and other components), I developed this installation script. It 
> provides one-click installation and can also be used to install, uninstall, 
> start, and stop individual components step by step.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Xun Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xun Liu updated YARN-8870:
--
Attachment: YARN-8870.001.patch

> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: Xun Liu
>Priority: Major
> Attachments: YARN-8870.001.patch
>
>
> In order to reduce the difficulty of deploying the Hadoop {Submarine} runtime 
> environment (DNS, Docker, GPU, network, graphics card, operating system kernel 
> modifications, and other components), I developed this installation script. It 
> provides one-click installation and can also be used to install, uninstall, 
> start, and stop individual components step by step.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Xun Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xun Liu updated YARN-8870:
--
Attachment: (was: YARN-8870.001.patch)

> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: Xun Liu
>Priority: Major
>
> In order to reduce the difficulty of deploying the Hadoop {Submarine} runtime 
> environment (DNS, Docker, GPU, network, graphics card, operating system kernel 
> modifications, and other components), I developed this installation script. It 
> provides one-click installation and can also be used to install, uninstall, 
> start, and stop individual components step by step.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Xun Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xun Liu updated YARN-8870:
--
Attachment: (was: InstallationScriptEN.md)

> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: Xun Liu
>Priority: Major
> Attachments: YARN-8870.001.patch
>
>
> In order to reduce the difficulty of deploying the Hadoop {Submarine} runtime 
> environment (DNS, Docker, GPU, network, graphics card, operating system kernel 
> modifications, and other components), I developed this installation script. It 
> provides one-click installation and can also be used to install, uninstall, 
> start, and stop individual components step by step.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Xun Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xun Liu updated YARN-8870:
--
Attachment: (was: InstallationScriptCN.md)

> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: Xun Liu
>Priority: Major
> Attachments: YARN-8870.001.patch
>
>
> In order to reduce the difficulty of deploying the Hadoop {Submarine} runtime 
> environment (DNS, Docker, GPU, network, graphics card, operating system kernel 
> modifications, and other components), I developed this installation script. It 
> provides one-click installation and can also be used to install, uninstall, 
> start, and stop individual components step by step.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8865) RMStateStore contains large number of expired RMDelegationToken

2018-10-11 Thread Daryn Sharp (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646515#comment-16646515
 ] 

Daryn Sharp commented on YARN-8865:
---

Good job.  That explains why the secret manager doesn't remove them.  What's 
interesting is that secret keys are supposed to outlive their tokens.  Were 
secret keys manually deleted?  Regardless, the secret manager should be able 
to recover its state.

The patch is a high-risk change for a common class.  Not all secret managers 
are equipped to handle mutation during loading.  Case in point: the NN 
generates an edit to remove tokens, but edits cannot be generated while 
replaying edits (restoring state).  Fundamentally, an HA standby cannot modify 
state.  Similar issues probably exist for other secret managers.

Perhaps the lowest-risk change is to add the tokens with an invalid key 
anyway: set the password to null.  Authentication will fail, and that should 
allow the expiration thread to correctly remove the tokens.

Alternatively, modify the RMDTSM to handle removal while restoring state.
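
A minimal sketch of that null-password fallback (types and names are 
illustrative assumptions, not the real ADTSM classes):

{code:java}
// Illustrative sketch of "restore with a null password"; not the actual code.
import java.util.HashMap;
import java.util.Map;

class NullPasswordRestoreSketch {
  static final class TokenInfo {
    final long renewDate;
    final byte[] password; // null => authentication always fails
    TokenInfo(long renewDate, byte[] password) {
      this.renewDate = renewDate;
      this.password = password;
    }
  }

  private final Map<String, TokenInfo> currentTokens = new HashMap<>();

  // Restore a token whose master key is gone with a null password, so that
  // authentication fails but the expiration thread can still remove it.
  void restoreWithoutKey(String tokenId, long renewDate) {
    currentTokens.put(tokenId, new TokenInfo(renewDate, null));
  }
}
{code}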

> RMStateStore contains large number of expired RMDelegationToken
> ---
>
> Key: YARN-8865
> URL: https://issues.apache.org/jira/browse/YARN-8865
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8865.001.patch, YARN-8865.002.patch
>
>
> When the RM state store is restored expired delegation tokens are restored 
> and added to the system. These expired tokens do not get cleaned up or 
> removed. The exact reason why the tokens are still in the store is not clear. 
> We have seen as many as 250,000 tokens in the store some of which were 2 
> years old.
> This has two side effects:
> * for the zookeeper store this leads to a jute buffer exhaustion issue and 
> prevents the RM from becoming active.
> * restore takes longer than needed and heap usage is higher than it should be
> We should not restore already expired tokens since they cannot be renewed or 
> used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7225) Add queue and partition info to RM audit log

2018-10-11 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646487#comment-16646487
 ] 

Eric Payne commented on YARN-7225:
--

The ASF license warning is not related to this patch. It is complaining about  
{{hadoop-yarn-server-applicationhistoryservice/dependency-reduced-pom.xml}}

The unit tests succeed in my local environment.

> Add queue and partition info to RM audit log
> 
>
> Key: YARN-7225
> URL: https://issues.apache.org/jira/browse/YARN-7225
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.9.1, 2.8.4, 3.0.2, 3.1.1
>Reporter: Jonathan Hung
>Assignee: Eric Payne
>Priority: Major
> Attachments: YARN-7225.001.patch, YARN-7225.002.patch, 
> YARN-7225.003.patch, YARN-7225.004.patch, YARN-7225.branch-2.8.001.patch
>
>
> Right now RM audit log has fields such as user, ip, resource, etc. Having 
> queue and partition  is useful for resource tracking.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3854) Add localization support for docker images

2018-10-11 Thread Eric Badger (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-3854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646474#comment-16646474
 ] 

Eric Badger commented on YARN-3854:
---

I like that proposal. We'll need to make sure there aren't any weird race 
conditions with containers that are stopped in the YARN sense but not the 
Docker sense. Other than that, I think this gives us a better idea of the 
state of the node, and it makes me worry a lot less about force-removing 
images.
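
As a rough illustration of the kind of safety check being discussed (a sketch 
that shells out to the Docker CLI; not part of any actual patch here), one 
could verify that no running container still uses an image before 
force-removing it:

{code:java}
// Rough sketch only; the real container-executor integration looks different.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

class DockerImageRemovalSketch {
  // Returns true if `docker ps` reports no running container using the image.
  static boolean imageUnused(String image)
      throws IOException, InterruptedException {
    Process p = new ProcessBuilder(
        "docker", "ps", "-q", "--filter", "ancestor=" + image).start();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream()))) {
      boolean noContainers = (r.readLine() == null);
      p.waitFor();
      return noContainers;
    }
  }

  static void removeIfUnused(String image)
      throws IOException, InterruptedException {
    if (imageUnused(image)) {
      new ProcessBuilder("docker", "rmi", image).start().waitFor();
    }
  }
}
{code}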

> Add localization support for docker images
> --
>
> Key: YARN-3854
> URL: https://issues.apache.org/jira/browse/YARN-3854
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
> Attachments: Localization Support For Docker Images.pdf, 
> YARN-3854-branch-2.8.001.patch, 
> YARN-3854_Localization_support_for_Docker_image_v1.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v2.pdf, 
> YARN-3854_Localization_support_for_Docker_image_v3.pdf
>
>
> We need the ability to localize docker images when those images aren't 
> already available locally. There are various approaches that could be used 
> here with different trade-offs/issues : image archives on HDFS + docker load 
> ,  docker pull during the localization phase or (automatic) docker pull 
> during the run/launch phase. 
> We also need the ability to clean-up old/stale, unused images. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8775) TestDiskFailures.testLocalDirsFailures sometimes can fail on concurrent File modifications

2018-10-11 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-8775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646449#comment-16646449
 ] 

Antal Bálint Steinbach commented on YARN-8775:
--

Hi [~haibochen] ,

Thanks for the review.

The problem is that if I disable periodical health check this will be always 
true:
{code:java}
public boolean areDisksHealthy() {
 if (!isDiskHealthCheckerEnabled) {
 return true;
 }
...
}{code}
And we will lose this part of the test:
{code:java}
Assert.assertEquals("Node's health in terms of disks is wrong",
isHealthy, dirsHandler.areDisksHealthy());{code}
The reason for the retry is that it is possible that the folder check is 
happening during the prepareDirToFail, so it can fail in that case. Obviously, 
your idea fixes these kinds of concurrency issues, what makes this test so 
fragile.
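
For reference, the retry amounts to a bounded poll around the health check, 
roughly like this sketch (dirsHandler comes from the existing test; the 
timeout and poll interval are assumptions):

{code:java}
// Sketch of a bounded retry around the disk-health assertion; assumes the
// test class already imports org.junit.Assert and holds a dirsHandler field.
private void waitForDisksHealthState(boolean expectedHealthy)
    throws InterruptedException {
  long deadline = System.currentTimeMillis() + 10_000L; // assumed 10s budget
  while (System.currentTimeMillis() < deadline) {
    if (dirsHandler.areDisksHealthy() == expectedHealthy) {
      return; // state settled; safe to run the strict assertions now
    }
    Thread.sleep(500L); // assumed poll interval
  }
  Assert.fail("Node's health in terms of disks is wrong");
}
{code}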

> TestDiskFailures.testLocalDirsFailures sometimes can fail on concurrent File 
> modifications
> --
>
> Key: YARN-8775
> URL: https://issues.apache.org/jira/browse/YARN-8775
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, yarn
>Affects Versions: 3.0.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Major
> Attachments: YARN-8775.001.patch, YARN-8775.002.patch
>
>
> The test can fail sometimes when file operations were done during the check 
> done by the thread in _LocalDirsHandlerService._
> {code:java}
> java.lang.AssertionError: NodeManager could not identify disk failure.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:239)
>   at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:202)
>   at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testLocalDirsFailures(TestDiskFailures.java:99)
> Stderr
> 2018-09-13 08:21:49,822 INFO [main] server.TestDiskFailures 
> (TestDiskFailures.java:prepareDirToFail(277)) - Prepared 
> /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1
>  to fail.
> 2018-09-13 08:21:49,823 INFO [main] server.TestDiskFailures 
> (TestDiskFailures.java:prepareDirToFail(277)) - Prepared 
> /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3
>  to fail.
> 2018-09-13 08:21:49,823 WARN [DiskHealthMonitor-Timer] 
> nodemanager.DirectoryCollection (DirectoryCollection.java:checkDirs(283)) - 
> Directory 
> /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1
>  error, Not a directory: 
> /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_1,
>  removing from list of valid directories
> 2018-09-13 08:21:49,824 WARN [DiskHealthMonitor-Timer] 
> localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:initializeLogDir(1329)) - Could not 
> initialize log dir 
> /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3
> java.io.FileNotFoundException: Destination exists and is not a directory: 
> /tmp/dist-test-taskjUrf0_/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/org.apache.hadoop.yarn.server.TestDiskFailures/org.apache.hadoop.yarn.server.TestDiskFailures-logDir-nm-0_3
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:515)
> at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:496)
> at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:1081)
> at 
> org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:178)
> at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:205)
> at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:747)
> at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:743)
> at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
> at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:743)
> at 
> 

[jira] [Commented] (YARN-8854) [Hadoop YARN Common] Update jquery datatable version references

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646443#comment-16646443
 ] 

Hadoop QA commented on YARN-8854:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 17m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
34m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 23m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 42s{color} | {color:orange} root: The patch generated 1 new + 10 unchanged - 
0 fixed = 11 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 325 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 51s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}184m 21s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
48s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}363m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks 
|
|   | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|   | hadoop.hdfs.server.datanode.checker.TestDatasetVolumeCheckerTimeout |
|   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | 

[jira] [Commented] (YARN-8865) RMStateStore contains large number of expired RMDelegationToken

2018-10-11 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646374#comment-16646374
 ] 

Wilfred Spiegelenburg commented on YARN-8865:
-

The junit test failures are not related to the patch as far as I can see.
The ASF license warning comes from a leftover test file, which is not related 
either.

> RMStateStore contains large number of expired RMDelegationToken
> ---
>
> Key: YARN-8865
> URL: https://issues.apache.org/jira/browse/YARN-8865
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Attachments: YARN-8865.001.patch, YARN-8865.002.patch
>
>
> When the RM state store is restored expired delegation tokens are restored 
> and added to the system. These expired tokens do not get cleaned up or 
> removed. The exact reason why the tokens are still in the store is not clear. 
> We have seen as many as 250,000 tokens in the store some of which were 2 
> years old.
> This has two side effects:
> * for the zookeeper store this leads to a jute buffer exhaustion issue and 
> prevents the RM from becoming active.
> * restore takes longer than needed and heap usage is higher than it should be
> We should not restore already expired tokens since they cannot be renewed or 
> used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646367#comment-16646367
 ] 

Hadoop QA commented on YARN-5742:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 4 new + 
35 unchanged - 1 fixed = 39 total (was 36) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 46s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
9s{color} | {color:green} hadoop-yarn-server-applicationhistoryservice in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
11s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-5742 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943392/YARN-5742.04.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7e7b4510bd28 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-8864) NM incorrectly logs container user as the user who sent a stop container request in its audit log

2018-10-11 Thread Wilfred Spiegelenburg (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646360#comment-16646360
 ] 

Wilfred Spiegelenburg commented on YARN-8864:
-

[~haibochen] If I understand you correctly, we should be logging the remote 
user, not the container user, in the audit message.
I looked at the other messages that we log in the {{ContainerManagerImpl}}. We 
use an "unknown user" when we authorise; I think in both cases we should use 
the remote user from the constructed remote UGI.

I'll add a patch when I have finished the test runs.
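
A minimal sketch of that change (illustrative only; the eventual patch may 
resolve the remote UGI differently):

{code:java}
protected void stopContainerInternal(ContainerId containerID)
    throws YarnException, IOException {
    ...
    // Sketch: log the authenticated remote user instead of the container's
    // user; the getRemoteUgi() resolution here is an assumption.
    UserGroupInformation remoteUgi = getRemoteUgi();
    NMAuditLogger.logSuccess(remoteUgi.getShortUserName(),
        AuditConstants.STOP_CONTAINER, "ContainerManageImpl",
        containerID.getApplicationAttemptId().getApplicationId(), containerID);
}{code}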

> NM incorrectly logs container user as the user who sent a stop container 
> request in its audit log
> -
>
> Key: YARN-8864
> URL: https://issues.apache.org/jira/browse/YARN-8864
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.2.0
>Reporter: Haibo Chen
>Assignee: Wilfred Spiegelenburg
>Priority: Major
>
> As in  ContainerManagerImpl.java
> {code:java}
> protected void stopContainerInternal(ContainerId containerID)
>   throws YarnException, IOException { 
>     ...   
> NMAuditLogger.logSuccess(container.getUser(), 
> AuditConstants.STOP_CONTAINER,
>"ContainerManageImpl", 
> containerID.getApplicationAttemptId().getApplicationId(), containerID);
> }{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646331#comment-16646331
 ] 

Hadoop QA commented on YARN-8870:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m 17s{color} | {color:orange} The patch generated 114 new + 106 unchanged - 0 
fixed = 220 total (was 106) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 46 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-submarine in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
28s{color} | {color:red} The patch generated 9 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8870 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12943400/YARN-8870.001.patch |
| Optional Tests |  dupname  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 86c481680c61 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 
07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 8a37983 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| shelldocs | 
https://builds.apache.org/job/PreCommit-YARN-Build/22153/artifact/out/diff-patch-shelldocs.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/22153/artifact/out/whitespace-eol.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/22153/artifact/out/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/22153/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/22153/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 412 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine 
U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-submarine |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22153/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: 

[jira] [Commented] (YARN-8854) [Hadoop YARN Common] Update jquery datatable version references

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646322#comment-16646322
 ] 

Hadoop QA commented on YARN-8854:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
29m 48s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 35s{color} | {color:orange} root: The patch generated 1 new + 10 unchanged - 
0 fixed = 11 total (was 10) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 325 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}146m 25s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
50s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}304m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestPread |
|   | hadoop.hdfs.TestLeaseRecovery2 |
|   | 
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | YARN-8854 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646303#comment-16646303
 ] 

Hadoop QA commented on YARN-8870:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} YARN-8870 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-8870 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/22155/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: Xun Liu
>Priority: Major
> Attachments: InstallationScriptCN.md, InstallationScriptEN.md, 
> YARN-8870.001.patch
>
>
> In order to reduce the difficulty of deploying the Hadoop {Submarine} runtime 
> environment (DNS, Docker, GPU, network, graphics card, operating system kernel 
> modifications, and other components), I developed this installation script. It 
> provides one-click installation and can also be used to install, uninstall, 
> start, and stop individual components step by step.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Xun Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646296#comment-16646296
 ] 

Xun Liu edited comment on YARN-8870 at 10/11/18 11:27 AM:
--

[~sunilg], OK. I added a manual to the attachments: [^InstallationScriptEN.md] 
(the English version of the document) and [^InstallationScriptCN.md] (the 
Chinese version of the document).


was (Author: liuxun323):
[~sunilg],OK. I added a manual to the attachment. [^InstallationScriptEN.md]

> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: Xun Liu
>Priority: Major
> Attachments: InstallationScriptCN.md, InstallationScriptEN.md, 
> YARN-8870.001.patch
>
>
> In order to reduce the difficulty of deploying the Hadoop {Submarine} runtime 
> environment (DNS, Docker, GPU, network, graphics card, operating system kernel 
> modifications, and other components), I developed this installation script. It 
> provides one-click installation and can also be used to install, uninstall, 
> start, and stop individual components step by step.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8870) Add submarine installation scripts

2018-10-11 Thread Xun Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xun Liu updated YARN-8870:
--
Attachment: InstallationScriptCN.md

> Add submarine installation scripts
> --
>
> Key: YARN-8870
> URL: https://issues.apache.org/jira/browse/YARN-8870
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xun Liu
>Assignee: Xun Liu
>Priority: Major
> Attachments: InstallationScriptCN.md, InstallationScriptEN.md, 
> YARN-8870.001.patch
>
>
> In order to reduce the difficulty of deploying the Hadoop {Submarine} runtime 
> environment (DNS, Docker, GPU, network, graphics card, operating system kernel 
> modifications, and other components), I developed this installation script. It 
> provides one-click installation and can also be used to install, uninstall, 
> start, and stop individual components step by step.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


