[jira] [Commented] (YARN-8594) [UI2] Show the current logged in user in UI2

2018-08-01 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566386#comment-16566386
 ] 

Sunil Govindan commented on YARN-8594:
--

One more minor nit:

# There is some commented-out code; please remove it.

> [UI2] Show the current logged in user in UI2
> 
>
> Key: YARN-8594
> URL: https://issues.apache.org/jira/browse/YARN-8594
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-8594.001.patch, YARN-8594.002.patch, 
> YARN-8594.003.patch
>
>







[jira] [Commented] (YARN-8594) [UI2] Show the current logged in user in UI2

2018-08-01 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566385#comment-16566385
 ] 

Sunil Govindan commented on YARN-8594:
--

+1. Looks good. Will commit shortly

> [UI2] Show the current logged in user in UI2
> 
>
> Key: YARN-8594
> URL: https://issues.apache.org/jira/browse/YARN-8594
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-8594.001.patch, YARN-8594.002.patch, 
> YARN-8594.003.patch
>
>







[jira] [Issue Comment Deleted] (YARN-8594) [UI2] Show the current logged in user in UI2

2018-08-01 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8594:
-
Comment: was deleted

(was: +1. Looks good. Will commit shortly)

> [UI2] Show the current logged in user in UI2
> 
>
> Key: YARN-8594
> URL: https://issues.apache.org/jira/browse/YARN-8594
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-8594.001.patch, YARN-8594.002.patch, 
> YARN-8594.003.patch
>
>







[jira] [Commented] (YARN-8612) Fix NM Collector Service Port issue in YarnConfiguration

2018-08-01 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566345#comment-16566345
 ] 

genericqa commented on YARN-8612:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  0s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8612 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934008/YARN-8612.v1.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 66a914ba6bb9 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 735b492 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21481/testReport/ |
| Max. process+thread count | 440 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21481/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix NM Collector Service 

[jira] [Commented] (YARN-8612) Fix NM Collector Service Port issue in YarnConfiguration

2018-08-01 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566342#comment-16566342
 ] 

Rohith Sharma K S commented on YARN-8612:
-

pending jenkins

> Fix NM Collector Service Port issue in YarnConfiguration
> 
>
> Key: YARN-8612
> URL: https://issues.apache.org/jira/browse/YARN-8612
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Prabha Manepalli
>Assignee: Prabha Manepalli
>Priority: Major
> Attachments: YARN-8612.v1.patch
>
>
> There is a typo in the existing YarnConfiguration which uses the 
> DEFAULT_NM_LOCALIZER_PORT as the default for NM Collector Service port. This 
> Jira aims to fix the typo.
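
For illustration, a minimal sketch of the constant mix-up described above; the constant names and default values below are assumptions for readability, not the actual patch:

{code}
// Hypothetical, simplified excerpt; names and values are assumptions.
class CollectorServicePortSketch {
  static final int DEFAULT_NM_LOCALIZER_PORT = 8040;
  static final int DEFAULT_NM_COLLECTOR_SERVICE_PORT = 8048;

  // Before (the typo): collector service address built from the localizer port.
  static final String DEFAULT_NM_COLLECTOR_SERVICE_ADDRESS_BEFORE =
      "0.0.0.0:" + DEFAULT_NM_LOCALIZER_PORT;

  // After: built from the collector service's own default port.
  static final String DEFAULT_NM_COLLECTOR_SERVICE_ADDRESS_AFTER =
      "0.0.0.0:" + DEFAULT_NM_COLLECTOR_SERVICE_PORT;
}
{code}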






[jira] [Commented] (YARN-8612) Fix NM Collector Service Port issue in YarnConfiguration

2018-08-01 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566341#comment-16566341
 ] 

Rohith Sharma K S commented on YARN-8612:
-

Good catch! +1 lgtm

> Fix NM Collector Service Port issue in YarnConfiguration
> 
>
> Key: YARN-8612
> URL: https://issues.apache.org/jira/browse/YARN-8612
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Prabha Manepalli
>Assignee: Prabha Manepalli
>Priority: Major
> Attachments: YARN-8612.v1.patch
>
>
> There is a typo in the existing YarnConfiguration which uses the 
> DEFAULT_NM_LOCALIZER_PORT as the default for NM Collector Service port. This 
> Jira aims to fix the typo.






[jira] [Commented] (YARN-8549) Adding a NoOp timeline writer and reader plugin classes for ATSv2

2018-08-01 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566332#comment-16566332
 ] 

genericqa commented on YARN-8549:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
33s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-timelineservice in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-server-timelineservice in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 18s{color} 
| {color:red} hadoop-yarn-server-timelineservice in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-timelineservice in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  4m  
6s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-timelineservice in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 19s{color} 
| {color:red} hadoop-yarn-server-timelineservice in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8549 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12934007/YARN-8549.v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7c788049710d 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 735b492 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/21480/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/21480/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
 |
| javac | 

[jira] [Updated] (YARN-8612) Fix NM Collector Service Port issue in YarnConfiguration

2018-08-01 Thread Prabha Manepalli (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabha Manepalli updated YARN-8612:
---
Description: There is a typo in the existing YarnConfiguration which uses 
the DEFAULT_NM_LOCALIZER_PORT as the default for NM Collector Service port. 
This Jira aims to fix the typo.

> Fix NM Collector Service Port issue in YarnConfiguration
> 
>
> Key: YARN-8612
> URL: https://issues.apache.org/jira/browse/YARN-8612
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: ATSv2
>Reporter: Prabha Manepalli
>Assignee: Prabha Manepalli
>Priority: Major
> Attachments: YARN-8612.v1.patch
>
>
> There is a typo in the existing YarnConfiguration which uses the 
> DEFAULT_NM_LOCALIZER_PORT as the default for NM Collector Service port. This 
> Jira aims to fix the typo.






[jira] [Created] (YARN-8612) Fix NM Collector Service Port issue in YarnConfiguration

2018-08-01 Thread Prabha Manepalli (JIRA)
Prabha Manepalli created YARN-8612:
--

 Summary: Fix NM Collector Service Port issue in YarnConfiguration
 Key: YARN-8612
 URL: https://issues.apache.org/jira/browse/YARN-8612
 Project: Hadoop YARN
  Issue Type: Bug
  Components: ATSv2
Reporter: Prabha Manepalli
Assignee: Prabha Manepalli









[jira] [Updated] (YARN-8549) Adding a NoOp timeline writer and reader plugin classes for ATSv2

2018-08-01 Thread Prabha Manepalli (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabha Manepalli updated YARN-8549:
---
Attachment: YARN-8549.v2.patch

> Adding a NoOp timeline writer and reader plugin classes for ATSv2
> -
>
> Key: YARN-8549
> URL: https://issues.apache.org/jira/browse/YARN-8549
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineclient, timelineserver
>Reporter: Prabha Manepalli
>Assignee: Prabha Manepalli
>Priority: Minor
> Attachments: YARN-8549.v1.patch, YARN-8549.v2.patch
>
>
> Stub implementation for TimeLineReader and TimeLineWriter classes. 
> These are useful for functional testing of writer and reader path for ATSv2
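
The idea is an implementation whose methods simply do nothing, so the writer and reader paths can be exercised without a real backend.  A generic sketch of the pattern (the interface below is hypothetical and only illustrates the no-op idea, it is not the actual ATSv2 plugin API):

{code}
// Hypothetical writer interface used only to illustrate the no-op pattern.
interface SimpleTimelineWriter {
  void write(String entityId, String entityJson);
  void flush();
}

// No-op plugin: accepts writes and discards them.
class NoOpTimelineWriter implements SimpleTimelineWriter {
  @Override public void write(String entityId, String entityJson) { /* drop */ }
  @Override public void flush() { /* nothing buffered */ }
}
{code}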






[jira] [Updated] (YARN-8549) Adding a NoOp timeline writer and reader plugin classes for ATSv2

2018-08-01 Thread Prabha Manepalli (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabha Manepalli updated YARN-8549:
---
Attachment: (was: TimeLineReaderAndWriterStubs.patch)

> Adding a NoOp timeline writer and reader plugin classes for ATSv2
> -
>
> Key: YARN-8549
> URL: https://issues.apache.org/jira/browse/YARN-8549
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineclient, timelineserver
>Reporter: Prabha Manepalli
>Assignee: Prabha Manepalli
>Priority: Minor
> Attachments: YARN-8549.v1.patch
>
>
> Stub implementation for TimeLineReader and TimeLineWriter classes. 
> These are useful for functional testing of writer and reader path for ATSv2






[jira] [Updated] (YARN-8549) Adding a NoOp timeline writer and reader plugin classes for ATSv2

2018-08-01 Thread Prabha Manepalli (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabha Manepalli updated YARN-8549:
---
Affects Version/s: (was: YARN-5335_branch2)
   (was: YARN-5355)
   (was: YARN-2928)

> Adding a NoOp timeline writer and reader plugin classes for ATSv2
> -
>
> Key: YARN-8549
> URL: https://issues.apache.org/jira/browse/YARN-8549
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineclient, timelineserver
>Reporter: Prabha Manepalli
>Assignee: Prabha Manepalli
>Priority: Minor
> Attachments: TimeLineReaderAndWriterStubs.patch, YARN-8549.v1.patch
>
>
> Stub implementation for TimeLineReader and TimeLineWriter classes. 
> These are useful for functional testing of writer and reader path for ATSv2






[jira] [Updated] (YARN-8549) Adding a NoOp timeline writer and reader plugin classes for ATSv2

2018-08-01 Thread Prabha Manepalli (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabha Manepalli updated YARN-8549:
---
Fix Version/s: (was: YARN-5355_branch2)
   (was: YARN-5355)
   (was: YARN-2928)

> Adding a NoOp timeline writer and reader plugin classes for ATSv2
> -
>
> Key: YARN-8549
> URL: https://issues.apache.org/jira/browse/YARN-8549
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineclient, timelineserver
>Reporter: Prabha Manepalli
>Assignee: Prabha Manepalli
>Priority: Minor
> Attachments: TimeLineReaderAndWriterStubs.patch, YARN-8549.v1.patch
>
>
> Stub implementation for TimeLineReader and TimeLineWriter classes. 
> These are useful for functional testing of writer and reader path for ATSv2






[jira] [Updated] (YARN-8549) Adding a NoOp timeline writer and reader plugin classes for ATSv2

2018-08-01 Thread Prabha Manepalli (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabha Manepalli updated YARN-8549:
---
Target Version/s:   (was: YARN-2928, YARN-5355, YARN-5355_branch2)

> Adding a NoOp timeline writer and reader plugin classes for ATSv2
> -
>
> Key: YARN-8549
> URL: https://issues.apache.org/jira/browse/YARN-8549
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2, timelineclient, timelineserver
>Reporter: Prabha Manepalli
>Assignee: Prabha Manepalli
>Priority: Minor
> Attachments: TimeLineReaderAndWriterStubs.patch, YARN-8549.v1.patch
>
>
> Stub implementation for TimeLineReader and TimeLineWriter classes. 
> These are useful for functional testing of writer and reader path for ATSv2






[jira] [Commented] (YARN-8593) Add RM web service endpoint to get user information

2018-08-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566302#comment-16566302
 ] 

Hudson commented on YARN-8593:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14691 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14691/])
YARN-8593. Add RM web service endpoint to get user information. (sunilg: rev 
735b4925569541fb8e65dc0c668ccc2aa2ffb30b)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/PassThroughRESTRequestInterceptor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/RouterWebServices.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ClusterUserInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWSConsts.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServiceProtocol.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/DefaultRequestInterceptorREST.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/webapp/FederationInterceptorREST.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/MockRESTRequestInterceptor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServices.java
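
Assuming the new endpoint sits under the RM's /ws/v1/cluster path (the exact path segment is an assumption based on the ClusterUserInfo DAO added above), a quick manual check might look like:

{code}
# Illustrative only: rm-host:8088 and the /userinfo path are assumptions.
curl "http://rm-host:8088/ws/v1/cluster/userinfo"
{code}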


> Add RM web service endpoint to get user information
> ---
>
> Key: YARN-8593
> URL: https://issues.apache.org/jira/browse/YARN-8593
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8593.001.patch, YARN-8593.002.patch, 
> YARN-8593.003.patch
>
>







[jira] [Updated] (YARN-8593) Add RM web service endpoint to get user information

2018-08-01 Thread Sunil Govindan (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil Govindan updated YARN-8593:
-
Summary: Add RM web service endpoint to get user information  (was: Add new 
RM web service endpoint to get cluster user info)

> Add RM web service endpoint to get user information
> ---
>
> Key: YARN-8593
> URL: https://issues.apache.org/jira/browse/YARN-8593
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-8593.001.patch, YARN-8593.002.patch, 
> YARN-8593.003.patch
>
>







[jira] [Commented] (YARN-8559) Expose mutable-conf scheduler's configuration in RM /scheduler-conf endpoint

2018-08-01 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566258#comment-16566258
 ] 

Jonathan Hung commented on YARN-8559:
-

+1, LGTM. Thanks for the patch [~cheersyang]!

> Expose mutable-conf scheduler's configuration in RM /scheduler-conf endpoint
> 
>
> Key: YARN-8559
> URL: https://issues.apache.org/jira/browse/YARN-8559
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Anna Savarin
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8559.001.patch, YARN-8559.002.patch, 
> YARN-8559.003.patch, YARN-8559.004.patch
>
>
> All Hadoop services provide a set of common endpoints (/stacks, /logLevel, 
> /metrics, /jmx, /conf).  In the case of the Resource Manager, part of the 
> configuration comes from the scheduler being used.  Currently, these 
> configuration key/values are not exposed through the /conf endpoint, thereby 
> revealing an incomplete configuration picture. 
> Make an improvement and expose the scheduling configuration info through the 
> RM's /conf endpoint.
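
As a usage sketch (host and port are placeholders), once this is in, the scheduler's effective configuration should be retrievable with a plain GET on the endpoint named in the summary:

{code}
# Illustrative only: rm-host:8088 is a placeholder for the RM web address.
curl "http://rm-host:8088/ws/v1/cluster/scheduler-conf"
{code}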






[jira] [Commented] (YARN-8568) Replace the deprecated zk-address property in the HA config example in ResourceManagerHA.md

2018-08-01 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566218#comment-16566218
 ] 

Robert Kanter commented on YARN-8568:
-

{{hadoop.zk.address}} normally goes in core-site.xml.  Does it still work if 
specified in yarn-site.xml?  If not, we should add a note about that because 
as-is, it sounds like you should put it in yarn-site.xml with the other 
properties.
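
For reference, the non-deprecated property looks like this (the ZooKeeper quorum below is a placeholder; which file it should live in is exactly the question above):

{code}
<property>
  <name>hadoop.zk.address</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
{code}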

> Replace the deprecated zk-address property in the HA config example in 
> ResourceManagerHA.md
> ---
>
> Key: YARN-8568
> URL: https://issues.apache.org/jira/browse/YARN-8568
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.0.x
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Minor
> Attachments: YARN-8568.001.patch
>
>
> yarn.resourcemanager.zk-address is deprecated. Instead, use hadoop.zk.address.
> In the example, "yarn.resourcemanager.zk-address" is used, which is 
> deprecated. In the description, the property name is correct: 
> "hadoop.zk.address".






[jira] [Commented] (YARN-8610) Yarn Service Upgrade: Typo in Error message

2018-08-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566205#comment-16566205
 ] 

Hudson commented on YARN-8610:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14690 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14690/])
YARN-8610.  Fixed initiate upgrade error message. (eyang: rev 
23f394240e1568a38025e63e9dc0842e8c5235f7)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/client/ServiceClient.java


> Yarn Service Upgrade: Typo in Error message
> ---
>
> Key: YARN-8610
> URL: https://issues.apache.org/jira/browse/YARN-8610
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8610.001.patch
>
>
> Upgrade can only be initiated when the service state = STABLE. 
> However the error message says the opposite:
> {code}
> 2018-08-01 21:48:44,965 ERROR client.ApiServiceClient: s is at STARTED state, 
> upgrade can not be invoked when service is STABLE.
> {code}
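
Presumably the corrected message reads along these lines (the exact wording depends on the patch):

{code}
2018-08-01 21:48:44,965 ERROR client.ApiServiceClient: s is at STARTED state, upgrade can only be invoked when service is STABLE.
{code}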






[jira] [Updated] (YARN-8610) Yarn Service Upgrade: Typo in Error message

2018-08-01 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8610:

Affects Version/s: 3.1.1
   3.1.0

> Yarn Service Upgrade: Typo in Error message
> ---
>
> Key: YARN-8610
> URL: https://issues.apache.org/jira/browse/YARN-8610
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8610.001.patch
>
>
> Upgrade can only be initiated when the service state = STABLE. 
> However the error message says the opposite:
> {code}
> 2018-08-01 21:48:44,965 ERROR client.ApiServiceClient: s is at STARTED state, 
> upgrade can not be invoked when service is STABLE.
> {code}






[jira] [Commented] (YARN-8607) Incorrect annotation in ApplicationAttemptStateData#getResourceSecondsMap

2018-08-01 Thread Yeliang Cang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566190#comment-16566190
 ] 

Yeliang Cang commented on YARN-8607:


No test case needed!

> Incorrect annotation in ApplicationAttemptStateData#getResourceSecondsMap
> -
>
> Key: YARN-8607
> URL: https://issues.apache.org/jira/browse/YARN-8607
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Attachments: YARN-8607.001.patch, YARN-8607.002.patch
>
>
> In ApplicationAttemptStateData.java
> the annotation of getResourceSecondsMap is not correct:
> {code}
> /**
>  * Get the aggregated number of resources preempted that the application has
>  * allocated times the number of seconds the application has been running.
>  *
>  * @return map containing the resource name and aggregated preempted
>  * resource-seconds
>  */
> @Public
> @Unstable
> public abstract Map getResourceSecondsMap();
> {code}
> Should be
> {code}
> /**
>  * Get the aggregated number of resources that the application has
>  * allocated times the number of seconds the application has been running.
>  *
>  * @return map containing the resource name and aggregated preempted
>  * resource-seconds
>  */
> @Public
> @Unstable
> public abstract Map getResourceSecondsMap();
> {code}






[jira] [Comment Edited] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-08-01 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566144#comment-16566144
 ] 

Chandni Singh edited comment on YARN-8160 at 8/1/18 11:27 PM:
--

Thanks [~eyang]. This explanation is very helpful. Completely agree that we 
don't need to worry about logic to break docker image download into separate 
steps at this time. I will create a separate ticket for that and use this to 
just fix the bugs with \{{reInitializeContainer}} with docker container.


was (Author: csingh):
Thanks [~eyang]. This explanation is very helpful. Completely agree that we 
don't need to worry about logic to break docker image download into separate 
steps at this time. I will create a separate ticket for that and use this to 
just fix the bugs with \{{ reInitializeContainer}} with docker container.

> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637
> *Background*
> Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
> {{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
> container.
> NM performs the following steps during {{reInitializeContainer}}:
> - kills the existing process
> - cleans up the container
> - launches another container with the new {{ContainerLaunchContext}}
> NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
> upgrade the container.
> With {{reInitializeContainer}}, the following does *NOT* change
> - container ID. This is not created by NM. It is provided to it and here RM 
> is not creating another container allocation.
> - {{localizedResources}} this stays the same if the upgrade does *NOT* 
> require additional resources IIUC.
>  
> The following changes with {{reInitializeContainer}}
> - the working directory of the upgraded container changes. It is *NOT* a 
> relaunch. 
> *Changes required in the case of docker container*
> - {{reInitializeContainer}} seems to not be working with Docker containers. 
> Investigate and fix this.
> - [Future change] Add an additional api to NM to pull the images and modify 
> {{reInitializeContainer}} to trigger docker container launch without pulling 
> the image first which could be based on a flag.
> -- When the service upgrade is initialized, we can provide the user with 
> an option to just pull the images  on the NMs.
> -- When a component instance is upgrade, it calls the 
> {{reInitializeContainer}} with the flag pull-image set to false, since the NM 
> will have already pulled the images.






[jira] [Commented] (YARN-8298) Yarn Service Upgrade: Support fast component upgrades which accepts component spec

2018-08-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566147#comment-16566147
 ] 

Eric Yang commented on YARN-8298:
-

What the command might look like:

{code}
yarn app -upgrade abc /tmp/new.json
{code}

or

{code}
yarn app -upgrade abc -initiate /tmp/new.json
yarn app -upgrade abc
{code}

> Yarn Service Upgrade: Support fast component upgrades which accepts component 
> spec
> --
>
> Key: YARN-8298
> URL: https://issues.apache.org/jira/browse/YARN-8298
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> Currently service upgrade involves 2 steps
>  * initiate upgrade by providing new spec
>  * trigger upgrade of each instance/component
>  
> We need to add the ability to upgrade a component in one shot, accepting the 
> spec of the component. However, there are a couple of limitations when 
> upgrading this way:
>  # Aborting the upgrade will not be supported
>  # Upgrade finalization will be done automatically.
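
For reference, the existing two-step flow looks like this (service and component names are placeholders; the flags are the ones used elsewhere in this thread):

{code}
yarn app -upgrade my-service -initiate /tmp/new-spec.json
yarn app -upgrade my-service -instances comp-0,comp-1
{code}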






[jira] [Commented] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-08-01 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566144#comment-16566144
 ] 

Chandni Singh commented on YARN-8160:
-

Thanks [~eyang]. This explanation is very helpful. Completely agree that we 
don't need to worry about logic to break docker image download into separate 
steps at this time. I will create a separate ticket for that and use this to 
just fix the bugs with \{{ reInitializeContainer}} with docker container.

> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637
> *Background*
> Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
> {{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
> container.
> NM performs the following steps during {{reInitializeContainer}}:
> - kills the existing process
> - cleans up the container
> - launches another container with the new {{ContainerLaunchContext}}
> NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
> upgrade the container.
> With {{reInitializeContainer}}, the following does *NOT* change
> - container ID. This is not created by NM. It is provided to it and here RM 
> is not creating another container allocation.
> - {{localizedResources}} this stays the same if the upgrade does *NOT* 
> require additional resources IIUC.
>  
> The following changes with {{reInitializeContainer}}
> - the working directory of the upgraded container changes. It is *NOT* a 
> relaunch. 
> *Changes required in the case of docker container*
> - {{reInitializeContainer}} seems to not be working with Docker containers. 
> Investigate and fix this.
> - [Future change] Add an additional api to NM to pull the images and modify 
> {{reInitializeContainer}} to trigger docker container launch without pulling 
> the image first which could be based on a flag.
> -- When the service upgrade is initialized, we can provide the user with 
> an option to just pull the images  on the NMs.
> -- When a component instance is upgrade, it calls the 
> {{reInitializeContainer}} with the flag pull-image set to false, since the NM 
> will have already pulled the images.






[jira] [Commented] (YARN-8610) Yarn Service Upgrade: Typo in Error message

2018-08-01 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566136#comment-16566136
 ] 

genericqa commented on YARN-8610:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
28s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8610 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933982/YARN-8610.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 3d1b26ada9f6 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f2e29ac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21479/testReport/ |
| Max. process+thread count | 729 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21479/console |
| Powered by | 

[jira] [Commented] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-08-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566127#comment-16566127
 ] 

Eric Yang commented on YARN-8160:
-

What to expect when upgrading a docker container of a YARN-managed service:

The user-specified input JSON can have changes to the following:
- Version number
- Resource parameters, including the amount of cpu and memory allocated
- Environment variables
- Configuration templates
- Mounting locations

What the YARN service might change:
- Location of the launched docker container.  YARN makes a best effort to 
relaunch the container on the same node manager; if the container fails to 
relaunch there, it might be moved to a different node.
- Docker may hand out the original IP address to another container while the 
upgrade is happening, so the new instance might start with a new IP address.
- Container log.  If the relaunch is successful, the log is appended to the same 
stdout.txt and stderr.txt.  However, if the container restarts on a different 
disk or a different node, the log content might be truncated to the current 
instance.

What YARN service will not change:
- Container ID
- Hostname of container
- Application name
- Name of all components

Data stored in the node manager local directory is not guaranteed to survive an 
upgrade.  For stateful data, it is best to store it in HDFS or in a mounted 
location outside the node manager local directory.  The docker image should be 
designed to handle data conversion, to simplify the upgrade process.

The current reInitializeContainer API can work for most config changes, as long 
as the regenerated .cmd file contains parameters in the proper format for 
container-executor.  Image download is handled when container-executor launches 
"docker run".  Therefore, we don't need to worry about logic to break docker 
image download into separate steps at this time.
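
As background for the reInitializeContainer discussion, a minimal client-side sketch of the NM API involved (the NMClient call is public API; the IDs and the way the upgraded launch context is built are illustrative, and this is not the service AM code path itself):

{code}
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.client.api.NMClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ReInitSketch {
  // Ask the NM to kill, clean up, and relaunch the container in place,
  // keeping the same container ID, using the upgraded launch context.
  public static void reInit(ContainerId containerId,
      ContainerLaunchContext upgradedContext) throws Exception {
    NMClient nmClient = NMClient.createNMClient();
    nmClient.init(new YarnConfiguration());
    nmClient.start();
    try {
      // autoCommit=true finalizes the upgrade automatically on success.
      nmClient.reInitializeContainer(containerId, upgradedContext, true);
    } finally {
      nmClient.stop();
    }
  }
}
{code}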


> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637
> *Background*
> Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
> {{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
> container.
> NM performs the following steps during {{reInitializeContainer}}:
> - kills the existing process
> - cleans up the container
> - launches another container with the new {{ContainerLaunchContext}}
> NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
> upgrade the container.
> With {{reInitializeContainer}}, the following does *NOT* change
> - container ID. This is not created by NM. It is provided to it and here RM 
> is not creating another container allocation.
> - {{localizedResources}} this stays the same if the upgrade does *NOT* 
> require additional resources IIUC.
>  
> The following changes with {{reInitializeContainer}}
> - the working directory of the upgraded container changes. It is *NOT* a 
> relaunch. 
> *Changes required in the case of docker container*
> - {{reInitializeContainer}} seems to not be working with Docker containers. 
> Investigate and fix this.
> - [Future change] Add an additional api to NM to pull the images and modify 
> {{reInitializeContainer}} to trigger docker container launch without pulling 
> the image first which could be based on a flag.
> -- When the service upgrade is initialized, we can provide the user with 
> an option to just pull the images  on the NMs.
> -- When a component instance is upgrade, it calls the 
> {{reInitializeContainer}} with the flag pull-image set to false, since the NM 
> will have already pulled the images.






[jira] [Commented] (YARN-8200) Backport resource types/GPU features to branch-2

2018-08-01 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566119#comment-16566119
 ] 

Wangda Tan commented on YARN-8200:
--

[~jhung], thanks for sharing the result. Overall the number looks good.

> Backport resource types/GPU features to branch-2
> 
>
> Key: YARN-8200
> URL: https://issues.apache.org/jira/browse/YARN-8200
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: 
> counter.scheduler.operation.allocate.csv.defaultResources, 
> counter.scheduler.operation.allocate.csv.gpuResources, synth_sls.json
>
>
> Currently we have a need for GPU scheduling on our YARN clusters to support 
> deep learning workloads. However, our main production clusters are running 
> older versions of branch-2 (2.7 in our case). To prevent supporting too many 
> very different hadoop versions across multiple clusters, we would like to 
> backport the resource types/resource profiles feature to branch-2, as well as 
> the GPU specific support.
>  
> We have done a trial backport of YARN-3926 and some miscellaneous patches in 
> YARN-7069 based on issues we uncovered, and the backport was fairly smooth. 
> We also did a trial backport of most of YARN-6223 (sans docker support).
>  
> Regarding the backports, perhaps we can do the development in a feature 
> branch and then merge to branch-2 when ready.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-08-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566098#comment-16566098
 ] 

Eric Yang commented on YARN-8160:
-

The current per-instance upgrade command is almost working.  There seem to be 
some bugs when I test the API.  First, I launch a service that looks like this:

{code}
{
  "name": "sleeper-service",
  "kerberos_principal" : {
"principal_name" : "hbase/_h...@example.com",
"keytab" : "file:///etc/security/keytabs/hbase.service.keytab"
  },
  "version": "1",
  "components" :
  [
{
  "name": "ping",
  "number_of_containers": 2,
  "artifact": {
"id": "hadoop/centos:6",
"type": "DOCKER"
  },
  "launch_command": "sleep,9000",
  "resource": {
"cpus": 1,
"memory": "256"
  },
  "configuration": {
"env": {
  "YARN_CONTAINER_RUNTIME_DOCKER_DELAYED_REMOVAL":"true",
  "YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE":"true"
},
"properties": {
  "docker.network": "host"
}
  }
}
  ]
}
{code}

After the application is launched, the yarnfile is updated with a new docker 
image version, and the launch command is changed from sleep,9000 to sleep,90.

{code}
{
  "name": "sleeper-service",
  "kerberos_principal" : {
"principal_name" : "hbase/_h...@example.com",
"keytab" : "file:///etc/security/keytabs/hbase.service.keytab"
  },
  "version": "2",
  "components" :
  [
{
  "name": "ping",
  "number_of_containers": 2,
  "artifact": {
"id": "hadoop/centos:latest",
"type": "DOCKER"
  },
  "launch_command": "sleep,90",
  "resource": {
"cpus": 1,
"memory": "256"
  },
  "configuration": {
"env": {
  "YARN_CONTAINER_RUNTIME_DOCKER_DELAYED_REMOVAL":"true",
  "YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE":"true"
},
"properties": {
  "docker.network": "host"
}
  }
}
  ]
}
{code}

Then I proceeded with yarn app -upgrade sleeper -initiate yarnfile.v2, and yarn 
app -upgrade sleeper -instances ping-0,ping-1.
In the container log, it shows:

{code}
Docker run command: /usr/bin/docker run 
--name=container_e02_1533070786532_0006_01_02 --user=1013:1001 
--security-opt=no-new-privileges --net=host -v 
/usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs/application_1533070786532_0006/container_e02_1533070786532_0006_01_02:/usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs/application_1533070786532_0006/container_e02_1533070786532_0006_01_02:rw
 -v 
/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1533070786532_0006:/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1533070786532_0006:rw
 -v 
/tmp/hadoop-yarn/nm-local-dir/filecache:/tmp/hadoop-yarn/nm-local-dir/filecache:ro
 -v 
/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache:/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache:ro
 --cap-drop=ALL --cap-add=SYS_CHROOT --cap-add=MKNOD --cap-add=SETFCAP 
--cap-add=SETPCAP --cap-add=FSETID --cap-add=CHOWN --cap-add=AUDIT_WRITE 
--cap-add=SETGID --cap-add=NET_RAW --cap-add=FOWNER --cap-add=SETUID 
--cap-add=DAC_OVERRIDE --cap-add=KILL --cap-add=NET_BIND_SERVICE 
--hostname=ping-0.s1.hbase.ycluster --group-add 1001 --group-add 982 --env-file 
/tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1533070786532_0006/container_e02_1533070786532_0006_01_02/docker.container_e02_1533070786532_0006_01_026435836068142984694.env
 hadoop/centos:6 sleep 9 
Launching docker container...
Docker run command: /usr/bin/docker run 
--name=container_e02_1533070786532_0006_01_02 --user=1013:1001 
--security-opt=no-new-privileges --net=host -v 
/usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs/application_1533070786532_0006/container_e02_1533070786532_0006_01_02:/usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs/application_1533070786532_0006/container_e02_1533070786532_0006_01_02:rw
 -v 
/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1533070786532_0006:/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1533070786532_0006:rw
 -v 
/tmp/hadoop-yarn/nm-local-dir/filecache:/tmp/hadoop-yarn/nm-local-dir/filecache:ro
 -v 
/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache:/tmp/hadoop-yarn/nm-local-dir/usercache/hbase/filecache:ro
 --cap-drop=ALL --cap-add=SYS_CHROOT --cap-add=MKNOD --cap-add=SETFCAP 
--cap-add=SETPCAP --cap-add=FSETID --cap-add=CHOWN --cap-add=AUDIT_WRITE 
--cap-add=SETGID --cap-add=NET_RAW --cap-add=FOWNER --cap-add=SETUID 
--cap-add=DAC_OVERRIDE --cap-add=KILL --cap-add=NET_BIND_SERVICE 
--hostname=ping-0.s1.hbase.ycluster --group-add 1001 --group-add 982 --env-file 
/tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1533070786532_0006/container_e02_1533070786532_0006_01_02/docker.container_e02_1533070786532_0006_01_02254751351532328192.env
 

[jira] [Updated] (YARN-8611) With restart policy set to ON_FAILURE, the service state sometimes doesn't reach STABLE state

2018-08-01 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8611:

Summary: With restart policy set to ON_FAILURE, the service state sometimes 
doesn't reach STABLE state  (was: With restart policy set to ON_FAILURE, the 
service state doesn't reach STABLE state)

> With restart policy set to ON_FAILURE, the service state sometimes doesn't 
> reach STABLE state
> -
>
> Key: YARN-8611
> URL: https://issues.apache.org/jira/browse/YARN-8611
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Priority: Major
>
> - Launched a docker based sleeper service with {{restart_policy = 
> ON_FAILURE}}.
>  - There are container failures but eventually both the component instances 
> reach {{READY}} state
>  - However the SERVICE state remains {{STARTED}}
> Below is the service status json:
> {code:java}
>     "components": [
>         {
>             "artifact": {
>                 "id": "hadoop/centos:6",
>                 "type": "DOCKER"
>             },
>             "configuration": {
>                 "env": {
>                     "YARN_CONTAINER_RUNTIME_DOCKER_DELAYED_REMOVAL": "true",
>                     "YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE": 
> "true"
>                 },
>                 "files": [],
>                 "properties": {
>                     "docker.network": "host"
>                 }
>             },
>             "containers": [
>                 {
>                     "bare_host": "{host1}",
>                     "component_instance_name": "ping-1",
>                     "hostname": "ping-1.s.hbase.ycluster",
>                     "id": "container_e02_1533070786532_0005_01_03",
>                     "ip": "172.26.111.21",
>                     "launch_time": 1533159861113,
>                     "state": "READY"
>                 },
>                 {
>                     "bare_host": "{host2}",
>                     "component_instance_name": "ping-0",
>                     "hostname": "ping-0.s.hbase.ycluster",
>                     "id": "container_e02_1533070786532_0005_01_07",
>                     "ip": "172.26.111.21",
>                     "launch_time": 1533160113627,
>                     "state": "READY"
>                 }
>             ],
>             "dependencies": [],
>             "launch_command": "sleep 9",
>             "name": "ping",
>             "number_of_containers": 2,
>             "quicklinks": [],
>             "resource": {
>                 "additional": {},
>                 "cpus": 1,
>                 "memory": "256"
>             },
>             "restart_policy": "ON_FAILURE",
>             "run_privileged_container": false,
>             "state": "STABLE"
>         }
>     ],
>     "configuration": {
>         "env": {},
>         "files": [],
>         "properties": {}
>     },
>     "id": "application_1533070786532_0005",
>     "kerberos_principal": {
>         "keytab": "...",
>         "principal_name": "..."
>     },
>     "lifetime": -1,
>     "name": "s",
>     "quicklinks": {},
>     "state": "STARTED",
>     "version": "1"
> }{code}
> The service state needs to become {{STABLE}} since all the component 
> instances are {{READY}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8611) With restart policy set to ON_FAILURE, the service state doesn't reach STABLE state

2018-08-01 Thread Chandni Singh (JIRA)
Chandni Singh created YARN-8611:
---

 Summary: With restart policy set to ON_FAILURE, the service state 
doesn't reach STABLE state
 Key: YARN-8611
 URL: https://issues.apache.org/jira/browse/YARN-8611
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chandni Singh


- Launched a Docker-based sleeper service with {{restart_policy = ON_FAILURE}}.
 - There are container failures, but eventually both component instances 
reach the {{READY}} state.
 - However, the SERVICE state remains {{STARTED}}.

Below is the service status json:
{code:java}
    "components": [
        {
            "artifact": {
                "id": "hadoop/centos:6",
                "type": "DOCKER"
            },
            "configuration": {
                "env": {
                    "YARN_CONTAINER_RUNTIME_DOCKER_DELAYED_REMOVAL": "true",
                    "YARN_CONTAINER_RUNTIME_DOCKER_RUN_OVERRIDE_DISABLE": "true"
                },
                "files": [],
                "properties": {
                    "docker.network": "host"
                }
            },
            "containers": [
                {
                    "bare_host": “{host1}“,
                    "component_instance_name": "ping-1",
                    "hostname": "ping-1.s.hbase.ycluster",
                    "id": "container_e02_1533070786532_0005_01_03",
                    "ip": "172.26.111.21",
                    "launch_time": 1533159861113,
                    "state": "READY"
                },
                {
                    "bare_host": “{host2}“,
                    "component_instance_name": "ping-0",
                    "hostname": "ping-0.s.hbase.ycluster",
                    "id": "container_e02_1533070786532_0005_01_07",
                    "ip": "172.26.111.21",
                    "launch_time": 1533160113627,
                    "state": "READY"
                }
            ],
            "dependencies": [],
            "launch_command": "sleep 9",
            "name": "ping",
            "number_of_containers": 2,
            "quicklinks": [],
            "resource": {
                "additional": {},
                "cpus": 1,
                "memory": "256"
            },
            "restart_policy": "ON_FAILURE",
            "run_privileged_container": false,
            "state": "STABLE"
        }
    ],
    "configuration": {
        "env": {},
        "files": [],
        "properties": {}
    },
    "id": "application_1533070786532_0005",
    "kerberos_principal": {
        "keytab": "...",
        "principal_name": "..."
    },
    "lifetime": -1,
    "name": "s",
    "quicklinks": {},
    "state": "STARTED",
    "version": "1"
}{code}

The service state needs to become {{STABLE}} since all the component instances 
are {{READY}}.
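
For illustration, a minimal sketch of the state derivation expected here, using made-up 
enum and class names rather than the actual yarn-service code:
{code:java}
import java.util.List;

// Hypothetical, simplified types used only for illustration.
enum ComponentState { FLEXING, STABLE }
enum ServiceState { STARTED, STABLE }

final class ServiceStateCheck {
  /**
   * Derive the service-level state from its components: once every
   * component is STABLE (all of its instances are READY), the service
   * should be reported as STABLE instead of STARTED.
   */
  static ServiceState deriveState(List<ComponentState> componentStates) {
    boolean allStable = componentStates.stream()
        .allMatch(s -> s == ComponentState.STABLE);
    return allStable ? ServiceState.STABLE : ServiceState.STARTED;
  }
}
{code}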



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8610) Yarn Service Upgrade: Typo in Error message

2018-08-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566061#comment-16566061
 ] 

Eric Yang commented on YARN-8610:
-

+1 on the message change.

> Yarn Service Upgrade: Typo in Error message
> ---
>
> Key: YARN-8610
> URL: https://issues.apache.org/jira/browse/YARN-8610
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8610.001.patch
>
>
> Upgrade can only be initiated when the service state = STABLE. 
> However the error message says the opposite:
> {code}
> 2018-08-01 21:48:44,965 ERROR client.ApiServiceClient: s is at STARTED state, 
> upgrade can not be invoked when service is STABLE.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8610) Yarn Service Upgrade: Typo in Error message

2018-08-01 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8610:

Attachment: YARN-8610.001.patch

> Yarn Service Upgrade: Typo in Error message
> ---
>
> Key: YARN-8610
> URL: https://issues.apache.org/jira/browse/YARN-8610
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8610.001.patch
>
>
> Upgrade can only be initiated when the service state = STABLE. 
> However the error message says the opposite:
> {code}
> 2018-08-01 21:48:44,965 ERROR client.ApiServiceClient: s is at STARTED state, 
> upgrade can not be invoked when service is STABLE.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8610) Yarn Service Upgrade: Typo in Error message

2018-08-01 Thread Chandni Singh (JIRA)
Chandni Singh created YARN-8610:
---

 Summary: Yarn Service Upgrade: Typo in Error message
 Key: YARN-8610
 URL: https://issues.apache.org/jira/browse/YARN-8610
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Chandni Singh
Assignee: Chandni Singh


Upgrade can only be initiated when the service state = STABLE. 
However the error message says the opposite:
{code}
2018-08-01 21:48:44,965 ERROR client.ApiServiceClient: s is at STARTED state, 
upgrade can not be invoked when service is STABLE.
{code}
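
For reference, a hedged sketch of the intended check with the corrected wording; the 
class below is a placeholder, not the actual ApiServiceClient code:
{code:java}
// Illustrative placeholder types; not the actual service client classes.
enum ServiceState { STABLE, STARTED, STOPPED }

final class UpgradePrecondition {
  static void checkUpgradeAllowed(String serviceName, ServiceState state) {
    if (state != ServiceState.STABLE) {
      // Corrected wording: upgrade requires STABLE, so report what is required.
      throw new IllegalStateException(String.format(
          "%s is at %s state, upgrade can only be invoked when service is STABLE.",
          serviceName, state));
    }
  }
}
{code}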



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8559) Expose mutable-conf scheduler's configuration in RM /scheduler-conf endpoint

2018-08-01 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566045#comment-16566045
 ] 

Wangda Tan edited comment on YARN-8559 at 8/1/18 9:57 PM:
--

Thanks [~cheersyang], latest patch LGTM. 

[~jhung], given this is related to dynamic queue config feature, could u also 
take a look at this? 

+ [~sunil.gov...@gmail.com]


was (Author: leftnoteasy):
Thanks [~cheersyang], latest patch LGTM. 

[~jhung], given this is related to dynamic queue config feature, could u also 
take a look at this? 

> Expose mutable-conf scheduler's configuration in RM /scheduler-conf endpoint
> 
>
> Key: YARN-8559
> URL: https://issues.apache.org/jira/browse/YARN-8559
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Anna Savarin
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8559.001.patch, YARN-8559.002.patch, 
> YARN-8559.003.patch, YARN-8559.004.patch
>
>
> All Hadoop services provide a set of common endpoints (/stacks, /logLevel, 
> /metrics, /jmx, /conf).  In the case of the Resource Manager, part of the 
> configuration comes from the scheduler being used.  Currently, these 
> configuration key/values are not exposed through the /conf endpoint, thereby 
> revealing an incomplete configuration picture. 
> Make an improvement and expose the scheduling configuration info through the 
> RM's /conf endpoint.
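
As a usage illustration (not part of the patch), the two endpoints can be fetched and 
compared; the RM address below is an assumption, and the paths used are the common 
/conf servlet and the /ws/v1/cluster/scheduler-conf REST resource:
{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RmConfDump {
  public static void main(String[] args) throws Exception {
    // Assumed RM web address; adjust for your cluster.
    String rm = "http://localhost:8088";
    for (String path : new String[] {"/conf", "/ws/v1/cluster/scheduler-conf"}) {
      HttpURLConnection conn =
          (HttpURLConnection) new URL(rm + path).openConnection();
      conn.setRequestProperty("Accept", "application/xml");
      try (BufferedReader in = new BufferedReader(
          new InputStreamReader(conn.getInputStream()))) {
        System.out.println("== " + path + " ==");
        String line;
        while ((line = in.readLine()) != null) {
          System.out.println(line);
        }
      } finally {
        conn.disconnect();
      }
    }
  }
}
{code}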



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8559) Expose mutable-conf scheduler's configuration in RM /scheduler-conf endpoint

2018-08-01 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566045#comment-16566045
 ] 

Wangda Tan commented on YARN-8559:
--

Thanks [~cheersyang], latest patch LGTM. 

[~jhung], given this is related to dynamic queue config feature, could u also 
take a look at this? 

> Expose mutable-conf scheduler's configuration in RM /scheduler-conf endpoint
> 
>
> Key: YARN-8559
> URL: https://issues.apache.org/jira/browse/YARN-8559
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Anna Savarin
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8559.001.patch, YARN-8559.002.patch, 
> YARN-8559.003.patch, YARN-8559.004.patch
>
>
> All Hadoop services provide a set of common endpoints (/stacks, /logLevel, 
> /metrics, /jmx, /conf).  In the case of the Resource Manager, part of the 
> configuration comes from the scheduler being used.  Currently, these 
> configuration key/values are not exposed through the /conf endpoint, thereby 
> revealing an incomplete configuration picture. 
> Make an improvement and expose the scheduling configuration info through the 
> RM's /conf endpoint.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4946) RM should not consider an application as COMPLETED when log aggregation is not in a terminal state

2018-08-01 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566033#comment-16566033
 ] 

Chandni Singh commented on YARN-4946:
-

[~snemeth] [~rkanter] I have a question about this issue.
{quote} When the RM "forgets" about an older completed Application (e.g. RM 
failover, enough time has passed, etc), the tool won't find the Application in 
the RM and will just assume that its log aggregation succeeded, even if it 
actually failed or is still running.
{quote}
Is the {{log aggregation status}} of an application available only from the RM? Is 
it not available from the application-history-service or the 
application-timeline-service?


> RM should not consider an application as COMPLETED when log aggregation is 
> not in a terminal state
> --
>
> Key: YARN-4946
> URL: https://issues.apache.org/jira/browse/YARN-4946
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-4946.001.patch
>
>
> MAPREDUCE-6415 added a tool that combines the aggregated log files for each 
> Yarn App into a HAR file.  When run, it seeds the list by looking at the 
> aggregated logs directory, and then filters out ineligible apps.  One of the 
> criteria involves checking with the RM that an Application's log aggregation 
> status is not still running and has not failed.  When the RM "forgets" about 
> an older completed Application (e.g. RM failover, enough time has passed, 
> etc), the tool won't find the Application in the RM and will just assume that 
> its log aggregation succeeded, even if it actually failed or is still running.
> We can solve this problem by doing the following:
> The RM should not consider an app to be fully completed (and thus removed 
> from its history) until the aggregation status has reached a terminal state 
> (e.g. SUCCEEDED, FAILED, TIME_OUT).
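
A hedged sketch of the proposed gating logic; {{LogAggregationStatus}} is the existing 
YARN enum, but the surrounding helper is illustrative only:
{code:java}
import java.util.EnumSet;
import org.apache.hadoop.yarn.api.records.LogAggregationStatus;

final class LogAggregationGate {
  private static final EnumSet<LogAggregationStatus> TERMINAL =
      EnumSet.of(LogAggregationStatus.SUCCEEDED,
                 LogAggregationStatus.FAILED,
                 LogAggregationStatus.TIME_OUT);

  /**
   * The RM should only "forget" a finished application once its log
   * aggregation has reached a terminal state (or aggregation is disabled).
   */
  static boolean canRemoveFromRMHistory(LogAggregationStatus status) {
    return status == LogAggregationStatus.DISABLED || TERMINAL.contains(status);
  }
}
{code}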



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8263) DockerClient still touches hadoop.tmp.dir

2018-08-01 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566002#comment-16566002
 ] 

genericqa commented on YARN-8263:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
28s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8263 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933970/YARN-8263.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 419982c9d1fd 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f2e29ac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21476/testReport/ |
| Max. process+thread count | 440 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21476/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DockerClient still touches hadoop.tmp.dir
> 

[jira] [Commented] (YARN-8509) Fix UserLimit calculation for preemption to balance scenario after queue satisfied

2018-08-01 Thread Chandni Singh (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16566001#comment-16566001
 ] 

Chandni Singh commented on YARN-8509:
-

LGTM

> Fix UserLimit calculation for preemption to balance scenario after queue 
> satisfied  
> 
>
> Key: YARN-8509
> URL: https://issues.apache.org/jira/browse/YARN-8509
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Zian Chen
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-8509.001.patch, YARN-8509.002.patch, 
> YARN-8509.003.patch
>
>
> In LeafQueue#getTotalPendingResourcesConsideringUserLimit, we calculate total 
> pending resource based on user-limit percent and user-limit factor, which caps 
> the pending resource for each user to the minimum of the user-limit pending and 
> the actual pending. This prevents a queue from taking more pending resource to 
> achieve queue balance after all queues are satisfied with their ideal allocation.
>   
>  We need to change the logic so that queue pending resource can go beyond the user limit.
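
In simplified form (plain memory values instead of the Resource/ResourceCalculator 
machinery, all names made up), the current capping versus the proposed behaviour looks 
roughly like:
{code:java}
final class PendingResourceExample {
  /** Current behaviour: each user's pending is capped at its user-limit headroom. */
  static long totalPendingCapped(long[] userPending, long[] userLimitHeadroom) {
    long total = 0;
    for (int i = 0; i < userPending.length; i++) {
      total += Math.min(userPending[i], userLimitHeadroom[i]);
    }
    return total;
  }

  /** Proposed behaviour for the queue-balance stage: let pending exceed the user limit. */
  static long totalPendingUncapped(long[] userPending) {
    long total = 0;
    for (long p : userPending) {
      total += p;
    }
    return total;
  }
}
{code}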



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7089) Mark the log-aggregation-controller APIs as public

2018-08-01 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565998#comment-16565998
 ] 

genericqa commented on YARN-7089:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 15s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
22s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-7089 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933968/YARN-7089.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 059f29843069 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 
17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f2e29ac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21477/testReport/ |
| Max. process+thread count | 291 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21477/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Mark the 

[jira] [Updated] (YARN-8588) Logging improvements for better debuggability

2018-08-01 Thread Suma Shivaprasad (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-8588:
---
Attachment: YARN-8588.2.patch

> Logging improvements for better debuggability
> -
>
> Key: YARN-8588
> URL: https://issues.apache.org/jira/browse/YARN-8588
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-8588.1.patch, YARN-8588.2.patch
>
>
> Capacity allocations decided in GuaranteedCapacityOvertimePolicy are 
> available via AutoCreatedLeafQueueConfig. However this class lacks a toString 
> and some other DEBUG level logs are needed for better debuggability
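
A hedged sketch of the kind of change being asked for; the fields below are 
hypothetical, the real class has its own members:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative stand-in for AutoCreatedLeafQueueConfig; fields are hypothetical.
final class AutoCreatedLeafQueueConfigExample {
  private static final Logger LOG =
      LoggerFactory.getLogger(AutoCreatedLeafQueueConfigExample.class);

  private final float capacity;
  private final float maxCapacity;

  AutoCreatedLeafQueueConfigExample(float capacity, float maxCapacity) {
    this.capacity = capacity;
    this.maxCapacity = maxCapacity;
  }

  @Override
  public String toString() {
    return "AutoCreatedLeafQueueConfig{capacity=" + capacity
        + ", maxCapacity=" + maxCapacity + "}";
  }

  void logDecision() {
    if (LOG.isDebugEnabled()) {
      LOG.debug("Capacity allocation decided: {}", this);
    }
  }
}
{code}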



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8588) Logging improvements for better debuggability

2018-08-01 Thread Wangda Tan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565926#comment-16565926
 ] 

Wangda Tan commented on YARN-8588:
--

[~suma.shivaprasad], could you help to take care of the findbugs warning? Apart 
from that, patch LGTM.

> Logging improvements for better debuggability
> -
>
> Key: YARN-8588
> URL: https://issues.apache.org/jira/browse/YARN-8588
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
>Priority: Major
> Attachments: YARN-8588.1.patch
>
>
> Capacity allocations decided in GuaranteedCapacityOvertimePolicy are 
> available via AutoCreatedLeafQueueConfig. However this class lacks a toString 
> and some other DEBUG level logs are needed for better debuggability



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7833) [PERF/TEST] Extend SLS to support simulation of a Federated Environment

2018-08-01 Thread Tanuj Nayak (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565919#comment-16565919
 ] 

Tanuj Nayak commented on YARN-7833:
---

Can someone take a look at this? [~curino] [~giovanni.fumarola] [~subru]

> [PERF/TEST] Extend SLS to support simulation of a Federated Environment
> ---
>
> Key: YARN-7833
> URL: https://issues.apache.org/jira/browse/YARN-7833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Carlo Curino
>Assignee: Tanuj Nayak
>Priority: Major
> Attachments: YARN-7833.v1.patch, YARN-7833.v2.patch, 
> YARN-7833.v3.patch, YARN-7833.v4.patch, YARN-7833.v5.patch, 
> YARN-7833.v6.patch, YARN-7833.v7.patch
>
>
> To develop algorithms for federation, it would be of great help to have a 
> version of SLS that supports multi RMs and GPG.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7089) Mark the log-aggregation-controller APIs as public

2018-08-01 Thread Zian Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zian Chen reassigned YARN-7089:
---

Assignee: Zian Chen  (was: Xuan Gong)

> Mark the log-aggregation-controller APIs as public
> --
>
> Key: YARN-7089
> URL: https://issues.apache.org/jira/browse/YARN-7089
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Zian Chen
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7089) Mark the log-aggregation-controller APIs as public

2018-08-01 Thread Zian Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565895#comment-16565895
 ] 

Zian Chen commented on YARN-7089:
-

[~djp] [~xgong], [~rkanter] could you help review the patch? Thanks
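
For context, marking an API public in Hadoop is done via the audience/stability 
annotations; a hedged sketch with a placeholder class (the real target is the 
log aggregation file controller API in hadoop-yarn-common):
{code:java}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Placeholder class illustrating the annotation change only; the method here is
// hypothetical and not part of the actual controller API.
@InterfaceAudience.Public
@InterfaceStability.Unstable
public abstract class ExampleLogAggregationController {
  public abstract void initializeWriter() throws Exception;
}
{code}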

 

> Mark the log-aggregation-controller APIs as public
> --
>
> Key: YARN-7089
> URL: https://issues.apache.org/jira/browse/YARN-7089
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7089.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7089) Mark the log-aggregation-controller APIs as public

2018-08-01 Thread Zian Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zian Chen updated YARN-7089:

Attachment: YARN-7089.001.patch

> Mark the log-aggregation-controller APIs as public
> --
>
> Key: YARN-7089
> URL: https://issues.apache.org/jira/browse/YARN-7089
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Zian Chen
>Priority: Major
> Attachments: YARN-7089.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-08-01 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8160:

Description: 
Ability to upgrade dockerized  yarn native services.
Ref: YARN-5637

*Background*
Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
NM performs the following steps during {{reInitializeContainer}}:
- kills the existing process
- cleans up the container
- launches another container with the new {{ContainerLaunchContext}}

NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
upgrade the container.

With {{reInitializeContainer}}, the following does *NOT* change
- container ID. This is not created by NM. It is provided to it and here RM is 
not creating another container allocation.
- {{localizedResources}} this stays the same if the upgrade does *NOT* require 
additional resources IIUC.
 
The following changes with {{reInitializeContainer}}
- the working directory of the upgraded container changes. It is *NOT* a 
relaunch. 

*Changes required in the case of docker container*
- {{reInitializeContainer}} seems to not be working with Docker containers. 
Investigate and fix this.
- [Future change] Add an additional api to NM to pull the images and modify 
{{reInitializeContainer}} to trigger docker container launch without pulling 
the image first which could be based on a flag.
-- When the service upgrade is initialized, we can provide the user with an 
option to just pull the images  on the NMs.
-- When a component instance is upgraded, it calls the 
{{reInitializeContainer}} with the flag pull-image set to false, since the NM 
will have already pulled the images.

  was:
Ability to upgrade dockerized  yarn native services.
Ref: YARN-5637

*Background*
Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
NM performs the following steps during {{reInitializeContainer}}:
- kills the existing process
- cleans up the container
- launches another container with the new {{ContainerLaunchContext}}

NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
upgrade the container.

With {{reInitializeContainer}}, the following does *NOT* change
- container ID. This is not created by NM. It is provided to it and here RM is 
not creating another container allocation.
- {{localizedResources}} this stays the same if the upgrade does *NOT* require 
additional resources IIUC.
 
The following changes with {{reInitializeContainer}}
- the working directory of the upgraded container changes. It is *NOT* a 
relaunch. 

*Changes required in the case of docker container*
- {{reInitializeContainer}} seems to not be working with Docker containers. 
Investigate and fix this.
- [Future change] Add an additional api to NM to pull the images and modify 
{{reInitializeContainer}} to trigger docker container launch without pulling 
the image first which could be based on a flag.
-- When the service upgrade is initialized, the ServiceMaster can trigger 
the NMs to pull the image. 
-- When a component instance is upgraded, it calls the 
{{reInitializeContainer}} with the flag pull-image set to false, since the NM 
will have already pulled the images.


> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637
> *Background*
> Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
> {{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
> container.
> NM performs the following steps during {{reInitializeContainer}}:
> - kills the existing process
> - cleans up the container
> - launches another container with the new {{ContainerLaunchContext}}
> NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
> upgrade the container.
> With {{reInitializeContainer}}, the following does *NOT* change
> - container ID. This is not created by NM. It is provided to it and here RM 
> is not creating another container allocation.
> - {{localizedResources}} this stays the same if the upgrade does *NOT* 
> require additional resources IIUC.
>  
> The following changes with {{reInitializeContainer}}
> - the working directory of the upgraded container changes. It is *NOT* a 
> relaunch. 
> *Changes required in the case of docker container*
> - 

[jira] [Updated] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-08-01 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8160:

Description: 
Ability to upgrade dockerized  yarn native services.
Ref: YARN-5637

*Background*
Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
NM performs the following steps during {{reInitializeContainer}}:
- kills the existing process
- cleans up the container
- launches another container with the new {{ContainerLaunchContext}}

NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
upgrade the container.

With {{reInitializeContainer}}, the following does *NOT* change
- container ID. This is not created by NM. It is provided to it and here RM is 
not creating another container allocation.
- {{localizedResources}} this stays the same if the upgrade does *NOT* require 
additional resources IIUC.
 
The following changes with {{reInitializeContainer}}
- the working directory of the upgraded container changes. It is *NOT* a 
relaunch. 

*Changes required in the case of docker container*
- {{reInitializeContainer}} seems to not be working with Docker containers. 
Investigate and fix this.
- [Future change] Add an additional api to NM to pull the images and modify 
{{reInitializeContainer}} to trigger docker container launch without pulling 
the image first which could be based on a flag.
-- When the service upgrade is initialized, the ServiceMaster can trigger 
the NMs to pull the image. 
-- When a component instance is upgraded, it calls the 
{{reInitializeContainer}} with the flag pull-image set to false, since the NM 
will have already pulled the images.

  was:
Ability to upgrade dockerized  yarn native services.
Ref: YARN-5637

*Background*
Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
NM performs the following steps during {{reInitializeContainer}}:
- kills the existing process
- cleans up the container
- launches another container with the new {{ContainerLaunchContext}}

NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
upgrade the container.

With {{reInitializeContainer}}, the following does *NOT* change
- container ID. This is not created by NM. It is provided to it and here RM is 
not creating another container allocation.
- {{localizedResources}} this stays the same if the upgrade does *NOT* require 
additional resources IIUC.
 
The following changes with {{reInitializeContainer}}
- the working directory of the upgraded container changes. It is *NOT* a 
relaunch. 

*Changes required in the case of docker container*
- {{reInitializeContainer}} seems to not be working with Docker containers. 
Investigate and fix this.
- [Future change] Add an additional api to NM to pull the images and modify 
{{reInitializeContainer}} to trigger docker container launch without pulling 
the image first which could be based on a flag.
-- When the service upgrade is initialized, the ServiceMaster can trigger 
the NMs to pull the image
-- When a component instance is upgraded, it calls the 
{{reInitializeContainer}} with the flag pull-image set to false, since the NM 
will have already pulled the images.


> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637
> *Background*
> Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
> {{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
> container.
> NM performs the following steps during {{reInitializeContainer}}:
> - kills the existing process
> - cleans up the container
> - launches another container with the new {{ContainerLaunchContext}}
> NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
> upgrade the container.
> With {{reInitializeContainer}}, the following does *NOT* change
> - container ID. This is not created by NM. It is provided to it and here RM 
> is not creating another container allocation.
> - {{localizedResources}} this stays the same if the upgrade does *NOT* 
> require additional resources IIUC.
>  
> The following changes with {{reInitializeContainer}}
> - the working directory of the upgraded container changes. It is *NOT* a 
> relaunch. 
> *Changes required in the case of docker container*
> - {{reInitializeContainer}} 

[jira] [Updated] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-08-01 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8160:

Description: 
Ability to upgrade dockerized  yarn native services.
Ref: YARN-5637

*Background*
Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
NM performs the following steps during {{reInitializeContainer}}:
- kills the existing process
- cleans up the container
- launches another container with the new {{ContainerLaunchContext}}

NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
upgrade the container.

With {{reInitializeContainer}}, the following does *NOT* change
- container ID. This is not created by NM. It is provided to it and here RM is 
not creating another container allocation.
- {{localizedResources}} this stays the same if the upgrade does *NOT* require 
additional resources IIUC.
 
The following changes with {{reInitializeContainer}}
- the working directory of the upgraded container changes. It is *NOT* a 
relaunch. 

*Changes required in the case of docker container*
- {{reInitializeContainer}} seems to not be working with Docker containers. 
Investigate and fix this.
- [Future change] Add an additional api to NM to pull the images and modify 
{{reInitializeContainer}} to trigger docker container launch without pulling 
the image first which could be based on a flag.
-- When the service upgrade is initialized, the ServiceMaster can trigger 
the NMs to pull the image
-- When a component instance is upgraded, it calls the 
{{reInitializeContainer}} with the flag pull-image set to false, since the NM 
will have already pulled the images.

  was:
Ability to upgrade dockerized  yarn native services.
Ref: YARN-5637

*Background*
Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
NM performs the following steps during {{reInitializeContainer}}:
- kills the existing process
- cleans up the container
- launches another container with the new {{ContainerLaunchContext}}

NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
upgrade the container.

With {{reInitializeContainer}}, the following does *NOT* change
- container ID. This is not created by NM. It is provided to it and here RM is 
not creating another container allocation.
- {{localizedResources}} this stays the same if the upgrade does *NOT* require 
additional resources IIUC.
 
The following changes with {{reInitializeContainer}}
- the working directory of the upgraded container changes. It is *NOT* a 
relaunch. 

*Changes required for docker container*
- {{reInitializeContainer}} seems to not be working with Docker containers. 
Investigate and fix this.
- [Future change] Add an additional api to NM to pull the images and modify 
{{reInitializeContainer}} to trigger docker container launch without pulling 
the image first which could be based on a flag.
-- When the service upgrade is initialized, the ServiceMaster can trigger 
the NMs to pull the image
-- When a component instance is upgraded, it calls the 
{{reInitializeContainer}} with the flag pull-image set to false, since the NM 
will have already pulled the images.


> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637
> *Background*
> Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
> {{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
> container.
> NM performs the following steps during {{reInitializeContainer}}:
> - kills the existing process
> - cleans up the container
> - launches another container with the new {{ContainerLaunchContext}}
> NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
> upgrade the container.
> With {{reInitializeContainer}}, the following does *NOT* change
> - container ID. This is not created by NM. It is provided to it and here RM 
> is not creating another container allocation.
> - {{localizedResources}} this stays the same if the upgrade does *NOT* 
> require additional resources IIUC.
>  
> The following changes with {{reInitializeContainer}}
> - the working directory of the upgraded container changes. It is *NOT* a 
> relaunch. 
> *Changes required in the case of docker container*
> - {{reInitializeContainer}} seems to not 

[jira] [Updated] (YARN-8136) Add version attribute to site doc examples and quickstart

2018-08-01 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8136:

Attachment: YARN-8136.001.patch

> Add version attribute to site doc examples and quickstart
> -
>
> Key: YARN-8136
> URL: https://issues.apache.org/jira/browse/YARN-8136
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: site
>Reporter: Gour Saha
>Priority: Major
> Attachments: YARN-8136.001.patch
>
>
> version attribute is missing in the following 2 site doc files -
> src/site/markdown/yarn-service/Examples.md
> src/site/markdown/yarn-service/QuickStart.md



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-08-01 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8160:

Description: 
Ability to upgrade dockerized  yarn native services.
Ref: YARN-5637

*Background*
Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
NM performs the following steps during {{reInitializeContainer}}:
- kills the existing process
- cleans up the container
- launches another container with the new {{ContainerLaunchContext}}

NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
upgrade the container.

With {{reInitializeContainer}}, the following does *NOT* change
- container ID. This is not created by NM. It is provided to it and here RM is 
not creating another container allocation.
- {{localizedResources}} this stays the same if the upgrade does *NOT* require 
additional resources IIUC.
 
The following changes with {{reInitializeContainer}}
- the working directory of the upgraded container changes. It is *NOT* a 
relaunch. 

*Changes required for docker container*
- {{reInitializeContainer}} seems to not be working with Docker containers. 
Investigate and fix this.
- [Future change] Add an additional api to NM to pull the images and modify 
{{reInitializeContainer}} to trigger docker container launch without pulling 
the image first which could be based on a flag.
-- When the service upgrade is initialized, the ServiceMaster can trigger 
the NMs to pull the image
-- When a component instance is upgraded, it calls the 
{{reInitializeContainer}} with the flag pull-image set to false, since the NM 
will have already pulled the images.

  was:
Ability to upgrade dockerized  yarn native services.
Ref: YARN-5637

*Background*
Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
NM performs the following steps during {{reInitializeContainer}}:
- kills the existing process
- cleans up the container
- launches another container with the new {{ContainerLaunchContext}}

NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
upgrade the container.

With {{reInitializeContainer}}, the following does *NOT* change
- container ID. This is not created by NM. It is provided to it and here RM is 
not creating another container allocation.
- {{localizedResources}} this stays the same if the upgrade does *NOT* require 
additional resources IIUC.
 
The following changes with {{reInitializeContainer}}
- the working directory of the upgraded container changes. It is *NOT* a 
relaunch. 




> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637
> *Background*
> Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
> {{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
> container.
> NM performs the following steps during {{reInitializeContainer}}:
> - kills the existing process
> - cleans up the container
> - launches another container with the new {{ContainerLaunchContext}}
> NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
> upgrade the container.
> With {{reInitializeContainer}}, the following does *NOT* change
> - container ID. This is not created by NM. It is provided to it and here RM 
> is not creating another container allocation.
> - {{localizedResources}} this stays the same if the upgrade does *NOT* 
> require additional resources IIUC.
>  
> The following changes with {{reInitializeContainer}}
> - the working directory of the upgraded container changes. It is *NOT* a 
> relaunch. 
> *Changes required for docker container*
> - {{reInitializeContainer}} seems to not be working with Docker containers. 
> Investigate and fix this.
> - [Future change] Add an additional api to NM to pull the images and modify 
> {{reInitializeContainer}} to trigger docker container launch without pulling 
> the image first which could be based on a flag.
> -- When the service upgrade is initialized, the ServiceMaster can trigger 
> the NMs to pull the image
> -- When a component instance is upgraded, it calls the 
> {{reInitializeContainer}} with the flag pull-image set to false, since the NM 
> will have already pulled the images.
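
As a usage illustration of the NM-side API referenced above, a hedged sketch of calling 
{{reInitializeContainer}} through the {{NMClient}} client wrapper (assuming it exposes 
that method as in recent Hadoop releases); all identifiers and the new launch context 
are assumed to be prepared by the caller:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.client.api.NMClient;

final class ReInitExample {
  /**
   * Re-initialize an existing container in place: same ContainerId,
   * new ContainerLaunchContext (new artifact/launch command for the upgrade).
   */
  static void upgradeContainer(ContainerId containerId,
      ContainerLaunchContext newLaunchContext) throws Exception {
    NMClient nmClient = NMClient.createNMClient();
    nmClient.init(new Configuration());
    nmClient.start();
    try {
      // autoCommit=true: commit the re-initialization without an explicit
      // commitLastReInitialization call.
      nmClient.reInitializeContainer(containerId, newLaunchContext, true);
    } finally {
      nmClient.stop();
    }
  }
}
{code}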



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (YARN-8600) RegistryDNS hang when remote lookup does not reply

2018-08-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565807#comment-16565807
 ] 

Hudson commented on YARN-8600:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14688 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14688/])
YARN-8600. RegistryDNS hang when remote lookup does not reply. (skumpf: rev 
603a57476ce0bf9514f0432a235f29432ca4c323)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/dns/LookupTask.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/test/java/org/apache/hadoop/registry/server/dns/TestRegistryDNS.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/src/main/java/org/apache/hadoop/registry/server/dns/RegistryDNS.java


> RegistryDNS hang when remote lookup does not reply
> --
>
> Key: YARN-8600
> URL: https://issues.apache.org/jira/browse/YARN-8600
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Critical
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8600.001.patch, YARN-8600.002.patch, 
> YARN-8600.003.patch
>
>
> If the lookup type mismatches the record being queried, the remote DNS server 
> might not reply.  For example, looking up a CNAME record with a PTR address: 
> 1.76.27.172.in-addr.arpa.  This can hang RegistryDNS.
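
The general pattern behind the fix (the actual change adds a LookupTask in RegistryDNS) 
is to bound the remote lookup with a timed {{Future.get}}; a hedged sketch with a 
placeholder resolver, not the actual RegistryDNS code:
{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

final class TimedLookup {
  private static final ExecutorService POOL = Executors.newCachedThreadPool();

  /**
   * Run a remote DNS lookup with an upper bound on how long we wait,
   * so an unresponsive upstream server cannot hang the serving thread.
   */
  static <T> T lookupWithTimeout(Callable<T> lookup, long timeoutMillis)
      throws Exception {
    Future<T> future = POOL.submit(lookup);
    try {
      return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      future.cancel(true);      // give up on the unresponsive upstream
      return null;              // caller treats null as "no answer"
    }
  }
}
{code}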



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-08-01 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8160:

Description: 
Ability to upgrade dockerized  yarn native services.
Ref: YARN-5637

*Background*
Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
NM performs the following steps during {{reInitializeContainer}}:
- kills the existing process
- cleans up the container
- launches another container with the new {{ContainerLaunchContext}}

NOTE: {{ContainerLaunchContext}} holds all the information that needs to 
upgrade the container.

With {{reInitializeContainer}}, the following does *NOT* change
- container ID. This is not created by NM. It is provided to it and here RM is 
not creating another container allocation.
- {{localizedResources}} this stays the same if the upgrade does *NOT* require 
additional resources IIUC.
 
The following changes with {{reInitializeContainer}}
- the working directory of the upgraded container changes. It is *NOT* a 
relaunch. 



  was:
Ability to upgrade dockerized  yarn native services.
Ref: YARN-5637


Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
{{reInitializeContainer}} of the container on the NM performs these steps:
- kill the existing process
- cleanup the container
- launch another container with the new {{ContainerLaunchContext}}

With {{reInitializeContainer}}, the following does *NOT* change
- container ID. This is not created by NM. It is provided to it and here RM is 
not creating another container allocation.
- {{localizedResourcess}} this stays the same if the upgrade does *NOT* require 
additional resources IIUC.
 
The following changes with {{reInitializeContainer}}
- the working directory of the upgraded container changes.




> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
>
> Ability to upgrade dockerized YARN native services.
> Ref: YARN-5637
> *Background*
> Container upgrade is supported by the NM via the {{reInitializeContainer}} API. 
> {{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
> container.
> The NM performs the following steps during {{reInitializeContainer}}:
> - kills the existing process
> - cleans up the container
> - launches another container with the new {{ContainerLaunchContext}}
> NOTE: {{ContainerLaunchContext}} holds all the information needed to 
> upgrade the container.
> With {{reInitializeContainer}}, the following do *NOT* change:
> - the container ID. It is not created by the NM; it is provided to it, and 
> the RM does not create another container allocation.
> - {{localizedResources}}: these stay the same if the upgrade does *NOT* 
> require additional resources, IIUC.
> 
> The following changes with {{reInitializeContainer}}:
> - the working directory of the upgraded container. It is *NOT* a relaunch.
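For illustration, a minimal sketch of how an AM-side client could drive such an upgrade, assuming the {{NMClient#reInitializeContainer(ContainerId, ContainerLaunchContext, boolean)}} client API. Obtaining the running ContainerId and building the upgraded launch context are omitted, and the names are illustrative rather than taken from the attached patches:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.client.api.NMClient;

public class ContainerUpgradeSketch {

  // Re-initialize a running container with an upgraded launch context.
  // The container keeps its ContainerId; only the process and working
  // directory are replaced by the NM.
  static void upgrade(ContainerId containerId,
                      ContainerLaunchContext upgradedContext) throws Exception {
    NMClient nmClient = NMClient.createNMClient();
    nmClient.init(new Configuration());
    nmClient.start();
    try {
      // autoCommit = true: the NM commits the upgrade instead of waiting
      // for an explicit commit/rollback decision.
      nmClient.reInitializeContainer(containerId, upgradedContext, true);
    } finally {
      nmClient.stop();
    }
  }
}
{code}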



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8600) RegistryDNS hang when remote lookup does not reply

2018-08-01 Thread Shane Kumpf (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565792#comment-16565792
 ] 

Shane Kumpf commented on YARN-8600:
---

Thanks for the updated patch [~eyang]! +1 on the 003 patch. I've committed this 
to trunk, branch-3.1.1, and branch-3.1.

> RegistryDNS hang when remote lookup does not reply
> --
>
> Key: YARN-8600
> URL: https://issues.apache.org/jira/browse/YARN-8600
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Critical
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8600.001.patch, YARN-8600.002.patch, 
> YARN-8600.003.patch
>
>
> If the lookup type does not match the record being queried, the remote DNS 
> server might not reply. For example, looking up a CNAME record with a PTR 
> address such as 1.76.27.172.in-addr.arpa can hang RegistryDNS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8600) RegistryDNS hang when remote lookup does not reply

2018-08-01 Thread Shane Kumpf (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-8600:
--
Fix Version/s: 3.1.1
   3.2.0

> RegistryDNS hang when remote lookup does not reply
> --
>
> Key: YARN-8600
> URL: https://issues.apache.org/jira/browse/YARN-8600
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Critical
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8600.001.patch, YARN-8600.002.patch, 
> YARN-8600.003.patch
>
>
> If the lookup type does not match the record being queried, the remote DNS 
> server might not reply. For example, looking up a CNAME record with a PTR 
> address such as 1.76.27.172.in-addr.arpa can hang RegistryDNS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-08-01 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8160:

Description: 
Ability to upgrade dockerized  yarn native services.
Ref: YARN-5637


Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
{{reInitializeContainer}} of the container on the NM performs these steps:
- kill the existing process
- cleanup the container
- launch another container with the new {{ContainerLaunchContext}}

With {{reInitializeContainer}}, the following does *NOT* change
- container ID. This is not created by NM. It is provided to it and here RM is 
not creating another container allocation.
- {{localizedResources}} this stays the same if the upgrade does *NOT* require 
additional resources IIUC.
 
The following changes with {{reInitializeContainer}}
- the working directory of the upgraded container changes.



  was:
Ability to upgrade dockerized  yarn native services.

Ref: YARN-5637

Ref: YARN-5637
Ability to upgrade dockerized  yarn native services.

Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
{{reInitializeContainer}} of the container on the NM performs these steps:
- kill the existing process
- cleanup the container
- launch another container with the new {{ContainerLaunchContext}}

With {{reInitializeContainer}}, the following does *NOT* change
- container ID. This is not created by NM. It is provided to it and here RM is 
not creating another container allocation.
- {{localizedResourcess}} this stays the same if the upgrade does *NOT* require 
additional resources IIUC.
 
The following changes with {{reInitializeContainer}}
- the working directory of the upgraded container changes.




> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637
> Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
> {{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
> container.
> {{reInitializeContainer}} of the container on the NM performs these steps:
> - kill the existing process
> - cleanup the container
> - launch another container with the new {{ContainerLaunchContext}}
> With {{reInitializeContainer}}, the following does *NOT* change
> - container ID. This is not created by NM. It is provided to it and here RM 
> is not creating another container allocation.
> - {{localizedResources}} this stays the same if the upgrade does *NOT* 
> require additional resources IIUC.
>  
> The following changes with {{reInitializeContainer}}
> - the working directory of the upgraded container changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-08-01 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8160:

Description: 
Ability to upgrade dockerized  yarn native services.

Ref: YARN-5637

Ref: YARN-5637
Ability to upgrade dockerized  yarn native services.

Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
{{reInitializeContainer}} of the container on the NM performs these steps:
- kill the existing process
- cleanup the container
- launch another container with the new {{ContainerLaunchContext}}

With {{reInitializeContainer}}, the following does *NOT* change
- container ID. This is not created by NM. It is provided to it and here RM is 
not creating another container allocation.
- {{localizedResources}} this stays the same if the upgrade does *NOT* require 
additional resources IIUC.
 
The following changes with {{reInitializeContainer}}
- the working directory of the upgraded container changes.



  was:
Ability to upgrade dockerized  yarn native services.

Ref: YARN-5637

Ref: YARN-5637
Ability to upgrade dockerized  yarn native services.

Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
{{reInitializeContainer}} of the container on the NM performs these steps:
- kill the existing process
- cleanup the container
- launch another container with the new {{ContainerLaunchContext}}

The Container Id does NOT change because NM doesn't create the ContainerId. It 
is provided to it by the AM and here the RM is not involved in creating a new 
container allocation.

An importan



> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637
> Ref: YARN-5637
> Ability to upgrade dockerized  yarn native services.
> Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
> {{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
> container.
> {{reInitializeContainer}} of the container on the NM performs these steps:
> - kill the existing process
> - cleanup the container
> - launch another container with the new {{ContainerLaunchContext}}
> With {{reInitializeContainer}}, the following does *NOT* change
> - container ID. This is not created by NM. It is provided to it and here RM 
> is not creating another container allocation.
> - {{localizedResources}} this stays the same if the upgrade does *NOT* 
> require additional resources IIUC.
>  
> The following changes with {{reInitializeContainer}}
> - the working directory of the upgraded container changes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8559) Expose mutable-conf scheduler's configuration in RM /scheduler-conf endpoint

2018-08-01 Thread Suma Shivaprasad (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565771#comment-16565771
 ] 

Suma Shivaprasad commented on YARN-8559:


Thanks for updating the patch, [~cheersyang]. Patch 004 LGTM. +1

> Expose mutable-conf scheduler's configuration in RM /scheduler-conf endpoint
> 
>
> Key: YARN-8559
> URL: https://issues.apache.org/jira/browse/YARN-8559
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: Anna Savarin
>Assignee: Weiwei Yang
>Priority: Major
> Attachments: YARN-8559.001.patch, YARN-8559.002.patch, 
> YARN-8559.003.patch, YARN-8559.004.patch
>
>
> All Hadoop services provide a set of common endpoints (/stacks, /logLevel, 
> /metrics, /jmx, /conf).  In the case of the Resource Manager, part of the 
> configuration comes from the scheduler being used.  Currently, these 
> configuration key/values are not exposed through the /conf endpoint, thereby 
> revealing an incomplete configuration picture. 
> Make an improvement and expose the scheduling configuration info through the 
> RM's /conf endpoint.
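Once this is in place, the scheduler configuration can be read with a plain HTTP GET. A small sketch, assuming a ResourceManager web address of localhost:8088 and the /ws/v1/cluster/scheduler-conf path referenced by this issue's title (both are assumptions for illustration):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SchedulerConfFetch {
  public static void main(String[] args) throws Exception {
    // RM address and endpoint path are assumptions for illustration.
    URL url = new URL("http://localhost:8088/ws/v1/cluster/scheduler-conf");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    conn.setRequestProperty("Accept", "application/xml");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // dump the exposed scheduler configuration
      }
    } finally {
      conn.disconnect();
    }
  }
}
{code}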



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-08-01 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8160:

Description: 
Ability to upgrade dockerized  yarn native services.

Ref: YARN-5637

Ref: YARN-5637
Ability to upgrade dockerized  yarn native services.

Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
{{reInitializeContainer}} of the container on the NM performs these steps:
- kill the existing process
- cleanup the container
- launch another container with the new {{ContainerLaunchContext}}

The Container Id does NOT change because NM doesn't create the ContainerId. It 
is provided to it by the AM and here the RM is not involved in creating a new 
container allocation.

An importan


  was:
Ability to upgrade dockerized  yarn native services.

Ref: YARN-5637

Ref: YARN-5637
Ability to upgrade dockerized  yarn native services.

Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
{{reInitializeContainer}} of the container on the NM performs these steps:
- kill the existing process
- cleanup the container
- launch another container with the new {{ContainerLaunchContext}}

The Container Id does NOT change because NM doesn't create the ContainerId. It 
is provided to it by the AM and here the RM is not involved in creating a new 
container allocation.




> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637
> Ref: YARN-5637
> Ability to upgrade dockerized  yarn native services.
> Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
> {{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
> container.
> {{reInitializeContainer}} of the container on the NM performs these steps:
> - kill the existing process
> - cleanup the container
> - launch another container with the new {{ContainerLaunchContext}}
> The Container Id does NOT change because NM doesn't create the ContainerId. 
> It is provided to it by the AM and here the RM is not involved in creating a 
> new container allocation.
> An importan



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8600) RegistryDNS hang when remote lookup does not reply

2018-08-01 Thread Shane Kumpf (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-8600:
--
Target Version/s: 3.2.0, 3.1.1  (was: 3.2.0, 3.1.2)

> RegistryDNS hang when remote lookup does not reply
> --
>
> Key: YARN-8600
> URL: https://issues.apache.org/jira/browse/YARN-8600
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Critical
> Attachments: YARN-8600.001.patch, YARN-8600.002.patch, 
> YARN-8600.003.patch
>
>
> If the lookup type does not match the record being queried, the remote DNS 
> server might not reply. For example, looking up a CNAME record with a PTR 
> address such as 1.76.27.172.in-addr.arpa can hang RegistryDNS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8160) Yarn Service Upgrade: Support upgrade of service that use docker containers

2018-08-01 Thread Chandni Singh (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8160:

Description: 
Ability to upgrade dockerized  yarn native services.

Ref: YARN-5637

Ref: YARN-5637
Ability to upgrade dockerized  yarn native services.

Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
{{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
container.
{{reInitializeContainer}} of the container on the NM performs these steps:
- kill the existing process
- cleanup the container
- launch another container with the new {{ContainerLaunchContext}}

The Container Id does NOT change because NM doesn't create the ContainerId. It 
is provided to it by the AM and here the RM is not involved in creating a new 
container allocation.



  was:
Ability to upgrade dockerized  yarn native services.

Ref: YARN-5637


> Yarn Service Upgrade: Support upgrade of service that use docker containers 
> 
>
> Key: YARN-8160
> URL: https://issues.apache.org/jira/browse/YARN-8160
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>  Labels: Docker
>
> Ability to upgrade dockerized  yarn native services.
> Ref: YARN-5637
> Ref: YARN-5637
> Ability to upgrade dockerized  yarn native services.
> Container upgrade is supported by the NM via {{reInitializeContainer}} api. 
> {{reInitializeContainer}} does *NOT* change the ContainerId of the upgraded 
> container.
> {{reInitializeContainer}} of the container on the NM performs these steps:
> - kill the existing process
> - cleanup the container
> - launch another container with the new {{ContainerLaunchContext}}
> The Container Id does NOT change because NM doesn't create the ContainerId. 
> It is provided to it by the AM and here the RM is not involved in creating a 
> new container allocation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7974) Allow updating application tracking url after registration

2018-08-01 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565689#comment-16565689
 ] 

Jonathan Hung edited comment on YARN-7974 at 8/1/18 5:43 PM:
-

Unit tests timed out with: 
{noformat}
Running unit tests




cd /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
/opt/maven/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-0 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.fuse -Drequire.openssl 
-Drequire.snappy -Drequire.valgrind -Drequire.test.libhadoop -Pyarn-ui clean 
test -fae > 
/testptch/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 2>&1
Elapsed:   0m 45s
cd /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
/opt/maven/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-0 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.fuse -Drequire.openssl 
-Drequire.snappy -Drequire.valgrind -Drequire.test.libhadoop -Pyarn-ui clean 
test -fae > 
/testptch/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 2>&1
Elapsed:   3m 34s
cd 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
/opt/maven/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-0 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.fuse -Drequire.openssl 
-Drequire.snappy -Drequire.valgrind -Drequire.test.libhadoop -Pyarn-ui clean 
test -fae > 
/testptch/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 2>&1
Elapsed:  68m 16s
cd /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client
/opt/maven/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-0 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.fuse -Drequire.openssl 
-Drequire.snappy -Drequire.valgrind -Drequire.test.libhadoop -Pyarn-ui clean 
test -fae > 
/testptch/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 2>&1
Build timed out (after 500 minutes). Marking the build as aborted.
Build was aborted
Performing Post build task...
Match found for :. : True
Logical operation result is TRUE
Running script  : #!/bin/bash

# See HADOOP-13951
find "${WORKSPACE}" -name target | xargs chmod -R u+w
ERROR: Caught signal. Killing docker container:
[PreCommit-YARN-Build] $ /bin/bash /tmp/jenkins8430165585137540710.sh
a8f0f008777cf712b58b1485c7d93d2fef1d22d6a3f2a572c4a6469f3941f83f
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Finished: ABORTED{noformat}
https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-YARN-Build/21456/consoleFull
Reattaching patch.


was (Author: jhung):
Unit tests timed out with: 
{noformat}
Running unit tests




cd /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
/opt/maven/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-0 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.fuse -Drequire.openssl 
-Drequire.snappy -Drequire.valgrind -Drequire.test.libhadoop -Pyarn-ui clean 
test -fae > 
/testptch/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 2>&1
Elapsed:   0m 45s
cd /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
/opt/maven/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-0 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.fuse -Drequire.openssl 
-Drequire.snappy -Drequire.valgrind -Drequire.test.libhadoop -Pyarn-ui clean 
test -fae > 
/testptch/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 2>&1
Elapsed:   3m 34s
cd 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
/opt/maven/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-0 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.fuse -Drequire.openssl 
-Drequire.snappy -Drequire.valgrind -Drequire.test.libhadoop -Pyarn-ui clean 
test -fae > 

[jira] [Updated] (YARN-7974) Allow updating application tracking url after registration

2018-08-01 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-7974:

Attachment: YARN-7974-branch-2.001.patch

> Allow updating application tracking url after registration
> --
>
> Key: YARN-7974
> URL: https://issues.apache.org/jira/browse/YARN-7974
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-7974-branch-2.001.patch, YARN-7974.001.patch, 
> YARN-7974.002.patch, YARN-7974.003.patch, YARN-7974.004.patch, 
> YARN-7974.005.patch, YARN-7974.006.patch
>
>
> Normally an application's tracking URL is set on AM registration. We have a 
> use case for updating the tracking URL after registration (e.g. the UI is 
> hosted on one of the containers).
> The approach is for the AM to update the tracking URL on heartbeat to the RM, 
> and to add a related API in AMRMClient.
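A minimal sketch of how an AM might use the proposed client API, assuming the new AMRMClient method is named updateTrackingUrl; the exact name and signature come from the attached patches, so treat this as illustrative only:

{code}
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class TrackingUrlUpdateSketch {

  // Once the container hosting the UI is known, point the RM at it.
  // The updated URL reaches the RM on the next allocate heartbeat.
  static void pointTrackingUrlAtContainer(
      AMRMClient<ContainerRequest> amRmClient, String host, int port)
      throws Exception {
    String newUrl = "http://" + host + ":" + port; // illustrative URL
    amRmClient.updateTrackingUrl(newUrl);          // assumed new API from this JIRA
    amRmClient.allocate(0.0f);                     // heartbeat carries the update
  }
}
{code}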



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7974) Allow updating application tracking url after registration

2018-08-01 Thread Jonathan Hung (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565689#comment-16565689
 ] 

Jonathan Hung commented on YARN-7974:
-

Unit tests timed out with: 
{noformat}
Running unit tests




cd /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api
/opt/maven/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-0 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.fuse -Drequire.openssl 
-Drequire.snappy -Drequire.valgrind -Drequire.test.libhadoop -Pyarn-ui clean 
test -fae > 
/testptch/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 2>&1
Elapsed:   0m 45s
cd /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common
/opt/maven/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-0 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.fuse -Drequire.openssl 
-Drequire.snappy -Drequire.valgrind -Drequire.test.libhadoop -Pyarn-ui clean 
test -fae > 
/testptch/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 2>&1
Elapsed:   3m 34s
cd 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
/opt/maven/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-0 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.fuse -Drequire.openssl 
-Drequire.snappy -Drequire.valgrind -Drequire.test.libhadoop -Pyarn-ui clean 
test -fae > 
/testptch/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 2>&1
Elapsed:  68m 16s
cd /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client
/opt/maven/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/yetus-m2/hadoop-branch-2-patch-0 -Ptest-patch 
-Pparallel-tests -P!shelltest -Pnative -Drequire.fuse -Drequire.openssl 
-Drequire.snappy -Drequire.valgrind -Drequire.test.libhadoop -Pyarn-ui clean 
test -fae > 
/testptch/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt
 2>&1
Build timed out (after 500 minutes). Marking the build as aborted.
Build was aborted
Performing Post build task...
Match found for :. : True
Logical operation result is TRUE
Running script  : #!/bin/bash

# See HADOOP-13951
find "${WORKSPACE}" -name target | xargs chmod -R u+w
ERROR: Caught signal. Killing docker container:
[PreCommit-YARN-Build] $ /bin/bash /tmp/jenkins8430165585137540710.sh
a8f0f008777cf712b58b1485c7d93d2fef1d22d6a3f2a572c4a6469f3941f83f
POST BUILD TASK : SUCCESS
END OF POST BUILD TASK : 0
Archiving artifacts
[description-setter] Could not determine description.
Recording test results
Finished: ABORTED{noformat}

Reattaching patch.

> Allow updating application tracking url after registration
> --
>
> Key: YARN-7974
> URL: https://issues.apache.org/jira/browse/YARN-7974
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-7974-branch-2.001.patch, YARN-7974.001.patch, 
> YARN-7974.002.patch, YARN-7974.003.patch, YARN-7974.004.patch, 
> YARN-7974.005.patch, YARN-7974.006.patch
>
>
> Normally an application's tracking URL is set on AM registration. We have a 
> use case for updating the tracking URL after registration (e.g. the UI is 
> hosted on one of the containers).
> The approach is for the AM to update the tracking URL on heartbeat to the RM, 
> and to add a related API in AMRMClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7974) Allow updating application tracking url after registration

2018-08-01 Thread Jonathan Hung (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated YARN-7974:

Attachment: (was: YARN-7974-branch-2.001.patch)

> Allow updating application tracking url after registration
> --
>
> Key: YARN-7974
> URL: https://issues.apache.org/jira/browse/YARN-7974
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Fix For: 3.2.0, 3.0.4, 3.1.2
>
> Attachments: YARN-7974-branch-2.001.patch, YARN-7974.001.patch, 
> YARN-7974.002.patch, YARN-7974.003.patch, YARN-7974.004.patch, 
> YARN-7974.005.patch, YARN-7974.006.patch
>
>
> Normally an application's tracking URL is set on AM registration. We have a 
> use case for updating the tracking URL after registration (e.g. the UI is 
> hosted on one of the containers).
> The approach is for the AM to update the tracking URL on heartbeat to the RM, 
> and to add a related API in AMRMClient.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8263) DockerClient still touches hadoop.tmp.dir

2018-08-01 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565661#comment-16565661
 ] 

genericqa commented on YARN-8263:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 18m 
20s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8263 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933936/YARN-8263.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux aa4950975d70 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 67c65da |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21473/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21473/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> DockerClient still touches hadoop.tmp.dir
> 

[jira] [Commented] (YARN-8600) RegistryDNS hang when remote lookup does not reply

2018-08-01 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565632#comment-16565632
 ] 

genericqa commented on YARN-8600:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 50s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
53s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8600 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933935/YARN-8600.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 20ae23f61b9e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 67c65da |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21472/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21472/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> RegistryDNS hang when remote lookup does not reply
> --
>
> Key: 

[jira] [Commented] (YARN-8155) Improve ATSv2 client logging in RM and NM publisher

2018-08-01 Thread Rohith Sharma K S (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565627#comment-16565627
 ] 

Rohith Sharma K S commented on YARN-8155:
-

+1 lgtm

> Improve ATSv2 client logging in RM and NM publisher
> ---
>
> Key: YARN-8155
> URL: https://issues.apache.org/jira/browse/YARN-8155
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Rohith Sharma K S
>Assignee: Abhishek Modi
>Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1, 3.0.4
>
> Attachments: YARN-8155-branch-2.002.patch, 
> YARN-8155-branch-2.v1.patch, YARN-8155-branch-2.v3.patch, 
> YARN-8155.001.patch, YARN-8155.002.patch, YARN-8155.003.patch, 
> YARN-8155.004.patch, YARN-8155.005.patch, YARN-8155.006.patch
>
>
> We see that NM logs are filled with large stack traces of NotFoundException 
> if the collector is removed from one of the NMs while other NMs are still 
> publishing entities.
>  
> This Jira is to improve the logging in the NM so that we log an informative 
> message instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8403) Nodemanager logs failed to download file with INFO level

2018-08-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565587#comment-16565587
 ] 

Eric Yang commented on YARN-8403:
-

Thank you [~billie.rinaldi] for the review and commit.

> Nodemanager logs failed to download file with INFO level
> 
>
> Key: YARN-8403
> URL: https://issues.apache.org/jira/browse/YARN-8403
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Fix For: 3.2.0, 3.1.2
>
> Attachments: YARN-8403.001.patch, YARN-8403.002.patch, 
> YARN-8403.003.patch, YARN-8403.png
>
>
> Some of the container execution related stack traces are printed at INFO or 
> WARN level. 
> {code}
> 2018-06-06 03:10:40,077 INFO  localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:writeCredentials(1312)) - Writing 
> credentials to the nmPrivate file 
> /grid/0/hadoop/yarn/local/nmPrivate/container_e02_1528246317583_0048_01_01.tokens
> 2018-06-06 03:10:40,087 INFO  localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:run(975)) - Failed to download resource { { 
> hdfs://mycluster.example.com:8020/user/hrt_qa/Streaming/InputDir, 
> 1528254452720, FILE, null 
> },pending,[(container_e02_1528246317583_0048_01_01)],6074418082915225,DOWNLOADING}
> org.apache.hadoop.yarn.exceptions.YarnException: Download and unpack failed
> at 
> org.apache.hadoop.yarn.util.FSDownload.downloadAndUnpack(FSDownload.java:306)
> at 
> org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:283)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:409)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:66)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.FileNotFoundException: 
> /grid/0/hadoop/yarn/local/filecache/28_tmp/InputDir/input1.txt (Permission 
> denied)
> at java.io.FileOutputStream.open0(Native Method)
> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> at java.io.FileOutputStream.(FileOutputStream.java:213)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.(RawLocalFileSystem.java:236)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.(RawLocalFileSystem.java:219)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:318)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:307)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:338)
> at 
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.(ChecksumFileSystem.java:401)
> at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:464)
> at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:443)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1038)
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:408)
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:399)
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:381)
> at 
> org.apache.hadoop.yarn.util.FSDownload.downloadAndUnpack(FSDownload.java:298)
> ... 9 more
> {code}
> {code}
> 2018-06-06 03:10:41,547 WARN  privileged.PrivilegedOperationExecutor 
> (PrivilegedOperationExecutor.java:executePrivilegedOperation(182)) - 
> IOException executing command:
> java.io.InterruptedIOException: java.lang.InterruptedException
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1012)
> at org.apache.hadoop.util.Shell.run(Shell.java:902)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:152)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:402)
> at 
> 

[jira] [Commented] (YARN-7948) Enable refreshing maximum allocation for multiple resource types

2018-08-01 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565579#comment-16565579
 ] 

Haibo Chen commented on YARN-7948:
--

There are a couple of newly added import statements in TestFairScheduler. 
Because there is no code change, those import statements are unused and 
therefore can be removed.
{quote}Default resources (cpu, memory) are already covered in my testcase in 
patch002 so I haven't modified this part.
{quote}
Indeed. My apologies for missing that.

While not a strict requirement, given that the vast majority of the code base 
uses 4 spaces as line continuation, it'd be nice to keep the style consistent.

Otherwise, the patch looks fine to me.

> Enable refreshing maximum allocation for multiple resource types
> 
>
> Key: YARN-7948
> URL: https://issues.apache.org/jira/browse/YARN-7948
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.0.0
>Reporter: Yufei Gu
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-7948.001.patch, YARN-7948.002.patch, 
> YARN-7948.003.patch
>
>
> YARN-7738 did the same thing for CS. We need a fix for FS. We could fix it by 
> moving the refresh code from class CS to class AbstractYARNScheduler. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8403) Nodemanager logs failed to download file with INFO level

2018-08-01 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565573#comment-16565573
 ] 

Hudson commented on YARN-8403:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14687 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14687/])
YARN-8403. Change the log level for fail to download resource from INFO 
(billie: rev 67c65da261464a0dccb63dc27668109a52e05714)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
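The change itself is a log-level adjustment in ResourceLocalizationService. As a rough illustration of the pattern (not the actual committed code), assuming SLF4J-style logging: keep the operator-facing message short at WARN and push the full stack trace down to DEBUG so routine localization failures do not fill the NM log with multi-screen traces.

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LocalizationLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(LocalizationLoggingSketch.class);

  // Illustrative only: summarize the failure at WARN, keep the stack trace
  // at DEBUG for deeper troubleshooting.
  static void logDownloadFailure(String resource, Exception e) {
    LOG.warn("Failed to download resource {}: {}", resource, e.toString());
    LOG.debug("Full stack trace for failed download of {}", resource, e);
  }
}
{code}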


> Nodemanager logs failed to download file with INFO level
> 
>
> Key: YARN-8403
> URL: https://issues.apache.org/jira/browse/YARN-8403
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-8403.001.patch, YARN-8403.002.patch, 
> YARN-8403.003.patch, YARN-8403.png
>
>
> Some of the container execution related stack traces are printed at INFO or 
> WARN level. 
> {code}
> 2018-06-06 03:10:40,077 INFO  localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:writeCredentials(1312)) - Writing 
> credentials to the nmPrivate file 
> /grid/0/hadoop/yarn/local/nmPrivate/container_e02_1528246317583_0048_01_01.tokens
> 2018-06-06 03:10:40,087 INFO  localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:run(975)) - Failed to download resource { { 
> hdfs://mycluster.example.com:8020/user/hrt_qa/Streaming/InputDir, 
> 1528254452720, FILE, null 
> },pending,[(container_e02_1528246317583_0048_01_01)],6074418082915225,DOWNLOADING}
> org.apache.hadoop.yarn.exceptions.YarnException: Download and unpack failed
> at 
> org.apache.hadoop.yarn.util.FSDownload.downloadAndUnpack(FSDownload.java:306)
> at 
> org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:283)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:409)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:66)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.FileNotFoundException: 
> /grid/0/hadoop/yarn/local/filecache/28_tmp/InputDir/input1.txt (Permission 
> denied)
> at java.io.FileOutputStream.open0(Native Method)
> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> at java.io.FileOutputStream.(FileOutputStream.java:213)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.(RawLocalFileSystem.java:236)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.(RawLocalFileSystem.java:219)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:318)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:307)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:338)
> at 
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.(ChecksumFileSystem.java:401)
> at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:464)
> at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:443)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1038)
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:408)
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:399)
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:381)
> at 
> org.apache.hadoop.yarn.util.FSDownload.downloadAndUnpack(FSDownload.java:298)
> ... 9 more
> {code}
> {code}
> 2018-06-06 03:10:41,547 WARN  privileged.PrivilegedOperationExecutor 
> (PrivilegedOperationExecutor.java:executePrivilegedOperation(182)) - 
> IOException executing command:
> java.io.InterruptedIOException: java.lang.InterruptedException
> at 

[jira] [Commented] (YARN-8263) DockerClient still touches hadoop.tmp.dir

2018-08-01 Thread Craig Condit (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565547#comment-16565547
 ] 

Craig Condit commented on YARN-8263:


Uploaded patch 002. The 'conf' argument is removed from 
DockerCommand#preparePrivilegedOperation and 
DockerCommandExecutor#executeDockerCommand. I think I've found and removed all 
the transitive references. The patch still applies to both trunk and branch-3.1.

> DockerClient still touches hadoop.tmp.dir
> -
>
> Key: YARN-8263
> URL: https://issues.apache.org/jira/browse/YARN-8263
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: Jason Lowe
>Assignee: Craig Condit
>Priority: Minor
>  Labels: Docker
> Attachments: YARN-8263.001.patch, YARN-8263.002.patch
>
>
> The DockerClient constructor fails if hadoop.tmp.dir is not set and proceeds 
> to create a directory there.  After YARN-8064 there's no longer a need to 
> touch the temporary directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8468) Limit container sizes per queue in FairScheduler

2018-08-01 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565544#comment-16565544
 ] 

Haibo Chen commented on YARN-8468:
--

Sorry, I meant 4 spaces for line continuation. My bad.

The reason I think we should rename max container resources (as field or 
method names) to max container allocation is to stay consistent with the 
naming of the scheduler-level configuration, maximum-allocation-mb and 
maximum-allocation-vcores. The maxContainerResources here refers to the max 
allocation size of any single container, whereas minResources and 
maxResources are the min/max amount of resources that can be allocated to a 
queue. They are not similar in semantics. Hope that helps.

> Limit container sizes per queue in FairScheduler
> 
>
> Key: YARN-8468
> URL: https://issues.apache.org/jira/browse/YARN-8468
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Critical
> Attachments: YARN-8468.000.patch, YARN-8468.001.patch, 
> YARN-8468.002.patch, YARN-8468.003.patch
>
>
> When using any scheduler, you can use "yarn.scheduler.maximum-allocation-mb" 
> to limit the overall size of a container. This applies globally to all 
> containers, cannot be limited per queue, and is not scheduler dependent.
>  
> The goal of this ticket is to allow this value to be set on a per queue basis.
>  
> The use case: a user has two pools, one for ad hoc jobs and one for enterprise 
> apps, and wants to limit ad hoc jobs to small containers but allow enterprise 
> apps to request as many resources as needed. Setting 
> yarn.scheduler.maximum-allocation-mb provides a default maximum container size 
> for all queues; the per-queue maximum is set with the "maxContainerResources" 
> queue config value.
>  
> Suggested solution:
>  
> All the infrastructure is already in the code. We need to do the following:
>  * add the setting to the queue properties for all queue types (parent and 
> leaf), this will cover dynamically created queues.
>  * if we set it on the root we override the scheduler setting and we should 
> not allow that.
>  * make sure that queue resource cap can not be larger than scheduler max 
> resource cap in the config.
>  * implement getMaximumResourceCapability(String queueName) in the 
> FairScheduler
>  * implement getMaximumResourceCapability() in both FSParentQueue and 
> FSLeafQueue as follows
>  * expose the setting in the queue information in the RM web UI.
>  * expose the setting in the metrics etc for the queue.
>  * write JUnit tests.
>  * update the scheduler documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8263) DockerClient still touches hadoop.tmp.dir

2018-08-01 Thread Craig Condit (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8263?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Craig Condit updated YARN-8263:
---
Attachment: YARN-8263.002.patch

> DockerClient still touches hadoop.tmp.dir
> -
>
> Key: YARN-8263
> URL: https://issues.apache.org/jira/browse/YARN-8263
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: Jason Lowe
>Assignee: Craig Condit
>Priority: Minor
>  Labels: Docker
> Attachments: YARN-8263.001.patch, YARN-8263.002.patch
>
>
> The DockerClient constructor fails if hadoop.tmp.dir is not set and proceeds 
> to create a directory there.  After YARN-8064 there's no longer a need to 
> touch the temporary directory.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8600) RegistryDNS hang when remote lookup does not reply

2018-08-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565536#comment-16565536
 ] 

Eric Yang commented on YARN-8600:
-

Patch 003 updated the test case timeout and improved logging for timed-out queries.

> RegistryDNS hang when remote lookup does not reply
> --
>
> Key: YARN-8600
> URL: https://issues.apache.org/jira/browse/YARN-8600
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Critical
> Attachments: YARN-8600.001.patch, YARN-8600.002.patch, 
> YARN-8600.003.patch
>
>
> If the lookup type does not match the record being queried, the remote DNS 
> server might not reply. For example, looking up a CNAME record for a PTR 
> address: 1.76.27.172.in-addr.arpa. This can hang RegistryDNS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8600) RegistryDNS hang when remote lookup does not reply

2018-08-01 Thread Eric Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8600:

Attachment: YARN-8600.003.patch

> RegistryDNS hang when remote lookup does not reply
> --
>
> Key: YARN-8600
> URL: https://issues.apache.org/jira/browse/YARN-8600
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Critical
> Attachments: YARN-8600.001.patch, YARN-8600.002.patch, 
> YARN-8600.003.patch
>
>
> If the lookup type does not match the record being queried, the remote DNS 
> server might not reply. For example, looking up a CNAME record for a PTR 
> address: 1.76.27.172.in-addr.arpa. This can hang RegistryDNS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8403) Nodemanager logs failed to download file with INFO level

2018-08-01 Thread Billie Rinaldi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565524#comment-16565524
 ] 

Billie Rinaldi commented on YARN-8403:
--

+1 for patch 3. Thanks for the patch, [~eyang]!

> Nodemanager logs failed to download file with INFO level
> 
>
> Key: YARN-8403
> URL: https://issues.apache.org/jira/browse/YARN-8403
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Major
> Attachments: YARN-8403.001.patch, YARN-8403.002.patch, 
> YARN-8403.003.patch, YARN-8403.png
>
>
> Some of the container execution related stack traces are printed at INFO or 
> WARN level. 
> {code}
> 2018-06-06 03:10:40,077 INFO  localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:writeCredentials(1312)) - Writing 
> credentials to the nmPrivate file 
> /grid/0/hadoop/yarn/local/nmPrivate/container_e02_1528246317583_0048_01_01.tokens
> 2018-06-06 03:10:40,087 INFO  localizer.ResourceLocalizationService 
> (ResourceLocalizationService.java:run(975)) - Failed to download resource { { 
> hdfs://mycluster.example.com:8020/user/hrt_qa/Streaming/InputDir, 
> 1528254452720, FILE, null 
> },pending,[(container_e02_1528246317583_0048_01_01)],6074418082915225,DOWNLOADING}
> org.apache.hadoop.yarn.exceptions.YarnException: Download and unpack failed
> at 
> org.apache.hadoop.yarn.util.FSDownload.downloadAndUnpack(FSDownload.java:306)
> at 
> org.apache.hadoop.yarn.util.FSDownload.verifyAndCopy(FSDownload.java:283)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:409)
> at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:66)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.FileNotFoundException: 
> /grid/0/hadoop/yarn/local/filecache/28_tmp/InputDir/input1.txt (Permission 
> denied)
> at java.io.FileOutputStream.open0(Native Method)
> at java.io.FileOutputStream.open(FileOutputStream.java:270)
> at java.io.FileOutputStream.(FileOutputStream.java:213)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.(RawLocalFileSystem.java:236)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem$LocalFSFileOutputStream.(RawLocalFileSystem.java:219)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.createOutputStreamWithMode(RawLocalFileSystem.java:318)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:307)
> at 
> org.apache.hadoop.fs.RawLocalFileSystem.create(RawLocalFileSystem.java:338)
> at 
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSOutputSummer.(ChecksumFileSystem.java:401)
> at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:464)
> at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:443)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1038)
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:408)
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:399)
> at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:381)
> at 
> org.apache.hadoop.yarn.util.FSDownload.downloadAndUnpack(FSDownload.java:298)
> ... 9 more
> {code}
> {code}
> 2018-06-06 03:10:41,547 WARN  privileged.PrivilegedOperationExecutor 
> (PrivilegedOperationExecutor.java:executePrivilegedOperation(182)) - 
> IOException executing command:
> java.io.InterruptedIOException: java.lang.InterruptedException
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1012)
> at org.apache.hadoop.util.Shell.run(Shell.java:902)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:152)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.startLocalizer(LinuxContainerExecutor.java:402)
> at 
> 

[jira] [Commented] (YARN-8600) RegistryDNS hang when remote lookup does not reply

2018-08-01 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565518#comment-16565518
 ] 

Eric Yang commented on YARN-8600:
-

[~shaneku...@gmail.com] Good suggestions.  For the timeout increase, I will make it 
5 seconds instead of the current 1.5 seconds.  I will make the changes accordingly 
and upload patch 003 in a few minutes.
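
For reference, the behavior being targeted is a bounded wait on the remote 
lookup. A minimal standalone sketch with plain JDK sockets follows (illustrative 
only; the host, port and the 5-second value are assumptions, and RegistryDNS 
itself uses its own resolver classes). The point is that a server that never 
answers should raise a timeout instead of hanging the lookup thread.

{code}
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;

public class DnsTimeoutSketch {
  public static void main(String[] args) throws Exception {
    byte[] query = new byte[12]; // placeholder bytes; a real DNS query would be encoded here
    try (DatagramSocket socket = new DatagramSocket()) {
      socket.setSoTimeout(5000); // cap the wait at 5 seconds instead of blocking forever
      socket.send(new DatagramPacket(query, query.length,
          InetAddress.getByName("192.0.2.1"), 53)); // TEST-NET-1 address, never answers
      byte[] buf = new byte[512];
      socket.receive(new DatagramPacket(buf, buf.length)); // would hang without the timeout
      System.out.println("got a reply");
    } catch (SocketTimeoutException e) {
      System.out.println("lookup timed out after 5s instead of hanging the thread");
    }
  }
}
{code}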

> RegistryDNS hang when remote lookup does not reply
> --
>
> Key: YARN-8600
> URL: https://issues.apache.org/jira/browse/YARN-8600
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Critical
> Attachments: YARN-8600.001.patch, YARN-8600.002.patch
>
>
> If the lookup type does not match the record being queried, the remote DNS 
> server might not reply. For example, looking up a CNAME record for a PTR 
> address: 1.76.27.172.in-addr.arpa. This can hang RegistryDNS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8468) Limit container sizes per queue in FairScheduler

2018-08-01 Thread JIRA


[ 
https://issues.apache.org/jira/browse/YARN-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565514#comment-16565514
 ] 

Antal Bálint Steinbach commented on YARN-8468:
--

Hi [~haibochen]!

Thanks for reviewing. I will upload a patch soon; before that, I have some 
questions:
1) I added "ContainerMaxAllocationCalculator" and removed the validation/exception 
throw.
2a) I am not sure what to check or what is expected here. [~wilfreds], can you 
please help me figure this out? I will write on hangouts/chat.
2b) Removed from Metrics.
3) In the 
[doc|http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/FairScheduler.html#Configuration]
 and in the code, _Resource_ is used for other queue parameters like 
minResources, maxResources, and maxChildResources, so I followed this in the 
method names. Can you please clarify what I should rename?

4) I see 2-space indentation everywhere in the code, and team members told me 2 
spaces is the convention. I am using _hadoop-format.xml_ as the formatter 
template. I am confused.

> Limit container sizes per queue in FairScheduler
> 
>
> Key: YARN-8468
> URL: https://issues.apache.org/jira/browse/YARN-8468
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Antal Bálint Steinbach
>Assignee: Antal Bálint Steinbach
>Priority: Critical
> Attachments: YARN-8468.000.patch, YARN-8468.001.patch, 
> YARN-8468.002.patch, YARN-8468.003.patch
>
>
> When using any scheduler, you can use "yarn.scheduler.maximum-allocation-mb" 
> to limit the overall size of a container. This applies globally to all 
> containers, cannot be limited per queue, and is not scheduler dependent.
>  
> The goal of this ticket is to allow this value to be set on a per-queue basis.
>  
> The use case: a user has two pools, one for ad hoc jobs and one for enterprise 
> apps. The user wants to limit ad hoc jobs to small containers but allow 
> enterprise apps to request as many resources as needed. 
> yarn.scheduler.maximum-allocation-mb would then act as the default maximum 
> container size for all queues, while the per-queue maximum would be set with 
> the "maxContainerResources" queue config value.
>  
> Suggested solution:
>  
> All the infrastructure is already in the code. We need to do the following:
>  * add the setting to the queue properties for all queue types (parent and 
> leaf), this will cover dynamically created queues.
>  * if we set it on the root, we override the scheduler setting, and we should 
> not allow that.
>  * make sure that the queue resource cap cannot be larger than the scheduler 
> max resource cap in the config.
>  * implement getMaximumResourceCapability(String queueName) in the 
> FairScheduler
>  * implement getMaximumResourceCapability() in both FSParentQueue and 
> FSLeafQueue
>  * expose the setting in the queue information in the RM web UI.
>  * expose the setting in the metrics etc for the queue.
>  * write JUnit tests.
>  * update the scheduler documentation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8609) NM oom because of large container statuses

2018-08-01 Thread Jason Lowe (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565511#comment-16565511
 ] 

Jason Lowe commented on YARN-8609:
--

Thanks for the report and patch!

IMHO any truncation should not be tied to recovery, as the NM could OOM just 
tracking container diagnostics.  Recovery involves reloading what was already 
in memory before the crash/restart.  If the diagnostics of a container were 27M 
in the recovery file, then they were 27M in the NM heap before it recovered as 
well.

Recovery does take more memory than normal operation, and the work in YARN-8242 
will help reduce that load.  Rather than forcing a draconian truncation (27M to 
5000 bytes is rather extreme), this should be a configurable setting applied 
when diagnostics are added to a container rather than upon recovery.  See 
ContainerImpl#addDiagnostics.  Otherwise reported container statuses will 
suddenly change when the NM restarts, and that is counter to the goals of the 
NM recovery feature.
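
A minimal standalone sketch of that direction (illustrative only, not the actual 
ContainerImpl code; the class name, limit value and the "keep the most recent 
output" policy are assumptions):

{code}
public class DiagnosticsLimitSketch {
  private final StringBuilder diagnostics = new StringBuilder();
  private final int maxDiagnosticsChars; // would come from a configurable NM setting

  DiagnosticsLimitSketch(int maxDiagnosticsChars) {
    this.maxDiagnosticsChars = maxDiagnosticsChars;
  }

  // Truncate at append time so the in-memory state, the recovery store and the
  // reported container status all stay consistent and bounded.
  synchronized void addDiagnostics(String msg) {
    diagnostics.append(msg);
    int overflow = diagnostics.length() - maxDiagnosticsChars;
    if (overflow > 0) {
      diagnostics.delete(0, overflow); // keep the most recent output
    }
  }

  public static void main(String[] args) {
    DiagnosticsLimitSketch d = new DiagnosticsLimitSketch(16);
    d.addDiagnostics("very long container diagnostics output");
    System.out.println(d.diagnostics); // prints the last 16 characters only
  }
}
{code}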


> NM oom because of large container statuses
> --
>
> Key: YARN-8609
> URL: https://issues.apache.org/jira/browse/YARN-8609
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Xianghao Lu
>Priority: Major
> Attachments: YARN-8609.001.patch, contain_status.jpg, oom.jpeg
>
>
> Sometimes, NodeManager will send large container statuses to the 
> ResourceManager when it starts with recovery; as a result, NodeManager will 
> fail to start because of OOM.
>  In my case, the large container statuses size is 135M, containing 11 
> container statuses, and I found that the diagnostics of 5 containers are very 
> large (27M), so I truncate the container diagnostics as in the patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8594) [UI2] Show the current logged in user in UI2

2018-08-01 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565308#comment-16565308
 ] 

genericqa commented on YARN-8594:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
38m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8594 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933908/YARN-8594.003.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux 3523f4b5f879 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d920b9d |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 334 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21471/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [UI2] Show the current logged in user in UI2
> 
>
> Key: YARN-8594
> URL: https://issues.apache.org/jira/browse/YARN-8594
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-8594.001.patch, YARN-8594.002.patch, 
> YARN-8594.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8593) Add new RM web service endpoint to get cluster user info

2018-08-01 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565283#comment-16565283
 ] 

genericqa commented on YARN-8593:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 30s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 72m  
6s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
20s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}137m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-8593 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933899/YARN-8593.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 243827bfd550 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d920b9d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21467/testReport/ |
| Max. process+thread count | 889 (vs. ulimit of 1) |
| 

[jira] [Commented] (YARN-8600) RegistryDNS hang when remote lookup does not reply

2018-08-01 Thread Shane Kumpf (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565277#comment-16565277
 ] 

Shane Kumpf commented on YARN-8600:
---

Thanks for the patch, [~eyang]. I had some trouble finding an example lookup to 
test, but I was able to find a repro. I can confirm this resolves the issue.

Two minor nits:
* The timeout on the test should be increased as the thread may not have timed 
out by the timeout trigger, causing the test to fail.
* When the timeout exception occurs, I would like to see the query type in the 
exception message to help figure out the exact query that hung.

I'm +1 on this patch with those changes. I can make these minor changes at 
commit time if you'd like [~eyang]. I'll commit this with those changes later 
today unless there are other concerns.

> RegistryDNS hang when remote lookup does not reply
> --
>
> Key: YARN-8600
> URL: https://issues.apache.org/jira/browse/YARN-8600
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Critical
> Attachments: YARN-8600.001.patch, YARN-8600.002.patch
>
>
> If the lookup type does not match the record being queried, the remote DNS 
> server might not reply. For example, looking up a CNAME record for a PTR 
> address: 1.76.27.172.in-addr.arpa. This can hang RegistryDNS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3841) [Storage implementation] Adding retry semantics to HDFS backing storage

2018-08-01 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565274#comment-16565274
 ] 

genericqa commented on YARN-3841:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 26s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-3841 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933904/YARN-3841.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 404e2c44221e 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d920b9d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21470/testReport/ |
| Max. process+thread count | 337 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21470/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [Storage implementation] Adding retry semantics 

[jira] [Commented] (YARN-8594) [UI2] Show the current logged in user in UI2

2018-08-01 Thread Akhil PB (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565211#comment-16565211
 ] 

Akhil PB commented on YARN-8594:


updated rebased v3 patch. cc [~sunilg]

> [UI2] Show the current logged in user in UI2
> 
>
> Key: YARN-8594
> URL: https://issues.apache.org/jira/browse/YARN-8594
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-8594.001.patch, YARN-8594.002.patch, 
> YARN-8594.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-08-01 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565212#comment-16565212
 ] 

genericqa commented on YARN-3879:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 25s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-3879 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933901/YARN-3879.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bf490060f2f2 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 
08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / d920b9d |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21469/testReport/ |
| Max. process+thread count | 301 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21469/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT 

[jira] [Updated] (YARN-8594) [UI2] Show the current logged in user in UI2

2018-08-01 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-8594:
---
Attachment: YARN-8594.003.patch

> [UI2] Show the current logged in user in UI2
> 
>
> Key: YARN-8594
> URL: https://issues.apache.org/jira/browse/YARN-8594
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-8594.001.patch, YARN-8594.002.patch, 
> YARN-8594.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8593) Add new RM web service endpoint to get cluster user info

2018-08-01 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565207#comment-16565207
 ] 

Sunil Govindan commented on YARN-8593:
--

Looks good. Pending Jenkins.

> Add new RM web service endpoint to get cluster user info
> 
>
> Key: YARN-8593
> URL: https://issues.apache.org/jira/browse/YARN-8593
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-8593.001.patch, YARN-8593.002.patch, 
> YARN-8593.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8594) [UI2] Show the current logged in user in UI2

2018-08-01 Thread Sunil Govindan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565205#comment-16565205
 ] 

Sunil Govindan commented on YARN-8594:
--

[~akhilpb] please help to rebase the patch against trunk.

> [UI2] Show the current logged in user in UI2
> 
>
> Key: YARN-8594
> URL: https://issues.apache.org/jira/browse/YARN-8594
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-8594.001.patch, YARN-8594.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8609) NM oom because of large container statuses

2018-08-01 Thread Xianghao Lu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xianghao Lu updated YARN-8609:
--
Description: 
Sometimes, NodeManager will send large container statuses to the ResourceManager 
when it starts with recovery; as a result, NodeManager will fail to start 
because of OOM.
 In my case, the large container statuses size is 135M, containing 11 container 
statuses, and I found that the diagnostics of 5 containers are very large (27M), 
so I truncate the container diagnostics as in the patch.

  was:
Sometimes, NodeManager will send large container statuses to the ResourceManager 
when recovering; as a result, NodeManager will fail to start because of OOM.
In my case, the large container statuses size is 135M, containing 11 container 
statuses, and I found that the diagnostics of 5 containers are very large (27M), 
so I truncate the container diagnostics as in the patch.


> NM oom because of large container statuses
> --
>
> Key: YARN-8609
> URL: https://issues.apache.org/jira/browse/YARN-8609
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Xianghao Lu
>Priority: Major
> Attachments: YARN-8609.001.patch, contain_status.jpg, oom.jpeg
>
>
> Sometimes, NodeManager will send large container statuses to the 
> ResourceManager when it starts with recovery; as a result, NodeManager will 
> fail to start because of OOM.
>  In my case, the large container statuses size is 135M, containing 11 
> container statuses, and I found that the diagnostics of 5 containers are very 
> large (27M), so I truncate the container diagnostics as in the patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8609) NM oom because of large container statuses

2018-08-01 Thread Xianghao Lu (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565195#comment-16565195
 ] 

Xianghao Lu commented on YARN-8609:
---

Seems to be related to https://issues.apache.org/jira/browse/YARN-2115. 
[~jianhe], would you like to review the patch?

> NM oom because of large container statuses
> --
>
> Key: YARN-8609
> URL: https://issues.apache.org/jira/browse/YARN-8609
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Xianghao Lu
>Priority: Major
> Attachments: YARN-8609.001.patch, contain_status.jpg, oom.jpeg
>
>
> Sometimes, NodeManager will send large container statuses to the 
> ResourceManager when recovering; as a result, NodeManager will fail to start 
> because of OOM.
> In my case, the large container statuses size is 135M, containing 11 
> container statuses, and I found that the diagnostics of 5 containers are very 
> large (27M), so I truncate the container diagnostics as in the patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8609) NM oom because of large container statuses

2018-08-01 Thread Xianghao Lu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xianghao Lu updated YARN-8609:
--
Attachment: oom.jpeg

> NM oom because of large container statuses
> --
>
> Key: YARN-8609
> URL: https://issues.apache.org/jira/browse/YARN-8609
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Xianghao Lu
>Priority: Major
> Attachments: YARN-8609.001.patch, contain_status.jpg, oom.jpeg
>
>
> Sometimes, NodeManager will send large container statuses to the 
> ResourceManager when recovering; as a result, NodeManager will fail to start 
> because of OOM.
> In my case, the large container statuses size is 135M, containing 11 
> container statuses, and I found that the diagnostics of 5 containers are very 
> large (27M), so I truncate the container diagnostics as in the patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8609) NM oom because of large container statuses

2018-08-01 Thread Xianghao Lu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xianghao Lu updated YARN-8609:
--
Attachment: YARN-8609.001.patch

> NM oom because of large container statuses
> --
>
> Key: YARN-8609
> URL: https://issues.apache.org/jira/browse/YARN-8609
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Xianghao Lu
>Priority: Major
> Attachments: YARN-8609.001.patch, contain_status.jpg
>
>
> Sometimes, NodeManager will send large container statuses to the 
> ResourceManager when recovering; as a result, NodeManager will fail to start 
> because of OOM.
> In my case, the large container statuses size is 135M, containing 11 
> container statuses, and I found that the diagnostics of 5 containers are very 
> large (27M), so I truncate the container diagnostics as in the patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8609) NM oom because of large container statuses

2018-08-01 Thread Xianghao Lu (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xianghao Lu updated YARN-8609:
--
Attachment: contain_status.jpg

> NM oom because of large container statuses
> --
>
> Key: YARN-8609
> URL: https://issues.apache.org/jira/browse/YARN-8609
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Xianghao Lu
>Priority: Major
> Attachments: YARN-8609.001.patch, contain_status.jpg
>
>
> Sometimes, NodeManager will send large container statuses to the 
> ResourceManager when recovering; as a result, NodeManager will fail to start 
> because of OOM.
> In my case, the large container statuses size is 135M, containing 11 
> container statuses, and I found that the diagnostics of 5 containers are very 
> large (27M), so I truncate the container diagnostics as in the patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8609) NM oom because of large container statuses

2018-08-01 Thread Xianghao Lu (JIRA)
Xianghao Lu created YARN-8609:
-

 Summary: NM oom because of large container statuses
 Key: YARN-8609
 URL: https://issues.apache.org/jira/browse/YARN-8609
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Xianghao Lu


Sometimes, NodeManager will send large container statuses to the ResourceManager 
when recovering; as a result, NodeManager will fail to start because of OOM.
In my case, the large container statuses size is 135M, containing 11 container 
statuses, and I found that the diagnostics of 5 containers are very large (27M), 
so I truncate the container diagnostics as in the patch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3841) [Storage implementation] Adding retry semantics to HDFS backing storage

2018-08-01 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-3841:

Attachment: YARN-3841.004.patch

> [Storage implementation] Adding retry semantics to HDFS backing storage
> ---
>
> Key: YARN-3841
> URL: https://issues.apache.org/jira/browse/YARN-3841
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3841-YARN-7055.002.patch, YARN-3841.001.patch, 
> YARN-3841.002.patch, YARN-3841.003.patch, YARN-3841.004.patch
>
>
> HDFS backing storage is useful for following scenarios.
> 1. For Hadoop clusters which don't run HBase.
> 2. For fallback from HBase when HBase cluster is temporary unavailable. 
> Quoting ATS design document of YARN-2928:
> {quote}
> In the case the HBase
> storage is not available, the plugin should buffer the writes temporarily 
> (e.g. HDFS), and flush
> them once the storage comes back online. Reading and writing to hdfs as the 
> the backup storage
> could potentially use the HDFS writer plugin unless the complexity of 
> generalizing the HDFS
> writer plugin for this purpose exceeds the benefits of reusing it here.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2018-08-01 Thread Abhishek Modi (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abhishek Modi updated YARN-3879:

Attachment: YARN-3879.004.patch

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Abhishek Modi
>Priority: Major
>  Labels: YARN-5355
> Attachments: YARN-3879-YARN-7055.001.patch, YARN-3879.001.patch, 
> YARN-3879.002.patch, YARN-3879.003.patch, YARN-3879.004.patch
>
>
> Reader version of YARN-3841



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8594) [UI2] Show the current logged in user in UI2

2018-08-01 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565121#comment-16565121
 ] 

genericqa commented on YARN-8594:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-8594 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-8594 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933900/YARN-8594.002.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21468/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [UI2] Show the current logged in user in UI2
> 
>
> Key: YARN-8594
> URL: https://issues.apache.org/jira/browse/YARN-8594
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-8594.001.patch, YARN-8594.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8593) Add new RM web service endpoint to get cluster user info

2018-08-01 Thread Akhil PB (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565118#comment-16565118
 ] 

Akhil PB commented on YARN-8593:


Updated v3 patch which incorporates comments.

[~sunilg] [~rohithsharma] Please review the v3 patch.

> Add new RM web service endpoint to get cluster user info
> 
>
> Key: YARN-8593
> URL: https://issues.apache.org/jira/browse/YARN-8593
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-8593.001.patch, YARN-8593.002.patch, 
> YARN-8593.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8594) [UI2] Show the current logged in user in UI2

2018-08-01 Thread Akhil PB (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-8594:
---
Attachment: YARN-8594.002.patch

> [UI2] Show the current logged in user in UI2
> 
>
> Key: YARN-8594
> URL: https://issues.apache.org/jira/browse/YARN-8594
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Attachments: YARN-8594.001.patch, YARN-8594.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7948) Enable refreshing maximum allocation for multiple resource types

2018-08-01 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16565116#comment-16565116
 ] 

genericqa commented on YARN-7948:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 44s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 71m 
52s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:ba1ab08 |
| JIRA Issue | YARN-7948 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12933880/YARN-7948.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux c01125cd9425 4.4.0-130-generic #156-Ubuntu SMP Thu Jun 14 
08:53:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a48a0cc |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21465/testReport/ |
| Max. process+thread count | 896 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21465/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Enable refreshing maximum allocation for multiple 
