[jira] [Commented] (YARN-7654) Support ENTRY_POINT for docker container

2018-04-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451687#comment-16451687
 ] 

genericqa commented on YARN-7654:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
58s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 50s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
30s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
1s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}109m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | YARN-7654 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920474/YARN-7654.017.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux d05f0ac763bf 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/pro

[jira] [Updated] (YARN-8198) Add Security-Related HTTP Response Header in Yarn WEBUIs.

2018-04-24 Thread Kanwaljeet Sachdev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kanwaljeet Sachdev updated YARN-8198:
-
Attachment: YARN-8198.003.patch

> Add Security-Related HTTP Response Header in Yarn WEBUIs.
> -
>
> Key: YARN-8198
> URL: https://issues.apache.org/jira/browse/YARN-8198
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Kanwaljeet Sachdev
>Assignee: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Attachments: YARN-8198.001.patch, YARN-8198.002.patch, 
> YARN-8198.003.patch
>
>
> As of today, the YARN web UI lacks certain security-related HTTP response 
> headers. We are planning to add a few default ones and also to add support 
> for headers to be added via XML config. We plan to make the two below the 
> defaults.
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along the 
> lines below:
> {code:xml}
> <property>
>   <name>hadoop.http.header.Strict_Transport_Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
>  
> A regex matcher will pick up these properties and add them to the response 
> headers when Jetty prepares the response.
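
For illustration only, a minimal sketch of how such a regex matcher could lift 
{{hadoop.http.header.*}} properties into response headers. The class and method 
names below are hypothetical, and the underscore-to-dash mapping of the header 
name is an assumption, not something confirmed by the patch:
{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HeaderConfigSketch {
  // Matches config keys like "hadoop.http.header.Strict_Transport_Security".
  private static final Pattern HEADER_KEY =
      Pattern.compile("^hadoop\\.http\\.header\\.(.+)$");

  // Lift matching properties into a header-name -> value map.
  static Map<String, String> extractHeaders(Map<String, String> conf) {
    Map<String, String> headers = new LinkedHashMap<>();
    for (Map.Entry<String, String> e : conf.entrySet()) {
      Matcher m = HEADER_KEY.matcher(e.getKey());
      if (m.matches()) {
        // Assumption: underscores in the key map to dashes in the header name.
        headers.put(m.group(1).replace('_', '-'), e.getValue());
      }
    }
    return headers;
  }

  public static void main(String[] args) {
    Map<String, String> conf = new LinkedHashMap<>();
    conf.put("hadoop.http.header.Strict_Transport_Security", "valHSTSFromXML");
    conf.put("hadoop.http.authentication.type", "simple"); // ignored by the matcher
    // Prints {Strict-Transport-Security=valHSTSFromXML}
    System.out.println(extractHeaders(conf));
  }
}
{code}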






[jira] [Commented] (YARN-8204) Yarn Service Upgrade: Add a flag to disable upgrade

2018-04-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451671#comment-16451671
 ] 

genericqa commented on YARN-8204:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 44m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core:
 The patch generated 1 new + 54 unchanged - 0 fixed = 55 total (was 54) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 47s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
41s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | YARN-8204 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920553/YARN-8204.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 4819db36db11 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / bb3c504 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20462/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20462/testReport/ |
| Max. process+thread count | 693 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applica

[jira] [Updated] (YARN-8199) Logging fileSize of NM local files

2018-04-24 Thread Prabhu Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-8199:

Description: 
Logging the fileSize of NM local files, such as log and output files, by the 
NodeManager before the cleanup will help to find the application which has 
written huge output files and the one which has logged too verbosely.


  was:
Logging the fileSize of NM local files like log and output files by NodeManager 
before the cleanup will help to find the application which has wriien huge 
output files and one which logged too verbose.



> Logging fileSize of NM local files 
> ---
>
> Key: YARN-8199
> URL: https://issues.apache.org/jira/browse/YARN-8199
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Affects Versions: 2.7.3
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: supportability
>
> Logging the fileSize of NM local files, such as log and output files, by 
> the NodeManager before the cleanup will help to find the application which 
> has written huge output files and the one which has logged too verbosely.
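
As a sketch of the idea (hypothetical code, not the actual patch): walk the 
application's local directory and log each file's size just before deletion.
{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class LogSizesBeforeCleanup {
  public static void main(String[] args) throws IOException {
    // Hypothetical stand-in for an NM local app directory about to be cleaned up.
    Path appDir = Paths.get(args.length > 0 ? args[0] : ".");
    try (Stream<Path> files = Files.walk(appDir)) {
      files.filter(Files::isRegularFile).forEach(f -> {
        try {
          // Log the size before cleanup so huge/verbose apps can be found later.
          System.out.println("About to delete " + f + " (" + Files.size(f) + " bytes)");
        } catch (IOException e) {
          System.err.println("Could not stat " + f + ": " + e.getMessage());
        }
      });
    }
    // The actual deletion would follow here.
  }
}
{code}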






[jira] [Updated] (YARN-8199) Logging fileSize of NM local files

2018-04-24 Thread Prabhu Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-8199:

Description: 
Logging the fileSize of NM local files like log and output files by NodeManager 
before the cleanup will help to find the application which has wriien huge 
output files and one which logged too verbose.


  was:
Logging the fileSize of NM local files like log and output files by NodeManager 
before the cleanup will help to find the application which is writing huge 
output files and one which logs too verbose.



> Logging fileSize of NM local files 
> ---
>
> Key: YARN-8199
> URL: https://issues.apache.org/jira/browse/YARN-8199
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Affects Versions: 2.7.3
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: supportability
>
> Logging the fileSize of NM local files like log and output files by 
> NodeManager before the cleanup will help to find the application which has 
> wriien huge output files and one which logged too verbose.






[jira] [Updated] (YARN-8199) Logging fileSize of NM local files

2018-04-24 Thread Prabhu Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-8199:

Summary: Logging fileSize of NM local files   (was: Loggng fileSize of NM 
local files )

> Logging fileSize of NM local files 
> ---
>
> Key: YARN-8199
> URL: https://issues.apache.org/jira/browse/YARN-8199
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Affects Versions: 2.7.3
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: supportability
>
> Logging the fileSize of NM local files like log and output files by 
> NodeManager before the cleanup will help to find the application which is 
> writing huge output files and one which logs too verbose.






[jira] [Updated] (YARN-8199) Loggng fileSize of NM local files

2018-04-24 Thread Prabhu Joseph (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated YARN-8199:

Labels: supportability  (was: )

> Loggng fileSize of NM local files 
> --
>
> Key: YARN-8199
> URL: https://issues.apache.org/jira/browse/YARN-8199
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Affects Versions: 2.7.3
>Reporter: Prabhu Joseph
>Assignee: Prabhu Joseph
>Priority: Major
>  Labels: supportability
>
> Logging the fileSize of NM local files like log and output files by 
> NodeManager before the cleanup will help to find the application which is 
> writing huge output files and one which logs too verbose.






[jira] [Commented] (YARN-7654) Support ENTRY_POINT for docker container

2018-04-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451531#comment-16451531
 ] 

genericqa commented on YARN-7654:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m  2s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 13s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m  1s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
55s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}113m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
|   | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | YARN-7654 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920474/YARN-7654.017.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| 

[jira] [Commented] (YARN-7939) Yarn Service Upgrade: add support to upgrade a component instance

2018-04-24 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451518#comment-16451518
 ] 

Chandni Singh commented on YARN-7939:
-

[~eyang] You are correct. There is existing AM transition logic that gets 
triggered after a component stops Flexing and changes the service state to 
STABLE. It doesn't take into account whether the component is Upgrading. I need 
to fix this.
{quote}Additional error check logic is recommended to prevent user from calling 
component instance upgrade when service upgrade has not been triggered.
{quote}
I don't understand what you mean by this. We already return a message that the 
service upgrade has not been initiated. Even in your case, though the service 
incorrectly transitioned to the {{STABLE}} state, you saw the message:
{code:java}
{"diagnostics":"The upgrade of service abc has not been initiated."}
{code}
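
For what it's worth, the kind of guard being discussed could look like the 
sketch below; the enum and class names are illustrative stand-ins, not the 
actual YARN service classes:
{code:java}
// Illustrative guard: reject a component-instance upgrade unless the service
// itself has been put into an upgrading state first. Names are hypothetical.
enum ServiceState { STABLE, UPGRADING }

class UpgradeGuard {
  static void checkInstanceUpgradeAllowed(String serviceName, ServiceState state) {
    if (state != ServiceState.UPGRADING) {
      // Mirrors the diagnostics message already returned today.
      throw new IllegalStateException(
          "The upgrade of service " + serviceName + " has not been initiated.");
    }
  }
}
{code}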
 

> Yarn Service Upgrade: add support to upgrade a component instance 
> --
>
> Key: YARN-7939
> URL: https://issues.apache.org/jira/browse/YARN-7939
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-7939.001.patch, YARN-7939.002.patch, 
> YARN-7939.003.patch, YARN-7939.004.patch, YARN-7939.005.patch, 
> YARN-7939.006.patch, YARN-7939.007.patch, YARN-7939.008.patch, 
> YARN-7939.009.patch, YARN-7939.010.patch, serviceam.log, upgrade_logs.tgz
>
>
> Yarn core supports in-place upgrade of containers. A yarn service can 
> leverage that to provide in-place upgrade of component instances. Please see 
> YARN-7512 for details.
> We will add support to upgrade a single component instance first and then 
> iteratively add other APIs and features.
>  






[jira] [Commented] (YARN-8183) Fix ConcurrentModificationException inside RMAppAttemptMetrics#convertAtomicLongMaptoLongMap

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451459#comment-16451459
 ] 

Hudson commented on YARN-8183:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #14059 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14059/])
YARN-8183. Fix ConcurrentModificationException inside (wangda: rev 
bb3c504764f807fccba7f28298a12e2296f284cb)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/attempt/RMAppAttemptMetrics.java


> Fix ConcurrentModificationException inside 
> RMAppAttemptMetrics#convertAtomicLongMaptoLongMap
> 
>
> Key: YARN-8183
> URL: https://issues.apache.org/jira/browse/YARN-8183
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8183.1.patch, YARN-8183.2.patch
>
>
> yClient gets stuck killing the application, repeatedly printing the 
> following message
> {code}
> INFO impl.YarnClientImpl: Waiting for application 
> application_1523604760756_0001 to be killed.{code}
> The RM shows the following exception
> {code}
>  ERROR resourcemanager.ResourceManager (ResourceManager.java:handle(995)) - 
> Error in handling event type APP_UPDATE_SAVED for application application_ID
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.convertAtomicLongMaptoLongMap(RMAppAttemptMetrics.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:133)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getRMAppMetrics(RMAppImpl.java:1660)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.appFinished(TimelineServiceV2Publisher.java:178)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.CombinedSystemMetricsPublisher.appFinished(CombinedSystemMetricsPublisher.java:73)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalTransition.transition(RMAppImpl.java:1470)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1408)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1400)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1177)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1164)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:898)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:993)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:977)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:748)
> {code}
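
The failure mode is the classic one: iterating a plain HashMap while another 
thread mutates it throws ConcurrentModificationException, whereas 
ConcurrentHashMap's weakly consistent iterators tolerate concurrent updates. A 
minimal repro sketch of that pattern (not the patch itself):
{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class CmeSketch {
  public static void main(String[] args) throws InterruptedException {
    // With a plain HashMap here, the iteration below can throw
    // ConcurrentModificationException; ConcurrentHashMap does not.
    Map<String, AtomicLong> usage = new ConcurrentHashMap<>();
    Thread writer = new Thread(() -> {
      for (int i = 0; i < 100_000; i++) {
        usage.computeIfAbsent("res" + (i % 64), k -> new AtomicLong()).incrementAndGet();
      }
    });
    writer.start();
    long sum = 0;
    while (writer.isAlive()) {
      for (Map.Entry<String, AtomicLong> e : usage.entrySet()) { // iterate while writing
        sum += e.getValue().get();
      }
    }
    writer.join();
    System.out.println("Iterated safely; last observed sum=" + sum);
  }
}
{code}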






[jira] [Updated] (YARN-8183) Fix ConcurrentModificationException inside RMAppAttemptMetrics#convertAtomicLongMaptoLongMap

2018-04-24 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-8183:
-
Summary: Fix ConcurrentModificationException inside 
RMAppAttemptMetrics#convertAtomicLongMaptoLongMap  (was: yClient for Kill 
Application stuck in infinite loop with message "Waiting for Application to be 
killed")

> Fix ConcurrentModificationException inside 
> RMAppAttemptMetrics#convertAtomicLongMaptoLongMap
> 
>
> Key: YARN-8183
> URL: https://issues.apache.org/jira/browse/YARN-8183
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8183.1.patch, YARN-8183.2.patch
>
>
> yClient gets stuck killing the application, repeatedly printing the 
> following message
> {code}
> INFO impl.YarnClientImpl: Waiting for application 
> application_1523604760756_0001 to be killed.{code}
> The RM shows the following exception
> {code}
>  ERROR resourcemanager.ResourceManager (ResourceManager.java:handle(995)) - 
> Error in handling event type APP_UPDATE_SAVED for application application_ID
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.convertAtomicLongMaptoLongMap(RMAppAttemptMetrics.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:133)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getRMAppMetrics(RMAppImpl.java:1660)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.appFinished(TimelineServiceV2Publisher.java:178)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.CombinedSystemMetricsPublisher.appFinished(CombinedSystemMetricsPublisher.java:73)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalTransition.transition(RMAppImpl.java:1470)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1408)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1400)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1177)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1164)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:898)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:993)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:977)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (YARN-7900) [AMRMProxy] AMRMClientRelayer for stateful FederationInterceptor

2018-04-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451451#comment-16451451
 ] 

genericqa commented on YARN-7900:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 45s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m  
7s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m  7s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 13s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 55 unchanged - 0 fixed = 56 total (was 55) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red}  3m 
16s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m  6s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 25s{color} 
| {color:red} hadoop-yarn-server-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 20s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 54s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 49s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}173m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.api.resource.TestPlacementConstraintTransformations |
|   | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
|   | hadoop.yarn.server.resourcemanager.reser

[jira] [Commented] (YARN-8183) yClient for Kill Application stuck in infinite loop with message "Waiting for Application to be killed"

2018-04-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451444#comment-16451444
 ] 

genericqa commented on YARN-8183:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 18s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 21s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 71m  
1s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}134m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | YARN-8183 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920531/YARN-8183.2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1551dc8d38e6 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9d6befb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20458/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20458/testReport/ |
| Max. process+thread count | 8

[jira] [Comment Edited] (YARN-7939) Yarn Service Upgrade: add support to upgrade a component instance

2018-04-24 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451435#comment-16451435
 ] 

Eric Yang edited comment on YARN-7939 at 4/25/18 12:01 AM:
---

[~csingh] Thank you for patch 10.  It resolved the Invalid event: 
CONTAINER_ALLOCATED issue.  However, I am encountering another issue.  After 
the first container instance completed its upgrade, I continued by issuing an 
upgrade for the second container instance and got an error message:

{code}
[hbase@eyang-1 hadoop-3.2.0-SNAPSHOT]$ curl --negotiate -u : -d@/tmp/u2.json -H 
"Content-Type: application/json" -X PUT 
http://eyang-2.openstacklocal:8088/app/v1/services/abc/components/ping/component-instances/ping-1
{"diagnostics":"The upgrade of service abc has not been initiated."}
{code}

The Status of the application shows:

{code}
[hbase@eyang-1 hadoop-3.2.0-SNAPSHOT]$ ./bin/yarn app -status abc
{"name":"abc","id":"application_1524613547912_0001","lifetime":-1,"components":[{"name":"ping","dependencies":[],"resource":{"cpus":1,"memory":"256","additional":{}},"state":"STABLE","configuration":{"properties":{},"env":{},"files":[]},"quicklinks":[],"containers":[{"id":"container_1524613547912_0001_01_03","ip":"172.26.111.21","hostname":"eyang-4.openstacklocal","state":"NEEDS_UPGRADE","launch_time":1524613644367,"bare_host":"eyang-4.openstacklocal","component_instance_name":"ping-1"},{"id":"container_1524613547912_0001_01_04","ip":"172.26.111.21","hostname":"eyang-4.openstacklocal","state":"READY","launch_time":1524613717682,"bare_host":"eyang-4.openstacklocal","component_instance_name":"ping-0"}],"launch_command":"sleep
 
120","number_of_containers":2,"run_privileged_container":false}],"configuration":{"properties":{},"env":{},"files":[]},"state":"STABLE","quicklinks":{},"version":"v1","kerberos_principal":{"principal_name":"hbase/_h...@example.com","keytab":"file:///etc/security/keytabs/hbase.service.keytab"}}
{code}

The service state is STABLE instead of UPGRADING.  At this point, I cannot 
continue the upgrade or finalize it.  It appears that the AM transition logic 
may set the service state to STABLE prematurely.

Additional error check logic is recommended to prevent user from calling 
component instance upgrade when service upgrade has not been triggered.


was (Author: eyang):
[~csingh] Thank you for patch 10.  It resolved the Invalid event: 
CONTAINER_ALLOCATED issue.  However, I am encountering another issue.  After 
the first container instance completed its upgrade, I continued by issuing an 
upgrade for the second container instance and got an error message:

{code}
[hbase@eyang-1 hadoop-3.2.0-SNAPSHOT]$ curl --negotiate -u : -d@/tmp/u2.json -H 
"Content-Type: application/json" -X PUT 
http://eyang-2.openstacklocal:8088/app/v1/services/abc/components/ping/component-instances/ping-1
{"diagnostics":"The upgrade of service abc has not been initiated."}
{code}

The Status of the application shows:

{code}
[hbase@eyang-1 hadoop-3.2.0-SNAPSHOT]$ ./bin/yarn app -status abc
{"name":"abc","id":"application_1524613547912_0001","lifetime":-1,"components":[{"name":"ping","dependencies":[],"resource":{"cpus":1,"memory":"256","additional":{}},"state":"STABLE","configuration":{"properties":{},"env":{},"files":[]},"quicklinks":[],"containers":[{"id":"container_1524613547912_0001_01_03","ip":"172.26.111.21","hostname":"eyang-4.openstacklocal","state":"NEEDS_UPGRADE","launch_time":1524613644367,"bare_host":"eyang-4.openstacklocal","component_instance_name":"ping-1"},{"id":"container_1524613547912_0001_01_04","ip":"172.26.111.21","hostname":"eyang-4.openstacklocal","state":"READY","launch_time":1524613717682,"bare_host":"eyang-4.openstacklocal","component_instance_name":"ping-0"}],"launch_command":"sleep
 
120","number_of_containers":2,"run_privileged_container":false}],"configuration":{"properties":{},"env":{},"files":[]},"state":"STABLE","quicklinks":{},"version":"v1","kerberos_principal":{"principal_name":"hbase/_h...@example.com","keytab":"file:///etc/security/keytabs/hbase.service.keytab"}}
{code}

The service state is STABLE instead of UPGRADING.  At this point, I cannot 
continue the upgrade or finalize it.

Additional error check logic is recommended to prevent user from calling 
component instance upgrade when service upgrade has not been triggered.

> Yarn Service Upgrade: add support to upgrade a component instance 
> --
>
> Key: YARN-7939
> URL: https://issues.apache.org/jira/browse/YARN-7939
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-7939.001.patch, YARN-7939.002.patch, 
> YARN-7939.003.patch, YARN-7939.004.patch, YARN-7939.005.patch, 
> YARN-7939.006.patc

[jira] [Commented] (YARN-7939) Yarn Service Upgrade: add support to upgrade a component instance

2018-04-24 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451435#comment-16451435
 ] 

Eric Yang commented on YARN-7939:
-

[~csingh] Thank you for patch 10.  It resolved the Invalid event: 
CONTAINER_ALLOCATED issue.  However, I am encountering another issue.  After 
the first container instance completed its upgrade, I continued by issuing an 
upgrade for the second container instance and got an error message:

{code}
[hbase@eyang-1 hadoop-3.2.0-SNAPSHOT]$ curl --negotiate -u : -d@/tmp/u2.json -H 
"Content-Type: application/json" -X PUT 
http://eyang-2.openstacklocal:8088/app/v1/services/abc/components/ping/component-instances/ping-1
{"diagnostics":"The upgrade of service abc has not been initiated."}
{code}

The Status of the application shows:

{code}
[hbase@eyang-1 hadoop-3.2.0-SNAPSHOT]$ ./bin/yarn app -status abc
{"name":"abc","id":"application_1524613547912_0001","lifetime":-1,"components":[{"name":"ping","dependencies":[],"resource":{"cpus":1,"memory":"256","additional":{}},"state":"STABLE","configuration":{"properties":{},"env":{},"files":[]},"quicklinks":[],"containers":[{"id":"container_1524613547912_0001_01_03","ip":"172.26.111.21","hostname":"eyang-4.openstacklocal","state":"NEEDS_UPGRADE","launch_time":1524613644367,"bare_host":"eyang-4.openstacklocal","component_instance_name":"ping-1"},{"id":"container_1524613547912_0001_01_04","ip":"172.26.111.21","hostname":"eyang-4.openstacklocal","state":"READY","launch_time":1524613717682,"bare_host":"eyang-4.openstacklocal","component_instance_name":"ping-0"}],"launch_command":"sleep
 
120","number_of_containers":2,"run_privileged_container":false}],"configuration":{"properties":{},"env":{},"files":[]},"state":"STABLE","quicklinks":{},"version":"v1","kerberos_principal":{"principal_name":"hbase/_h...@example.com","keytab":"file:///etc/security/keytabs/hbase.service.keytab"}}
{code}

The service state is STABLE instead of UPGRADING.  At this point, I cannot 
continue the upgrade or finalize it.

Additional error check logic is recommended to prevent user from calling 
component instance upgrade when service upgrade has not been triggered.

> Yarn Service Upgrade: add support to upgrade a component instance 
> --
>
> Key: YARN-7939
> URL: https://issues.apache.org/jira/browse/YARN-7939
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-7939.001.patch, YARN-7939.002.patch, 
> YARN-7939.003.patch, YARN-7939.004.patch, YARN-7939.005.patch, 
> YARN-7939.006.patch, YARN-7939.007.patch, YARN-7939.008.patch, 
> YARN-7939.009.patch, YARN-7939.010.patch, serviceam.log, upgrade_logs.tgz
>
>
> Yarn core supports in-place upgrade of containers. A yarn service can 
> leverage that to provide in-place upgrade of component instances. Please see 
> YARN-7512 for details.
> We will add support to upgrade a single component instance first and then 
> iteratively add other APIs and features.
>  






[jira] [Commented] (YARN-7939) Yarn Service Upgrade: add support to upgrade a component instance

2018-04-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451434#comment-16451434
 ] 

genericqa commented on YARN-7939:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
22s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  8s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 17s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 27 new + 416 unchanged - 2 fixed = 443 total (was 418) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 18s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 27m 
49s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
21s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}118m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | YARN-7939 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920528/YARN-7939.010.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux aedfe56f8f43 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/L

[jira] [Updated] (YARN-8204) Yarn Service Upgrade: Add a flag to disable upgrade

2018-04-24 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8204:

Attachment: YARN-8204.001.patch

> Yarn Service Upgrade: Add a flag to disable upgrade
> ---
>
> Key: YARN-8204
> URL: https://issues.apache.org/jira/browse/YARN-8204
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-8204.001.patch
>
>
> Add a flag that will enable/disable service upgrade on the cluster. 
> By default it is set to false, since upgrade support is in its early stages.
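
A flag like this would presumably be wired through yarn-site.xml. A hedged 
sketch, assuming a property name along the lines of 
{{yarn.service.upgrade.enabled}} (the actual name is whatever the patch 
defines):
{code:xml}
<!-- Hypothetical property name; the patch defines the real one. -->
<property>
  <name>yarn.service.upgrade.enabled</name>
  <value>false</value>
</property>
{code}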






[jira] [Commented] (YARN-8198) Add Security-Related HTTP Response Header in Yarn WEBUIs.

2018-04-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451432#comment-16451432
 ] 

genericqa commented on YARN-8198:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 55s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 21 new + 94 unchanged - 0 fixed = 115 total (was 94) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
27s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | YARN-8198 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920514/YARN-8198.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 2c22d8831fae 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9d6befb |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/20454/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/20454/testReport/ |
| Max. process+thread count | 1409 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20454/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |

[jira] [Commented] (YARN-8196) yarn.webapp.api-service.enable should be highlighted in the quickstart

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451406#comment-16451406
 ] 

Hudson commented on YARN-8196:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14058 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14058/])
YARN-8196.  Updated documentation for enabling YARN Service REST API.
(eyang: rev f64501fcdc9dfa2e9848db0fb4749c6bd4a54d7f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/yarn-service/QuickStart.md


> yarn.webapp.api-service.enable should be highlighted in the quickstart
> --
>
> Key: YARN-8196
> URL: https://issues.apache.org/jira/browse/YARN-8196
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Davide  Vergari
>Assignee: Billie Rinaldi
>Priority: Trivial
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8196.1.patch
>
>
> To let the resource manager run long-running applications, you must set the 
> property yarn.webapp.api-service.enable to true, as described in 
> http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html.
> However, in the documentation it is mentioned only in the REST API 
> section.
> I think it would be useful to add this configuration to the first section of 
> the quick start guide as well.
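
For reference, a minimal sketch of reading that flag; the property itself is 
normally set in yarn-site.xml on the RM host, and this snippet only illustrates 
the default-off behavior:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ApiServiceFlagSketch {
  public static void main(String[] args) {
    // Loads yarn-site.xml from the classpath, as the RM does at startup.
    Configuration conf = new YarnConfiguration();
    // The YARN Service REST API stays disabled unless this is set to true.
    boolean enabled = conf.getBoolean("yarn.webapp.api-service.enable", false);
    System.out.println("YARN Service REST API enabled: " + enabled);
  }
}
{code}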



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8187) [UI2] clicking on Individual Nodes does not contain breadcrumbs in Nodes Page

2018-04-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451390#comment-16451390
 ] 

genericqa commented on YARN-8187:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
36m  9s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | YARN-8187 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920536/YARN-8187.002.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux e74e6ca1553d 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9d6befb |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20459/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [UI2] clicking on Individual Nodes does not contain breadcrumbs in Nodes Page
> ---
>
> Key: YARN-8187
> URL: https://issues.apache.org/jira/browse/YARN-8187
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sumana Sathish
>Assignee: Zian Chen
>Priority: Critical
> Attachments: Screen Shot 2018-04-24 at 3.54.11 PM.png, 
> YARN-8187.001.patch, YARN-8187.002.patch
>
>
> 1. Click on the 'Nodes' tab in the RM home page.
> 2. Click on an individual node under 'Node HTTP Address'.
> 3. No breadcrumbs are available, e.g. '/Home/Nodes/Node Id/'.
> 4. Breadcrumbs come back once we click on other tabs like 'List of 
> Applications' or 'List of Containers'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8196) yarn.webapp.api-service.enable should be highlighted in the quickstart

2018-04-24 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-8196:

Fix Version/s: 3.1.1

> yarn.webapp.api-service.enable should be highlighted in the quickstart
> --
>
> Key: YARN-8196
> URL: https://issues.apache.org/jira/browse/YARN-8196
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Davide  Vergari
>Assignee: Billie Rinaldi
>Priority: Trivial
> Fix For: 3.2.0, 3.1.1
>
> Attachments: YARN-8196.1.patch
>
>
> To let the resource manager run long-running applications, you must set the 
> property yarn.webapp.api-service.enable to true, as described in 
> http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html.
> However, in the documentation it is mentioned only in the REST API 
> section.
> I think it would be useful to add this configuration to the first section of 
> the quick start guide as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8200) Backport resource types/GPU features to branch-2

2018-04-24 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451385#comment-16451385
 ] 

Wangda Tan commented on YARN-8200:
--

+1 to having a branch for this, so we can more easily know which patches got 
backported.

> Backport resource types/GPU features to branch-2
> 
>
> Key: YARN-8200
> URL: https://issues.apache.org/jira/browse/YARN-8200
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
>
> Currently we have a need for GPU scheduling on our YARN clusters to support 
> deep learning workloads. However, our main production clusters are running 
> older versions of branch-2 (2.7 in our case). To prevent supporting too many 
> very different hadoop versions across multiple clusters, we would like to 
> backport the resource types/resource profiles feature to branch-2, as well as 
> the GPU specific support.
>  
> We have done a trial backport of YARN-3926 and some miscellaneous patches in 
> YARN-7069 based on issues we uncovered, and the backport was fairly smooth. 
> We also did a trial backport of most of YARN-6223 (sans docker support).
>  
> Regarding the backports, perhaps we can do the development in a feature 
> branch and then merge to branch-2 when ready.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8200) Backport resource types/GPU features to branch-2

2018-04-24 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451384#comment-16451384
 ] 

Konstantin Shvachko commented on YARN-8200:
---

What do people think about creating a branch so that Jonathan could apply his 
backporting work? That way we can make this discussion more concrete.
Also, you guys will be able to try it and see if it fits your requirements.

> Backport resource types/GPU features to branch-2
> 
>
> Key: YARN-8200
> URL: https://issues.apache.org/jira/browse/YARN-8200
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
>
> Currently we have a need for GPU scheduling on our YARN clusters to support 
> deep learning workloads. However, our main production clusters are running 
> older versions of branch-2 (2.7 in our case). To prevent supporting too many 
> very different hadoop versions across multiple clusters, we would like to 
> backport the resource types/resource profiles feature to branch-2, as well as 
> the GPU specific support.
>  
> We have done a trial backport of YARN-3926 and some miscellaneous patches in 
> YARN-7069 based on issues we uncovered, and the backport was fairly smooth. 
> We also did a trial backport of most of YARN-6223 (sans docker support).
>  
> Regarding the backports, perhaps we can do the development in a feature 
> branch and then merge to branch-2 when ready.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8205) AM launching is delayed, then state is not updated in ATS

2018-04-24 Thread Sumana Sathish (JIRA)
Sumana Sathish created YARN-8205:


 Summary: AM launching is delayed, then state is not updated in ATS
 Key: YARN-8205
 URL: https://issues.apache.org/jira/browse/YARN-8205
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Sumana Sathish
Assignee: Rohith Sharma K S


There is a transient issue during the app's transition from ACCEPTED to 
RUNNING. If AM launching is delayed, the state is not updated in ATS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8187) [UI2] clicking on Individual Nodes does not contain breadcrumbs in Nodes Page

2018-04-24 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451372#comment-16451372
 ] 

Zian Chen commented on YARN-8187:
-

The screenshot shows how the UI looks after the latest patch.

> [UI2] clicking on Individual Nodes does not contain breadcrumbs in Nodes Page
> ---
>
> Key: YARN-8187
> URL: https://issues.apache.org/jira/browse/YARN-8187
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sumana Sathish
>Assignee: Zian Chen
>Priority: Critical
> Attachments: Screen Shot 2018-04-24 at 3.54.11 PM.png, 
> YARN-8187.001.patch, YARN-8187.002.patch
>
>
> 1. Click on the 'Nodes' tab in the RM home page.
> 2. Click on an individual node under 'Node HTTP Address'.
> 3. No breadcrumbs are available, e.g. '/Home/Nodes/Node Id/'.
> 4. Breadcrumbs come back once we click on other tabs like 'List of 
> Applications' or 'List of Containers'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8187) [UI2] clicking on Individual Nodes does not contain breadcrumbs in Nodes Page

2018-04-24 Thread Zian Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zian Chen updated YARN-8187:

Attachment: Screen Shot 2018-04-24 at 3.54.11 PM.png

> [UI2] clicking on Individual Nodes does not contain breadcrumbs in Nodes Page
> ---
>
> Key: YARN-8187
> URL: https://issues.apache.org/jira/browse/YARN-8187
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sumana Sathish
>Assignee: Zian Chen
>Priority: Critical
> Attachments: Screen Shot 2018-04-24 at 3.54.11 PM.png, 
> YARN-8187.001.patch, YARN-8187.002.patch
>
>
> 1. Click on the 'Nodes' tab in the RM home page.
> 2. Click on an individual node under 'Node HTTP Address'.
> 3. No breadcrumbs are available, e.g. '/Home/Nodes/Node Id/'.
> 4. Breadcrumbs come back once we click on other tabs like 'List of 
> Applications' or 'List of Containers'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8204) Yarn Service Upgrade: Add a flag to disable upgrade

2018-04-24 Thread Chandni Singh (JIRA)
Chandni Singh created YARN-8204:
---

 Summary: Yarn Service Upgrade: Add a flag to disable upgrade
 Key: YARN-8204
 URL: https://issues.apache.org/jira/browse/YARN-8204
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Chandni Singh
Assignee: Chandni Singh


Add a flag that will enable/disable service upgrade on the cluster. 
By default it is set to false, since upgrade is in its early stages.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2674) Distributed shell AM may re-launch containers if RM work preserving restart happens

2018-04-24 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451359#comment-16451359
 ] 

Billie Rinaldi commented on YARN-2674:
--

Hey [~shaneku...@gmail.com], thanks for taking up this patch. It looks like a 
pretty straightforward improvement. I think one additional thing we should do 
is have the AM release the surplus containers.
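
A rough sketch of that suggestion, assuming the AM tracks how many containers 
it still needs; this is illustrative only, not the attached patch:

{code}
import java.util.List;
import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.client.api.AMRMClient;

public class SurplusReleaseSketch {
  // On an allocation callback, launch only what is still needed and hand the
  // surplus back to the RM instead of re-launching on it.
  static void handleAllocated(AMRMClient<AMRMClient.ContainerRequest> amRMClient,
                              List<Container> allocated, int stillNeeded) {
    for (Container container : allocated) {
      if (stillNeeded > 0) {
        stillNeeded--;
        // launchContainer(container); // launch path omitted in this sketch
      } else {
        amRMClient.releaseAssignedContainer(container.getId());
      }
    }
  }
}
{code}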

> Distributed shell AM may re-launch containers if RM work preserving restart 
> happens
> ---
>
> Key: YARN-2674
> URL: https://issues.apache.org/jira/browse/YARN-2674
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Chun Chen
>Assignee: Shane Kumpf
>Priority: Major
>  Labels: oct16-easy
> Attachments: YARN-2674.1.patch, YARN-2674.2.patch, YARN-2674.3.patch, 
> YARN-2674.4.patch, YARN-2674.5.patch, YARN-2674.6.patch
>
>
> Currently, if an RM work-preserving restart happens while distributed shell is 
> running, the distributed shell AM may re-launch all the containers, including 
> new/running/complete. We must make sure it won't re-launch the 
> running/complete containers.
> We need to remove allocated containers from 
> AMRMClientImpl#remoteRequestsTable once the AM receives them from the RM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8196) yarn.webapp.api-service.enable should be highlighted in the quickstart

2018-04-24 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451342#comment-16451342
 ] 

genericqa commented on YARN-8196:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
32m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f |
| JIRA Issue | YARN-8196 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12920500/YARN-8196.1.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 47b2028f4a12 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9d6befb |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 441 (vs. ulimit of 1) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/20456/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> yarn.webapp.api-service.enable should be highlighted in the quickstart
> --
>
> Key: YARN-8196
> URL: https://issues.apache.org/jira/browse/YARN-8196
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.1.0
>Reporter: Davide  Vergari
>Assignee: Billie Rinaldi
>Priority: Trivial
> Attachments: YARN-8196.1.patch
>
>
> To let the resource manager run long-running applications, you must set the 
> property yarn.webapp.api-service.enable to true, as described in 
> http://hadoop.apache.org/docs/r3.1.0/hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html.
> However, in the documentation it is mentioned only in the REST API 
> section.
> I think it would be useful to add this configuration to the first section of 
> the quick start guide as well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-8187) [UI2] clicking on Individual Nodes does not contain breadcrumbs in Nodes Page

2018-04-24 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451341#comment-16451341
 ] 

Zian Chen edited comment on YARN-8187 at 4/24/18 10:30 PM:
---

The root cause of this issue is that we didn't specify a template for 
yarn-node.hbs under the webapp/app/templates dir.

Update the patch v2.


was (Author: zian chen):
Update the patch v2.

> [UI2] clicking on Individual Nodes does not contain breadcrumbs in Nodes Page
> ---
>
> Key: YARN-8187
> URL: https://issues.apache.org/jira/browse/YARN-8187
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sumana Sathish
>Assignee: Zian Chen
>Priority: Critical
> Attachments: YARN-8187.001.patch, YARN-8187.002.patch
>
>
> 1. Click on the 'Nodes' tab in the RM home page.
> 2. Click on an individual node under 'Node HTTP Address'.
> 3. No breadcrumbs are available, e.g. '/Home/Nodes/Node Id/'.
> 4. Breadcrumbs come back once we click on other tabs like 'List of 
> Applications' or 'List of Containers'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8187) [UI2] clicking on Individual Nodes does not contain breadcrumbs in Nodes Page

2018-04-24 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451341#comment-16451341
 ] 

Zian Chen commented on YARN-8187:
-

Update the patch v2.

> [UI2] clicking on Individual Nodes does not contain breadcrumbs in Nodes Page
> ---
>
> Key: YARN-8187
> URL: https://issues.apache.org/jira/browse/YARN-8187
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sumana Sathish
>Assignee: Zian Chen
>Priority: Critical
> Attachments: YARN-8187.001.patch, YARN-8187.002.patch
>
>
> 1. Click on the 'Nodes' tab in the RM home page.
> 2. Click on an individual node under 'Node HTTP Address'.
> 3. No breadcrumbs are available, e.g. '/Home/Nodes/Node Id/'.
> 4. Breadcrumbs come back once we click on other tabs like 'List of 
> Applications' or 'List of Containers'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8187) [UI2] clicking on Individual Nodes does not contain breadcrumbs in Nodes Page

2018-04-24 Thread Zian Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zian Chen updated YARN-8187:

Attachment: YARN-8187.002.patch

> [UI2] clicking on Individual Nodes does not contain breadcrumbs in Nodes Page
> ---
>
> Key: YARN-8187
> URL: https://issues.apache.org/jira/browse/YARN-8187
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn-ui-v2
>Reporter: Sumana Sathish
>Assignee: Zian Chen
>Priority: Critical
> Attachments: YARN-8187.001.patch, YARN-8187.002.patch
>
>
> 1. Click on the 'Nodes' tab in the RM home page.
> 2. Click on an individual node under 'Node HTTP Address'.
> 3. No breadcrumbs are available, e.g. '/Home/Nodes/Node Id/'.
> 4. Breadcrumbs come back once we click on other tabs like 'List of 
> Applications' or 'List of Containers'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8183) yClient for Kill Application stuck in infinite loop with message "Waiting for Application to be killed"

2018-04-24 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451339#comment-16451339
 ] 

Wangda Tan commented on YARN-8183:
--

Thanks [~suma.shivaprasad], +1, pending Jenkins.

> yClient for Kill Application stuck in infinite loop with message "Waiting for 
> Application to be killed"
> ---
>
> Key: YARN-8183
> URL: https://issues.apache.org/jira/browse/YARN-8183
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8183.1.patch, YARN-8183.2.patch
>
>
> yclient gets stuck killing the application, repeatedly printing the following 
> message:
> {code}
> INFO impl.YarnClientImpl: Waiting for application 
> application_1523604760756_0001 to be killed.{code}
> The RM shows the following exception:
> {code}
>  ERROR resourcemanager.ResourceManager (ResourceManager.java:handle(995)) - 
> Error in handling event type APP_UPDATE_SAVED for application application_ID
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.convertAtomicLongMaptoLongMap(RMAppAttemptMetrics.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:133)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getRMAppMetrics(RMAppImpl.java:1660)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.appFinished(TimelineServiceV2Publisher.java:178)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.CombinedSystemMetricsPublisher.appFinished(CombinedSystemMetricsPublisher.java:73)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalTransition.transition(RMAppImpl.java:1470)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1408)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1400)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1177)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1164)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:898)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:993)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:977)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:748)
> {code}
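
The trace points at iterating a HashMap (in convertAtomicLongMaptoLongMap) 
while another thread mutates it. A generic sketch of the failure mode and the 
usual remedy, not the actual fix in the attached patches:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CmeSketch {
  public static void main(String[] args) throws InterruptedException {
    // With java.util.HashMap here, the iteration below can throw
    // ConcurrentModificationException once the writer thread starts.
    Map<String, Long> metrics = new ConcurrentHashMap<>();
    metrics.put("memorySeconds", 0L);

    Thread writer = new Thread(() -> {
      for (int i = 0; i < 100_000; i++) {
        metrics.put("key" + i, (long) i); // concurrent writes
      }
    });
    writer.start();

    long sum = 0;
    for (Map.Entry<String, Long> e : metrics.entrySet()) {
      sum += e.getValue(); // weakly consistent iteration; no CME
    }
    writer.join();
    System.out.println("sum=" + sum);
  }
}
{code}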



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8194) Exception when reinitializing a container using LinuxContainerExecutor

2018-04-24 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-8194:

Summary: Exception when reinitializing a container using 
LinuxContainerExecutor  (was: Yarn Service Upgrade: Exception when 
reinitializing a container using LinuxContainerExecutor)

> Exception when reinitializing a container using LinuxContainerExecutor
> --
>
> Key: YARN-8194
> URL: https://issues.apache.org/jira/browse/YARN-8194
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> When a component instance is upgraded and the container executor is set to 
> {{org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor}}, then 
> the following exception is seen in the nodemanager:
> {code}
> Writing to cgroup task files...
> Creating local dirs...
> Can't open 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
>  for output - File exists
> Getting exit code file...
> Creating script paths...
> Full command array for failed execution:
> [/usr/local/hadoop-3.2.0-SNAPSHOT/bin/container-executor, hbase, hbase, 1, 
> application_1524242413029_0001, container_1524242413029_0001_01_02, 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.tokens,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.pid,
>  /tmp/hadoop-yarn/nm-local-dir, 
> /usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs, cgroups=none]
> 2018-04-20 16:50:16,641 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime:
>  Launch container failed. Exception:
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:118)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:141)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.handleLaunchForLaunchType(LinuxContainerExecutor.java:562)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:477)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:492)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:304)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:101)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
> at org.apache.hadoop.util.Shell.run(Shell.java:902)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:152)
> ... 11 more
> 2018-04-20 16:50:16,642

[jira] [Assigned] (YARN-8194) Yarn Service Upgrade: Exception when reinitializing a container using LinuxContainerExecutor

2018-04-24 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh reassigned YARN-8194:
---

Assignee: Chandni Singh

> Yarn Service Upgrade: Exception when reinitializing a container using 
> LinuxContainerExecutor
> 
>
> Key: YARN-8194
> URL: https://issues.apache.org/jira/browse/YARN-8194
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
>
> When a component instance is upgraded and the container executor is set to 
> {{org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor}}, then 
> the following exception is seen in the nodemanager:
> {code}
> Writing to cgroup task files...
> Creating local dirs...
> Can't open 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
>  for output - File exists
> Getting exit code file...
> Creating script paths...
> Full command array for failed execution:
> [/usr/local/hadoop-3.2.0-SNAPSHOT/bin/container-executor, hbase, hbase, 1, 
> application_1524242413029_0001, container_1524242413029_0001_01_02, 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.tokens,
>  
> /tmp/hadoop-yarn/nm-local-dir/nmPrivate/application_1524242413029_0001/container_1524242413029_0001_01_02/container_1524242413029_0001_01_02.pid,
>  /tmp/hadoop-yarn/nm-local-dir, 
> /usr/local/hadoop-3.2.0-SNAPSHOT/logs/userlogs, cgroups=none]
> 2018-04-20 16:50:16,641 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime:
>  Launch container failed. Exception:
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationException:
>  ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:180)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DefaultLinuxContainerRuntime.launchContainer(DefaultLinuxContainerRuntime.java:118)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DelegatingLinuxContainerRuntime.launchContainer(DelegatingLinuxContainerRuntime.java:141)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.handleLaunchForLaunchType(LinuxContainerExecutor.java:562)
> at 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:477)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.launchContainer(ContainerLaunch.java:492)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:304)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:101)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: ExitCodeException exitCode=33: Could not create copy file 3 
> /tmp/hadoop-yarn/nm-local-dir/usercache/hbase/appcache/application_1524242413029_0001/container_1524242413029_0001_01_02/launch_container.sh
> Could not create local files and directories
> at org.apache.hadoop.util.Shell.runCommand(Shell.java:1009)
> at org.apache.hadoop.util.Shell.run(Shell.java:902)
> at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1227)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.privileged.PrivilegedOperationExecutor.executePrivilegedOperation(PrivilegedOperationExecutor.java:152)
> ... 11 more
> 2018-04-20 16:50:16,642 WARN 
> org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor: Exit code 
> from container co

[jira] [Commented] (YARN-7781) Update YARN-Services-Examples.md to be in sync with the latest code

2018-04-24 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451306#comment-16451306
 ] 

Gour Saha commented on YARN-7781:
-

[~eyang] I filed YARN-8203 to update the examples to use 
centos/httpd-24-centos7.

> Update YARN-Services-Examples.md to be in sync with the latest code
> ---
>
> Key: YARN-7781
> URL: https://issues.apache.org/jira/browse/YARN-7781
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-7781.01.patch, YARN-7781.02.patch, 
> YARN-7781.03.patch, YARN-7781.04.patch, YARN-7781.05.patch, YARN-7781.06.patch
>
>
> Update YARN-Services-Examples.md to make the following additions/changes:
> 1. Add an additional URL and PUT Request JSON to support flex:
> Update to flex up/down the number of containers (instances) of a component of a 
> service:
> PUT URL – http://localhost:8088/app/v1/services/hello-world
> PUT Request JSON
> {code}
> {
>   "components" : [ {
> "name" : "hello",
> "number_of_containers" : 3
>   } ]
> }
> {code}
> 2. Modify all occurrences of /ws/ to /app/
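
As a usage illustration, a minimal client for the flex PUT above; it assumes an 
unsecured cluster with the RM REST endpoint at localhost:8088, as in the 
example:

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FlexPutSketch {
  public static void main(String[] args) throws Exception {
    // The flex payload from the example above.
    String body =
        "{ \"components\": [ { \"name\": \"hello\", \"number_of_containers\": 3 } ] }";
    URL url = new URL("http://localhost:8088/app/v1/services/hello-world");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}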



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8203) Provide functional artifact & launch_command in examples of YARN-Services-Examples.md

2018-04-24 Thread Gour Saha (JIRA)
Gour Saha created YARN-8203:
---

 Summary: Provide functional artifact & launch_command in examples 
of YARN-Services-Examples.md
 Key: YARN-8203
 URL: https://issues.apache.org/jira/browse/YARN-8203
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Gour Saha


In YARN-7781, [~eyang] made the following suggestion. I think it is a good one 
and should be addressed.

Copying Eric's comment verbatim -
The examples are showing nginx, but nginx does not work until YARN-7654 is 
committed, because nginx depends on ENTRY_POINT support and on running a 
privileged container. It would be good to change the example to use 
centos/httpd-24-centos7 with launch_command: /usr/bin/run-httpd for functional 
examples.
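
A sketch of what such a functional example could look like as a service spec; 
the service name and resource values here are illustrative, following the 
schema used by the existing examples:

{code}
{
  "name": "httpd-service",
  "version": "1.0",
  "components": [
    {
      "name": "httpd",
      "number_of_containers": 1,
      "artifact": {
        "id": "centos/httpd-24-centos7:latest",
        "type": "DOCKER"
      },
      "launch_command": "/usr/bin/run-httpd",
      "resource": {
        "cpus": 1,
        "memory": "1024"
      }
    }
  ]
}
{code}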



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8183) yClient for Kill Application stuck in infinite loop with message "Waiting for Application to be killed"

2018-04-24 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451303#comment-16451303
 ] 

Suma Shivaprasad commented on YARN-8183:


Thanks [~leftnoteasy]. Attached a patch with the review comments fixed.

> yClient for Kill Application stuck in infinite loop with message "Waiting for 
> Application to be killed"
> ---
>
> Key: YARN-8183
> URL: https://issues.apache.org/jira/browse/YARN-8183
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8183.1.patch, YARN-8183.2.patch
>
>
> yclient gets stuck killing the application, repeatedly printing the following 
> message:
> {code}
> INFO impl.YarnClientImpl: Waiting for application 
> application_1523604760756_0001 to be killed.{code}
> The RM shows the following exception:
> {code}
>  ERROR resourcemanager.ResourceManager (ResourceManager.java:handle(995)) - 
> Error in handling event type APP_UPDATE_SAVED for application application_ID
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.convertAtomicLongMaptoLongMap(RMAppAttemptMetrics.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:133)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getRMAppMetrics(RMAppImpl.java:1660)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.appFinished(TimelineServiceV2Publisher.java:178)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.CombinedSystemMetricsPublisher.appFinished(CombinedSystemMetricsPublisher.java:73)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalTransition.transition(RMAppImpl.java:1470)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1408)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1400)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1177)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1164)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:898)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:993)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:977)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8183) yClient for Kill Application stuck in infinite loop with message "Waiting for Application to be killed"

2018-04-24 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-8183:
---
Attachment: YARN-8183.2.patch

> yClient for Kill Application stuck in infinite loop with message "Waiting for 
> Application to be killed"
> ---
>
> Key: YARN-8183
> URL: https://issues.apache.org/jira/browse/YARN-8183
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8183.1.patch, YARN-8183.2.patch
>
>
> yclient gets stuck killing the application, repeatedly printing the following 
> message:
> {code}
> INFO impl.YarnClientImpl: Waiting for application 
> application_1523604760756_0001 to be killed.{code}
> The RM shows the following exception:
> {code}
>  ERROR resourcemanager.ResourceManager (ResourceManager.java:handle(995)) - 
> Error in handling event type APP_UPDATE_SAVED for application application_ID
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.convertAtomicLongMaptoLongMap(RMAppAttemptMetrics.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:133)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getRMAppMetrics(RMAppImpl.java:1660)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.appFinished(TimelineServiceV2Publisher.java:178)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.CombinedSystemMetricsPublisher.appFinished(CombinedSystemMetricsPublisher.java:73)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalTransition.transition(RMAppImpl.java:1470)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1408)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1400)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1177)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1164)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:898)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:993)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:977)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8183) yClient for Kill Application stuck in infinite loop with message "Waiting for Application to be killed"

2018-04-24 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-8183:
---
Attachment: (was: YARN-8183.1.patch)

> yClient for Kill Application stuck in infinite loop with message "Waiting for 
> Application to be killed"
> ---
>
> Key: YARN-8183
> URL: https://issues.apache.org/jira/browse/YARN-8183
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8183.1.patch
>
>
> yclient gets stuck killing the application, repeatedly printing the following 
> message:
> {code}
> INFO impl.YarnClientImpl: Waiting for application 
> application_1523604760756_0001 to be killed.{code}
> The RM shows the following exception:
> {code}
>  ERROR resourcemanager.ResourceManager (ResourceManager.java:handle(995)) - 
> Error in handling event type APP_UPDATE_SAVED for application application_ID
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.convertAtomicLongMaptoLongMap(RMAppAttemptMetrics.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:133)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getRMAppMetrics(RMAppImpl.java:1660)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.appFinished(TimelineServiceV2Publisher.java:178)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.CombinedSystemMetricsPublisher.appFinished(CombinedSystemMetricsPublisher.java:73)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalTransition.transition(RMAppImpl.java:1470)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1408)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1400)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1177)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1164)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:898)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:993)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:977)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:748)
> {code}
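
For context on the failure mode: {{RMAppAttemptMetrics#convertAtomicLongMaptoLongMap}} iterates a plain HashMap while other threads keep updating resource usage, and HashMap iterators are fail-fast. Below is a minimal standalone sketch of the race and of the usual remedy (a ConcurrentHashMap, whose iterators are weakly consistent); it is illustrative only, not the attached patch:

{code}
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class CmeSketch {
  public static void main(String[] args) throws Exception {
    Map<String, AtomicLong> usage = new HashMap<>();
    usage.put("memorySeconds", new AtomicLong(1));

    // Writer keeps adding keys while the main thread iterates.
    Thread writer = new Thread(() -> {
      for (int i = 0; i < 1_000_000; i++) {
        usage.put("resource" + i, new AtomicLong(i));
      }
    });
    writer.start();

    try {
      long sum = 0;
      for (Map.Entry<String, AtomicLong> e : usage.entrySet()) {
        sum += e.getValue().get(); // fail-fast iterator may throw here
      }
    } catch (ConcurrentModificationException expected) {
      System.out.println("HashMap iteration raced with a put: " + expected);
    }
    writer.join();

    // Weakly consistent iterators: safe to iterate under concurrent updates.
    Map<String, AtomicLong> safe = new ConcurrentHashMap<>(usage);
    safe.forEach((k, v) -> v.get());
  }
}
{code}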



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8183) yClient for Kill Application stuck in infinite loop with message "Waiting for Application to be killed"

2018-04-24 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-8183:
---
Attachment: YARN-8183.1.patch

> yClient for Kill Application stuck in infinite loop with message "Waiting for 
> Application to be killed"
> ---
>
> Key: YARN-8183
> URL: https://issues.apache.org/jira/browse/YARN-8183
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Sumana Sathish
>Assignee: Suma Shivaprasad
>Priority: Critical
> Attachments: YARN-8183.1.patch
>
>
> YarnClient gets stuck killing the application, repeatedly printing the 
> following message
> {code}
> INFO impl.YarnClientImpl: Waiting for application 
> application_1523604760756_0001 to be killed.{code}
> RM shows following exception
> {code}
>  ERROR resourcemanager.ResourceManager (ResourceManager.java:handle(995)) - 
> Error in handling event type APP_UPDATE_SAVED for application application_ID
> java.util.ConcurrentModificationException
> at java.util.HashMap$HashIterator.nextNode(HashMap.java:1442)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1476)
> at java.util.HashMap$EntryIterator.next(HashMap.java:1474)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.convertAtomicLongMaptoLongMap(RMAppAttemptMetrics.java:212)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptMetrics.getAggregateAppResourceUsage(RMAppAttemptMetrics.java:133)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.getRMAppMetrics(RMAppImpl.java:1660)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.TimelineServiceV2Publisher.appFinished(TimelineServiceV2Publisher.java:178)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.metrics.CombinedSystemMetricsPublisher.appFinished(CombinedSystemMetricsPublisher.java:73)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalTransition.transition(RMAppImpl.java:1470)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1408)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$AppKilledTransition.transition(RMAppImpl.java:1400)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1177)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl$FinalStateSavedTransition.transition(RMAppImpl.java:1164)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$500(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:487)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:898)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:993)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:977)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:197)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:126)
> at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7939) Yarn Service Upgrade: add support to upgrade a component instance

2018-04-24 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451279#comment-16451279
 ] 

Chandni Singh commented on YARN-7939:
-

[~eyang] patch 10 handles the fixable checkstyle warnings and the error 
handling issues that you pointed out.

> Yarn Service Upgrade: add support to upgrade a component instance 
> --
>
> Key: YARN-7939
> URL: https://issues.apache.org/jira/browse/YARN-7939
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-7939.001.patch, YARN-7939.002.patch, 
> YARN-7939.003.patch, YARN-7939.004.patch, YARN-7939.005.patch, 
> YARN-7939.006.patch, YARN-7939.007.patch, YARN-7939.008.patch, 
> YARN-7939.009.patch, YARN-7939.010.patch, serviceam.log, upgrade_logs.tgz
>
>
> Yarn core supports in-place upgrade of containers. A yarn service can 
> leverage that to provide in-place upgrade of component instances. Please see 
> YARN-7512 for details.
> We will add support for upgrading a single component instance first, and then 
> iteratively add other APIs and features.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7939) Yarn Service Upgrade: add support to upgrade a component instance

2018-04-24 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7939:

Attachment: YARN-7939.010.patch

> Yarn Service Upgrade: add support to upgrade a component instance 
> --
>
> Key: YARN-7939
> URL: https://issues.apache.org/jira/browse/YARN-7939
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Attachments: YARN-7939.001.patch, YARN-7939.002.patch, 
> YARN-7939.003.patch, YARN-7939.004.patch, YARN-7939.005.patch, 
> YARN-7939.006.patch, YARN-7939.007.patch, YARN-7939.008.patch, 
> YARN-7939.009.patch, YARN-7939.010.patch, serviceam.log, upgrade_logs.tgz
>
>
> Yarn core supports in-place upgrade of containers. A yarn service can 
> leverage that to provide in-place upgrade of component instances. Please see 
> YARN-7512 for details.
> We will add support for upgrading a single component instance first, and then 
> iteratively add other APIs and features.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8004) Add unit tests for inter queue preemption for dominant resource calculator

2018-04-24 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451226#comment-16451226
 ] 

Eric Payne commented on YARN-8004:
--

bq. So when we set the last parameter to true, we will use DRF calculator.
[~Zian Chen], yes, I see that now. Thanks for pointing it out. I was strictly 
looking at the value of {{yarn.scheduler.capacity.resource-calculator}} at the 
point when {{editSchedule()}} is called.

> Add unit tests for inter queue preemption for dominant resource calculator
> --
>
> Key: YARN-8004
> URL: https://issues.apache.org/jira/browse/YARN-8004
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sumana Sathish
>Assignee: Zian Chen
>Priority: Critical
> Attachments: YARN-8004.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-8122) Component health threshold monitor

2018-04-24 Thread Gour Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gour Saha updated YARN-8122:

Attachment: YARN-8122.006.patch

> Component health threshold monitor
> --
>
> Key: YARN-8122
> URL: https://issues.apache.org/jira/browse/YARN-8122
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-8122.001.patch, YARN-8122.002.patch, 
> YARN-8122.003.patch, YARN-8122.004.patch, YARN-8122.005.patch, 
> YARN-8122.006.patch, YARN-8122.draft.patch, YARN-8122.test.json, 
> YARN-8122.test.log
>
>
> Slider supported component health threshold monitoring with SLIDER-1246. It 
> would be good to have this feature for YARN Service too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8122) Component health threshold monitor

2018-04-24 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451220#comment-16451220
 ] 

Gour Saha commented on YARN-8122:
-

[~billie.rinaldi] I agree that ready state is a better measure of health. Also, 
given that YARN-8060 allows you to disable the default readiness check, it is 
easy to use the running state as the health measure instead of ready.

I am uploading patch 006 with changes based on this feedback.

[~eyang], I think the problem you are facing with your service will also go 
away if you use patch 006. Can you please try it and let me know?

> Component health threshold monitor
> --
>
> Key: YARN-8122
> URL: https://issues.apache.org/jira/browse/YARN-8122
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Gour Saha
>Priority: Major
> Attachments: YARN-8122.001.patch, YARN-8122.002.patch, 
> YARN-8122.003.patch, YARN-8122.004.patch, YARN-8122.005.patch, 
> YARN-8122.draft.patch, YARN-8122.test.json, YARN-8122.test.log
>
>
> Slider supported component health threshold monitoring with SLIDER-1246. It 
> would be good to have this feature for YARN Service too.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7253) Shared Cache Manager daemon command listed as admin subcmd in yarn script

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451032#comment-16451032
 ] 

Hudson commented on YARN-7253:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-7253. Shared Cache Manager daemon command listed as admin subcmd in (xyao: 
rev 7b890a09b547007c65282b22d7ebb2750f2c77a1)
* (edit) hadoop-yarn-project/hadoop-yarn/bin/yarn


> Shared Cache Manager daemon command listed as admin subcmd in yarn script
> -
>
> Key: YARN-7253
> URL: https://issues.apache.org/jira/browse/YARN-7253
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha4
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Trivial
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-7253-trunk-001.patch
>
>
> Currently the command to start the shared cache manager daemon is listed as 
> an admin command in the yarn script usage:
> {noformat}
>   SUBCOMMAND is one of:
> Admin Commands:
> daemonlog            get/set the log level for each daemon
> node                 prints node report(s)
> rmadmin              admin tools
> scmadmin             SharedCacheManager admin tools
> sharedcachemanager   run the SharedCacheManager daemon
> {noformat}
> It should be a daemon command.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7257) AggregatedLogsBlock reports a bad 'end' value as a bad 'start' value

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451033#comment-16451033
 ] 

Hudson commented on YARN-7257:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-7257. AggregatedLogsBlock reports a bad 'end' value as a bad (xyao: rev 
29ea7053656cc5eefdfcc34679a13229ab4d60a5)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/log/AggregatedLogsBlock.java


> AggregatedLogsBlock reports a bad 'end' value as a bad 'start' value
> 
>
> Key: YARN-7257
> URL: https://issues.apache.org/jira/browse/YARN-7257
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 2.9.0, 3.0.0-beta1
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Major
> Fix For: 2.9.0, 3.0.0-beta1, 3.1.0
>
> Attachments: MAPREDUCE-6969.001.patch
>
>
> TestHSWebApp has been failing recently:
> {noformat}
> Running org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp
> Tests run: 17, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.57 sec <<< 
> FAILURE! - in org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp
> testLogsViewBadStartEnd(org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp)
>   Time elapsed: 0.076 sec  <<< FAILURE!
> org.mockito.exceptions.verification.junit.ArgumentsAreDifferent: 
> Argument(s) are different! Wanted:
> printWriter.write(
> "Invalid log end value: bar"
> );
> -> at 
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp.testLogsViewBadStartEnd(TestHSWebApp.java:261)
> Actual invocation has different arguments:
> printWriter.write(
> " "http://www.w3.org/TR/html4/strict.dtd";>"
> );
> -> at 
> org.apache.hadoop.yarn.webapp.view.TextView.echoWithoutEscapeHtml(TextView.java:62)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>   at 
> org.apache.hadoop.mapreduce.v2.hs.webapp.TestHSWebApp.testLogsViewBadStartEnd(TestHSWebApp.java:261)
> {noformat}
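
The failure above is Mockito's argument mismatch: the test expects the bad-'end' message, while the block reported the 'end' problem using the 'start' wording. For illustration, a self-contained sketch of the same verification pattern; the {{report}} helper is a hypothetical stand-in, not the real AggregatedLogsBlock:

{code}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.io.PrintWriter;

public class LogsViewVerifySketch {
  // Hypothetical stand-in for the block's error reporting; the pre-fix bug
  // was emitting the "start" wording for a bad 'end' value.
  static void report(PrintWriter out, String start, String end) {
    if (!start.matches("-?\\d+")) {
      out.write("Invalid log start value: " + start);
    }
    if (!end.matches("-?\\d+")) {
      out.write("Invalid log end value: " + end);
    }
  }

  public static void main(String[] args) {
    PrintWriter out = mock(PrintWriter.class);
    report(out, "foo", "bar");
    verify(out).write("Invalid log start value: foo");
    // Pre-fix, this is the verify that failed with ArgumentsAreDifferent.
    verify(out).write("Invalid log end value: bar");
  }
}
{code}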



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7084) TestSchedulingMonitor#testRMStarts fails sporadically

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451042#comment-16451042
 ] 

Hudson commented on YARN-7084:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-7084. TestSchedulingMonitor#testRMStarts fails sporadically. (xyao: rev 
057e6c370f21acd385b94ddc819bda009209f090)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/TestSchedulingMonitor.java


> TestSchedulingMonitor#testRMStarts fails sporadically
> -
>
> Key: YARN-7084
> URL: https://issues.apache.org/jira/browse/YARN-7084
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Major
> Fix For: 2.9.0, 2.8.3, 2.7.5, 3.0.0, 3.1.0
>
> Attachments: YARN-7084.001.patch
>
>
> TestSchedulingMonitor has been failing sporadically in precommit builds.  
> Failures look like this:
> {noformat}
> Running 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor
> Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.802 sec <<< 
> FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor
> testRMStarts(org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor)
>   Time elapsed: 1.728 sec  <<< FAILURE!
> org.mockito.exceptions.verification.WantedButNotInvoked: 
> Wanted but not invoked:
> schedulingEditPolicy.editSchedule();
> -> at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor.testRMStarts(TestSchedulingMonitor.java:58)
> However, there were other interactions with this mock:
> -> at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.<init>(SchedulingMonitor.java:50)
> -> at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:61)
> -> at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingMonitor.serviceInit(SchedulingMonitor.java:62)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.monitor.TestSchedulingMonitor.testRMStarts(TestSchedulingMonitor.java:58)
> {noformat}
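
The flakiness above is a plain {{verify(...)}} racing the monitor thread. One common cure for this class of test, stated as a sketch rather than necessarily the committed fix, is Mockito's timeout verification, which polls instead of asserting once:

{code}
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.timeout;
import static org.mockito.Mockito.verify;

import org.apache.hadoop.yarn.server.resourcemanager.monitor.SchedulingEditPolicy;

public class TimeoutVerifySketch {
  public void testMonitorInvokesPolicy() throws Exception {
    SchedulingEditPolicy policy = mock(SchedulingEditPolicy.class);
    // ... init and start a SchedulingMonitor wired to 'policy' (omitted) ...
    // Wait up to 10s for the monitor thread to call the policy, rather than
    // asserting immediately and racing the scheduler thread.
    verify(policy, timeout(10000).atLeastOnce()).editSchedule();
  }
}
{code}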



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7250) Update Shared cache client api to use URLs

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451040#comment-16451040
 ] 

Hudson commented on YARN-7250:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-7250. Update Shared cache client api to use URLs. (xyao: rev 
5f494fc3d9ecbedc8999aded3acb4b9aebc9c61c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestSharedCacheClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/SharedCacheClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/SharedCacheClient.java


> Update Shared cache client api to use URLs
> --
>
> Key: YARN-7250
> URL: https://issues.apache.org/jira/browse/YARN-7250
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
>Priority: Minor
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN-7250-trunk-001.patch
>
>
> We should make the SharedCacheClient#use api more consistent with other YARN 
> api methods. We can do this by doing two things:
> # Update the SharedCacheClient#use api so that it returns a URL instead of a 
> Path. Currently yarn developers have to convert the path to a URL when 
> creating a LocalResource. It would be much smoother if they could just use a 
> URL passed to them by the shared cache client.
> # Remove the portion of the client that deals with fragments, as this is not 
> consistent with the rest of YARN. This functionality is bleeding in from the 
> MapReduce layer, which uses fragments to keep track of destination file 
> names. YARN's api does not use fragments. Instead the ContainerLaunchContext 
> expects a Map<String, LocalResource> localResources, where the strings are 
> the destination file names. We should let the YARN application handle 
> destination file names however it wants instead of pushing this into the 
> shared cache api. Additionally, fragments are a clunky way to handle this.
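
For context, point 1 in practice: with the current api, every caller repeats a Path-to-URL conversion when building a LocalResource. A hedged sketch of that boilerplate, assuming the {{URL.fromPath}} helper available in recent Hadoop releases:

{code}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.yarn.api.records.LocalResource;
import org.apache.hadoop.yarn.api.records.LocalResourceType;
import org.apache.hadoop.yarn.api.records.LocalResourceVisibility;
import org.apache.hadoop.yarn.api.records.URL;

public class SharedCacheUseSketch {
  // Pre-change caller burden: SharedCacheClient#use returns a Path, so the
  // application must convert it to a URL by hand for every LocalResource.
  static LocalResource toLocalResource(Path cached, long size, long timestamp) {
    return LocalResource.newInstance(
        URL.fromPath(cached), // the conversion this jira moves into the client
        LocalResourceType.FILE,
        LocalResourceVisibility.PUBLIC,
        size, timestamp);
  }
}
{code}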



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6623) Add support to turn off launching privileged containers in the container-executor

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451041#comment-16451041
 ] 

Hudson commented on YARN-6623:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6623. Add support to turn off launching privileged containers in (xyao: 
rev 96afa69716bf1685fb8f2776ef61edfa4cd3a77f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/common/module-configs.h
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerPullCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerCommandExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerLoadCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test_util.cc
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/configuration.h
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/TestDockerRunCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/DockerContainers.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/utils/test-string-utils.cc
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerClient.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.h
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerStopCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/container-executor.h
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/TestDockerContainerRuntime.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerLoadCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerInspectCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRunCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/modules/common/module-configs.c
* (edit) hadoop-yarn-project/hadoop-yarn/conf/container-executor.cfg
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/docker/DockerRmCommand.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/test/test-container-executor.c
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/

[jira] [Commented] (YARN-6550) Capture launch_container.sh logs to a separate log file

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451046#comment-16451046
 ] 

Hudson commented on YARN-6550:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6550. Capture launch_container.sh logs to a separate log file. (xyao: rev 
ae5e4aac0acd60f3045b62e8d0d107f709761275)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/ContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/LinuxContainerExecutor.java


> Capture launch_container.sh logs to a separate log file
> ---
>
> Key: YARN-6550
> URL: https://issues.apache.org/jira/browse/YARN-6550
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Wangda Tan
>Assignee: Suma Shivaprasad
>Priority: Major
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN-6550.002.patch, YARN-6550.003.patch, 
> YARN-6550.005.patch, YARN-6550.006.patch, YARN-6550.007.patch, 
> YARN-6550.008.patch, YARN-6550.009.patch, YARN-6550.010.patch, 
> YARN-6550.011.patch, YARN-6550.011.patch, YARN-6550.012.patch, 
> YARN-6550.branch-2.001.patch, YARN-6550.patch
>
>
> launch_container.sh, which is generated by the NM, does a bunch of things 
> (like creating links, etc.) while launching a process. No logs are captured 
> until {{exec}} is called. We need to capture all failures of 
> launch_container.sh for easier troubleshooting.
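
The shell-level idea, sketched under the assumption that the generated script redirects its own output before doing any work; the prelaunch.out/prelaunch.err file names are illustrative:

{noformat}
# at the top of the generated launch_container.sh (sketch)
exec >"${LOG_DIR}/prelaunch.out" 2>"${LOG_DIR}/prelaunch.err"
# ... export environment, create resource symlinks: any failure is now logged ...
# hand off to the real container process, with its own stdout/stderr files
exec /bin/bash -c "$CONTAINER_CMD 1>${LOG_DIR}/stdout 2>${LOG_DIR}/stderr"
{noformat}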



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7248) NM returns new SCHEDULED container status to older clients

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451037#comment-16451037
 ] 

Hudson commented on YARN-7248:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-7248. NM returns new SCHEDULED container status to older clients. (xyao: 
rev badb0b59c4ac5de0236d15e9667c943ece4387b5)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerState.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerShutdown.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerQueuing.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerStatus.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerSubState.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestEventFlow.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ContainerStatusPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ContainerState.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/api/records/impl/pb/ProtoUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestResourceTrackerService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto


> NM returns new SCHEDULED container status to older clients
> --
>
> Key: YARN-7248
> URL: https://issues.apache.org/jira/browse/YARN-7248
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Jason Lowe
>Assignee: Arun Suresh
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN-7248.001.patch, YARN-7248.002.patch, 
> YARN-7248.003.patch
>
>
> YARN-4597 added a new SCHEDULED container state, and that state is returned to 
> clients when the container is localizing, etc. However, the client may be 
> running on an older software version that does not have the new SCHEDULED 
> state, which could lead the client to crash on the unexpected container state 
> value or make incorrect assumptions, e.g. that any state != NEW and != RUNNING 
> must be COMPLETED, which was true in the older version.
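
A minimal sketch of the compatibility idea: fold the new state into one an old client understands. The boolean version check is an assumption for illustration; per the file list above, the committed fix instead keeps the client-facing ContainerState stable and exposes finer states via a new ContainerSubState:

{code}
import org.apache.hadoop.yarn.api.records.ContainerState;

public class StateCompatSketch {
  // Sketch only: map SCHEDULED (added by YARN-4597) to RUNNING for clients
  // that predate it, preserving the old "NEW / RUNNING / else complete"
  // assumption those clients were written against.
  static ContainerState forClient(ContainerState actual,
      boolean clientKnowsScheduled) {
    if (!clientKnowsScheduled && actual == ContainerState.SCHEDULED) {
      return ContainerState.RUNNING;
    }
    return actual;
  }
}
{code}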



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6871) Add additional deSelects params in RMWebServices#getAppReport

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451031#comment-16451031
 ] 

Hudson commented on YARN-6871:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6871. Add additional deSelects params in (xyao: rev 
9b47333ef0fe147bfa54e80600066cfd2d3f1fc3)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AppInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/DeSelectFields.java


> Add additional deSelects params in RMWebServices#getAppReport
> -
>
> Key: YARN-6871
> URL: https://issues.apache.org/jira/browse/YARN-6871
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: resourcemanager, router
>Reporter: Giovanni Matteo Fumarola
>Assignee: Tanuj Nayak
>Priority: Major
> Fix For: 2.9.0, 3.0.0, 3.1.0
>
> Attachments: YARN-6871-branch-2.v1.patch, 
> YARN-6871-branch-2.v2.patch, YARN-6871-branch-2.v3.patch, 
> YARN-6871.002.patch, YARN-6871.003.patch, YARN-6871.004.patch, 
> YARN-6871.005.patch, YARN-6871.006.patch, YARN-6871.007.patch, 
> YARN-6871.008.patch, YARN-6871.009.patch, YARN-6871.proto.patch
>
>
> This jira tracks the effort to add additional deSelect params to the 
> GetAppReport to make it lighter and faster.
> With the current one we are facing scalability issues.
> E.g. with ~500 applications running, the AppReport can reach up to 300MB in 
> size due to the {{ResourceRequest}} in the {{AppInfo}}.
> The YARN RM will return the new result faster, use fewer compute cycles to 
> create the report, and improve both the RM's and the client's performance.
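
For illustration, the shape of the resulting call; the {{resourceRequests}} value is the field this series targets, and the full set of accepted values is defined in DeSelectFields:

{noformat}
# skip serializing the heavyweight ResourceRequest payload in the report
GET http://<rm-address>:8088/ws/v1/cluster/apps?deSelects=resourceRequests
{noformat}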



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6509) Add a size threshold beyond which yarn logs will require a force option

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451047#comment-16451047
 ] 

Hudson commented on YARN-6509:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6509. Add a size threshold beyond which yarn logs will require a (xyao: 
rev 2dac9f8995ec497cfc4d0e165439d806094ef68e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestLogsCLI.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/LogCLIHelpers.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/filecontroller/tfile/LogAggregationTFileController.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java


> Add a size threshold beyond which yarn logs will require a force option
> ---
>
> Key: YARN-6509
> URL: https://issues.apache.org/jira/browse/YARN-6509
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Xuan Gong
>Priority: Major
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN-6509.1.patch, YARN-6509.2.patch, YARN-6509.3.patch, 
> YARN-6509.4.patch, YARN-6509.5.patch
>
>
> An accidental fetch for a long-running application can lead to a scenario in 
> which the large log size fills up a disk.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6333) Improve doc for minSharePreemptionTimeout, fairSharePreemptionTimeout and fairSharePreemptionThreshold

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451048#comment-16451048
 ] 

Hudson commented on YARN-6333:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6333. Improve doc for minSharePreemptionTimeout, (xyao: rev 
dcf5c91942034f24553f8ff58d8b62fc9e0fdfc2)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/FairScheduler.md


> Improve doc for minSharePreemptionTimeout, fairSharePreemptionTimeout and 
> fairSharePreemptionThreshold
> --
>
> Key: YARN-6333
> URL: https://issues.apache.org/jira/browse/YARN-6333
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Chetna Chaudhari
>Priority: Major
>  Labels: newbie++
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-6333-1.patch, YARN-6333-2.patch
>
>
> Their default values are not mentioned in the doc. For example, the default 
> value of minSharePreemptionTimeout is {{Long.MAX_VALUE}}, which means min 
> share preemption won't happen until you set a meaningful value. 
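
A hedged allocation-file sketch showing where the three knobs live; the queue name and values are illustrative, not recommendations:

{code}
<allocations>
  <queue name="analytics">
    <!-- seconds below min share before preemption; default Long.MAX_VALUE -->
    <minSharePreemptionTimeout>60</minSharePreemptionTimeout>
    <!-- seconds below the fair-share threshold before preemption -->
    <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
    <!-- fraction of fair share that arms the timeout above -->
    <fairSharePreemptionThreshold>0.5</fairSharePreemptionThreshold>
  </queue>
</allocations>
{code}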



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6962) Add support for updateContainers when allocating using FederationInterceptor

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16451038#comment-16451038
 ] 

Hudson commented on YARN-6962:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6962. Add support for updateContainers when allocating using (xyao: rev 
f90381ea3e595e6a47136e3747dda69fd4d30780)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java


> Add support for updateContainers when allocating using FederationInterceptor
> 
>
> Key: YARN-6962
> URL: https://issues.apache.org/jira/browse/YARN-6962
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Fix For: 2.9.0, 3.1.0
>
> Attachments: YARN-6962.v1.patch, YARN-6962.v2.patch
>
>
> Container update is introduced in YARN-5221. Federation Interceptor needs to 
> support it when splitting (merging) the allocate request (response).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7152) [ATSv2] Registering timeline client before AMRMClient service init throw exception.

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450971#comment-16450971
 ] 

Hudson commented on YARN-7152:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-7152. [ATSv2] Registering timeline client before AMRMClient service (xyao: 
rev 6e0d50d6ae4e396f3efe42589faf70454c200d14)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/AMRMClient.java


> [ATSv2] Registering timeline client before AMRMClient service init throw 
> exception. 
> 
>
> Key: YARN-7152
> URL: https://issues.apache.org/jira/browse/YARN-7152
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineclient
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 3.0.0-beta1, YARN-5355_branch2, 3.1.0
>
> Attachments: YARN-7152.01.patch, YARN-7152.02.patch
>
>
> It is observed that registering the timeline v2 client with AMRMClient before 
> serviceInit throws an exception in the AppMaster. This causes an AppMaster 
> startup failure.
> {noformat}
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: register timeline 
> v2 client when not configured.
>   at 
> org.apache.hadoop.yarn.client.api.AMRMClient.registerTimelineV2Client(AMRMClient.java:708)
>   at 
> org.apache.hadoop.yarn.client.api.async.AMRMClientAsync.registerTimelineV2Client(AMRMClientAsync.java:354)
> {noformat}
> AMRMClient should not assume that the timeline client will be registered only 
> after serviceInit. In the composite service model, this will be an issue. 
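
A minimal sketch of the ordering that avoids the pre-fix exception, assuming the plain (non-async) client and eliding the ApplicationId:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.client.api.AMRMClient;
import org.apache.hadoop.yarn.client.api.TimelineV2Client;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class RegisterOrderSketch {
  public static void main(String[] args) {
    Configuration conf = new YarnConfiguration();
    conf.setBoolean(YarnConfiguration.TIMELINE_SERVICE_ENABLED, true);
    conf.setFloat(YarnConfiguration.TIMELINE_SERVICE_VERSION, 2.0f);

    AMRMClient<AMRMClient.ContainerRequest> client = AMRMClient.createAMRMClient();
    TimelineV2Client timeline =
        TimelineV2Client.createTimelineClient(null /* appId elided in sketch */);

    // Pre-fix, registering before init() throws "register timeline v2 client
    // when not configured", because the ATSv2 config is only read in
    // serviceInit().
    client.init(conf); // serviceInit examines the ATSv2 settings here
    client.registerTimelineV2Client(timeline); // safe only after init(), pre-fix
    client.start();
  }
}
{code}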



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7022) Improve click interaction in queue topology in new YARN UI

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450972#comment-16450972
 ] 

Hudson commented on YARN-7022:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-7022. Improve click interaction in queue topology in new YARN UI. (xyao: 
rev 3b1c4e44942e14ddc4b58756c09ccfb783ebc16c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/tree-selector.js


> Improve click interaction in queue topology in new YARN UI
> --
>
> Key: YARN-7022
> URL: https://issues.apache.org/jira/browse/YARN-7022
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-ui-v2
>Reporter: Abdullah Yousufi
>Assignee: Abdullah Yousufi
>Priority: Major
> Fix For: 3.0.0-beta1, 3.1.0
>
> Attachments: YARN-7022.001.patch
>
>
> Currently, interacting with the tree view in the queues tab of the UI is 
> difficult: you must mouse over to select a queue node and then click to drill 
> down. It would be more intuitive to single-click to select a different queue 
> and double-click to drill down instead.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7075) Better styling for donut charts in new YARN UI

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450969#comment-16450969
 ] 

Hudson commented on YARN-7075:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-7075. Better styling for donut charts in new YARN UI. Contributed (xyao: 
rev c0b19a93a5d346753ca2b187f43318c178e089ea)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/components/donut-chart.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/styles/app.css


> Better styling for donut charts in new YARN UI
> --
>
> Key: YARN-7075
> URL: https://issues.apache.org/jira/browse/YARN-7075
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Da Ding
>Assignee: Da Ding
>Priority: Major
> Fix For: 3.0.0-beta1, 3.1.0
>
> Attachments: Screen Shot 2017-08-22 at 8.36.07 PM.png, Screen Shot 
> 2017-08-29 at 4.36.45 PM.png, Screen Shot 2017-08-31 at 8.23.18 PM.png, 
> yarn-7075.001.patch, yarn-7075.002.patch
>
>
> 1. Adjusted the donut chart size to be slimmer.
> 2. Modified the chart container style to have a modern feel.
> 3. Other changes, like background and font.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6547) Enhance SLS-based tests leveraging invariant checker

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450845#comment-16450845
 ] 

Hudson commented on YARN-6547:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6547. Enhance SLS-based tests leveraging invariant checker. (xyao: rev 
c58bd15776814a53ffc550285f1528781b031787)
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
* (add) hadoop-tools/hadoop-sls/src/test/resources/log4j.properties
* (edit) hadoop-tools/hadoop-sls/pom.xml
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java
* (add) hadoop-tools/hadoop-sls/src/test/resources/exit-invariants.txt
* (edit) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/TestSLSRunner.java
* (edit) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/TestReservationSystemInvariants.java
* (add) hadoop-tools/hadoop-sls/src/test/resources/ongoing-invariants.txt
* (edit) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/BaseSLSRunnerTest.java


> Enhance SLS-based tests leveraging invariant checker
> 
>
> Key: YARN-6547
> URL: https://issues.apache.org/jira/browse/YARN-6547
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6547.v0.patch, YARN-6547.v1.patch, 
> YARN-6547.v2.patch, YARN-6547.v3.patch
>
>
> We can leverage {{InvariantChecker}}s to provide a more thorough validation 
> of SLS-based tests. This patch introduces invariant checking during and at 
> the end of the run.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6604) Allow metric TTL for Application table to be specified through cmd

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450849#comment-16450849
 ] 

Hudson commented on YARN-6604:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6604. Allow metric TTL for Application table to be specified (xyao: rev 
0887355d9cd85f909df66a2a73ba7db2768ef54f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/TimelineSchemaCreator.java


> Allow metric TTL for Application table to be specified through cmd
> --
>
> Key: YARN-6604
> URL: https://issues.apache.org/jira/browse/YARN-6604
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: ATSv2
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Major
>  Labels: atsv2-hbase
> Fix For: 2.9.0, YARN-5355, YARN-5355-branch-2, 3.0.0-alpha4
>
> Attachments: YARN-6604.00.patch
>
>
> We should allow the metrics TTL in the application table to be specified in 
> the schema cmd, as we do with the metrics TTL in the entity table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6458) Use yarn package manager to lock down dependency versions for new web UI

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450829#comment-16450829
 ] 

Hudson commented on YARN-6458:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6458. Use yarn package manager to lock down dependency versions for (xyao: 
rev f4fba3d0acd06f4b6617b82688d4904f45a08b2a)
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/.bowerrc
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/yarn.lock
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/bower-shrinkwrap.json
* (add) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/WEB-INF/wro.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/ember-cli-build.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/package.json


> Use yarn package manager to lock down dependency versions for new web UI
> 
>
> Key: YARN-6458
> URL: https://issues.apache.org/jira/browse/YARN-6458
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sreenath Somarajapuram
>Assignee: Sreenath Somarajapuram
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6458.1.patch, YARN-6458.2.patch, YARN-6458.3.patch
>
>
> As we use semver to denote dependency versions, every time a new build is 
> made, the latest available version of each dependency is downloaded. 
> This affects the reliability of the UI build. Hence we must lock down the 
> dependencies.
> Lockdown must happen in both of the package managers used by the UI - NPM & 
> Bower.
> Yarn:
> Replace NPM with Yarn. Yarn is a package manager developed to solve this 
> issue and many more. It also enables offline builds.
> Bower: 
> The Bower shrinkwrap resolver plugin can be used to lock the dependency 
> versions.
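
For a sense of what the lockdown buys in CI, using standard classic-Yarn flags, shown as a sketch:

{noformat}
# fail the build if yarn.lock is out of date instead of silently refreshing it
yarn install --frozen-lockfile
# with an offline mirror configured, the build needs no network at all
yarn install --offline
{noformat}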



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6634) [API] Refactor ResourceManager WebServices to make API explicit

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450851#comment-16450851
 ] 

Hudson commented on YARN-6634:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6634. [API] Refactor ResourceManager WebServices to make API (xyao: rev 
a5c15bca30d82196edff185267614ccc4a99cc67)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServiceProtocol.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWSConsts.java


> [API] Refactor ResourceManager WebServices to make API explicit
> ---
>
> Key: YARN-6634
> URL: https://issues.apache.org/jira/browse/YARN-6634
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6634-branch-2.v1.patch, YARN-6634.proto.patch, 
> YARN-6634.v1.patch, YARN-6634.v2.patch, YARN-6634.v3.patch, 
> YARN-6634.v4.patch, YARN-6634.v5.patch, YARN-6634.v6.patch, 
> YARN-6634.v7.patch, YARN-6634.v8.patch, YARN-6634.v9.patch
>
>
> The RM exposes a few REST endpoints, but there's no clear API interface 
> defined. This makes it painful to build either clients or extension 
> components like the Router (YARN-5412) that expose REST interfaces 
> themselves. This jira proposes adding an RM WebServices protocol similar to 
> the one we have for RPC, i.e. 
> {{ApplicationClientProtocol}}.
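
The shape of the idea, with placeholder signatures; the committed interface lives in RMWebServiceProtocol.java above, and these are not its real method signatures:

{code}
import java.util.Set;
import javax.servlet.http.HttpServletRequest;

// Illustrative sketch: an explicit protocol interface lets both RMWebServices
// and an extension like the Router implement and forward the same REST
// contract, instead of the contract living implicitly in one servlet class.
public interface RMWebServiceProtocolSketch {
  Object getClusterInfo();
  Object getApps(HttpServletRequest req, Set<String> states, String queue);
  Object getApp(HttpServletRequest req, String appId);
}
{code}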



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6683) Invalid event: COLLECTOR_UPDATE at KILLED

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450832#comment-16450832
 ] 

Hudson commented on YARN-6683:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6683. Invalid event: COLLECTOR_UPDATE at KILLED.  Contributed by (xyao: 
rev 2ad147ef2974dde5b08954f3bdf7218020ca23cf)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppCollectorUpdateEvent.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmapp/RMAppImpl.java


> Invalid event: COLLECTOR_UPDATE at KILLED
> -
>
> Key: YARN-6683
> URL: https://issues.apache.org/jira/browse/YARN-6683
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Assignee: Rohith Sharma K S
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6683.001.patch, YARN-6683.001.patch
>
>
> {code}
> 2017-06-01 20:01:22,686 ERROR rmapp.RMAppImpl (RMAppImpl.java:handle(905)) - 
> Can't handle this event at current state
> org.apache.hadoop.yarn.state.InvalidStateTransitionException: Invalid event: 
> COLLECTOR_UPDATE at KILLED
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:903)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl.handle(RMAppImpl.java:118)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:904)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher.handle(ResourceManager.java:888)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:201)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:127)
> {code}
> The code below already gets the RMApp instance and then sends an event to the 
> RMApp to update the collector address. Instead of updating via an event, it 
> could just update via a method on RMApp. This also avoids state-machine 
> changes.
> Also, are there any implications of COLLECTOR_UPDATE happening in the KILLED 
> state?
> {code}
>   } else {
> String previousCollectorAddr = rmApp.getCollectorAddr();
> if (previousCollectorAddr == null
> || !previousCollectorAddr.equals(collectorAddr)) {
>   // sending collector update event.
>   RMAppCollectorUpdateEvent event =
>   new RMAppCollectorUpdateEvent(appId, collectorAddr);
>   rmContext.getDispatcher().getEventHandler().handle(event);
> }
>   }
> {code}
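
A sketch of the method-call alternative suggested above; {{setCollectorAddr}} is a hypothetical name for illustration, not an existing RMApp method:

{code}
import org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMApp;

class CollectorUpdateSketch {
  // Replace the event dispatch with a direct call on the app object: no
  // dispatcher and no state-machine transition, hence no possibility of
  // "Invalid event: COLLECTOR_UPDATE at KILLED".
  void updateCollector(RMApp rmApp, String collectorAddr) {
    String previous = rmApp.getCollectorAddr();
    if (previous == null || !previous.equals(collectorAddr)) {
      rmApp.setCollectorAddr(collectorAddr); // hypothetical setter
    }
  }
}
{code}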



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6555) Store application flow context in NM state store for work-preserving restart

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450794#comment-16450794
 ] 

Hudson commented on YARN-6555:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6555. Store application flow context in NM state store for (xyao: rev 
67d9c749211acdef0c2ad2dfcacfd172e86fd8f7)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/proto/yarn_server_nodemanager_recovery.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManagerRecovery.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/application/ApplicationImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java


> Store application flow context in NM state store for work-preserving restart
> 
>
> Key: YARN-6555
> URL: https://issues.apache.org/jira/browse/YARN-6555
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355, YARN-5355-branch-2, 3.0.0-alpha4
>Reporter: Vrushali C
>Assignee: Rohith Sharma K S
>Priority: Major
>  Labels: yarn-5355-merge-blocker
> Fix For: 2.9.0, YARN-5355, YARN-5355-branch-2, 3.0.0-alpha4
>
> Attachments: YARN-6555.001.patch, YARN-6555.002.patch, 
> YARN-6555.003.patch
>
>
> If timeline service v2 is enabled and the NM is restarted with recovery 
> enabled, the NM fails to start and throws the error "flow context can't be 
> null".
> This happens because the flow context did not exist before, but now that 
> timeline service v2 is enabled, ApplicationImpl expects it to exist.
> It would also happen even if the flow context had existed before: since we 
> are not persisting it / reading it back during 
> ContainerManagerImpl#recoverApplication, it never gets passed in to 
> ApplicationImpl.
> Full stack trace:
> {code}
> 2017-05-03 21:51:52,178 FATAL 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting 
> NodeManager
> java.lang.IllegalArgumentException: flow context cannot be null
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.(ApplicationImpl.java:104)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationImpl.(ApplicationImpl.java:90)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recoverApplication(ContainerManagerImpl.java:318)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:280)
> at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:267)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:276)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:588)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:649)
> {code}
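
A minimal sketch of the recovery-side change, assuming hypothetical 
FlowContext/FlowContextProto accessor names (the committed patch may differ): 
read the persisted flow context back from the NM state store and hand it to 
ApplicationImpl, so recovery no longer constructs the application with a null 
flow context.

{code}
// Inside ContainerManagerImpl#recoverApplication (sketch; names assumed).
FlowContext flowContext = null;
if (YarnConfiguration.timelineServiceV2Enabled(getConfig())
    && p.hasFlowContext()) {           // p is the recovered application proto
  FlowContextProto fcp = p.getFlowContext();
  flowContext = new FlowContext(
      fcp.getFlowName(), fcp.getFlowVersion(), fcp.getFlowRunId());
}
// Pass the recovered context instead of null, avoiding the
// "flow context cannot be null" failure on restart.
Application app = new ApplicationImpl(
    dispatcher, p.getUser(), flowContext, appId, creds, context);
{code}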



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6643) TestRMFailover fails rarely due to port conflict

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450791#comment-16450791
 ] 

Hudson commented on YARN-6643:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6643. TestRMFailover fails rarely due to port conflict. Contributed (xyao: 
rev 7d17dd5ded9418036b669fd88557e56b0db38ece)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/HATestUtil.java


> TestRMFailover fails rarely due to port conflict
> 
>
> Key: YARN-6643
> URL: https://issues.apache.org/jira/browse/YARN-6643
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6643.001.patch
>
>
> We've seen various tests in {{TestRMFailover}} fail very rarely with a 
> message like "org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.io.IOException: ResourceManager failed to start. Final state is 
> STOPPED".  
> After some digging, it turns out that it's due to a port conflict with the 
> embedded ZooKeeper in the tests.  The embedded ZooKeeper uses 
> {{ServerSocketUtil#getPort}} to choose a free port, but the RMs are 
> configured to use the default port with a "1" or a "2" prefixed to it (e.g. 
> the default port for the RM is 8032, so you'd use 18032 and 28032).
> When I was able to reproduce this, I saw that ZooKeeper was using port 18033, 
> which is "1" prefixed to 8033, the default RM Admin port.  It results in an 
> error like this, causing the RM to be unable to start, and hence the original 
> error message in the test failure:
> {noformat}
> 2017-05-24 01:16:52,735 INFO  service.AbstractService 
> (AbstractService.java:noteFailure(272)) - Service ResourceManager failed in 
> state STARTED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [0.0.0.0:18033] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [0.0.0.0:18033] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
> at 
> org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl.getServer(RpcServerFactoryPBImpl.java:139)
> at 
> org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC.getServer(HadoopYarnProtoRPC.java:65)
> at org.apache.hadoop.yarn.ipc.YarnRPC.getServer(YarnRPC.java:54)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.startServer(AdminService.java:171)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.AdminService.serviceStart(AdminService.java:158)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:120)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1147)
> at 
> org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
> at 
> org.apache.hadoop.yarn.server.MiniYARNCluster$2.run(MiniYARNCluster.java:310)
> Caused by: java.net.BindException: Problem binding to [0.0.0.0:18033] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:720)
> at org.apache.hadoop.ipc.Server.bind(Server.java:482)
> at org.apache.hadoop.ipc.Server$Listener.(Server.java:688)
> at org.apache.hadoop.ipc.Server.(Server.java:2376)
> at org.apache.hadoop.ipc.RPC$Server.(RPC.java:1042)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
> at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:887)
> at 
> org.apache.hadoop.yarn.factori

[jira] [Commented] (YARN-6635) Refactor yarn-app pages in new YARN UI.

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450805#comment-16450805
 ] 

Hudson commented on YARN-6635:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6635. Refactor yarn-app pages in new YARN UI. Contributed by Akhil (xyao: 
rev 6a481be33918381e0fd2b773c33260c2686debe8)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app/charts.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-app/attempts-test.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-apps/services.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app/charts.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-app/info-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/charts.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/attempts.hbs
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-apps/services-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app/attempts.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-services.hbs
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app-attempts.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app-attempts.hbs
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-app-attempts-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-apps.hbs
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-apps/services.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app/info.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/app-table-columns.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app/attempts.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/application.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/components/app-table.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/loading.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/router.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-app/attempts-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-app/charts-test.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-apps/services.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app-attempts.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app/info.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-app/charts-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-app/info-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-flowrun/info.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app-attempt.js
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-app-attempts-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-app/info.js


> Refactor yarn-app pages in new YARN UI.
> ---
>
> Key: YARN-6635
> URL: https://issues.apache.org/jira/browse/YARN-6635
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Akhil PB
>Assignee: Akhil PB
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6635.001.patch
>
>
> Some refactoring has been done for the yarn-app pages in the new YARN UI 
> codebase in the yarn-native-services branch. This ticket intends to bring the 
> refactor

[jira] [Commented] (YARN-6641) Non-public resource localization on a bad disk causes subsequent containers failure

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450795#comment-16450795
 ] 

Hudson commented on YARN-6641:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6641. Non-public resource localization on a bad disk causes (xyao: rev 
86a3ad992cf4e22670b32f05d53b378e8f264198)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalResourcesTrackerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/LocalResourcesTrackerImpl.java


> Non-public resource localization on a bad disk causes subsequent containers 
> failure
> ---
>
> Key: YARN-6641
> URL: https://issues.apache.org/jira/browse/YARN-6641
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6641.001.patch, YARN-6641.002.patch, 
> YARN-6641.003.patch, YARN-6641.004.patch
>
>
> YARN-3591 added the {{checkLocalResource}} method to the 
> {{isResourcePresent()}} call to allow checking an already-localized resource 
> against the list of good/full directories.
> Since the LocalResourcesTrackerImpl instantiations for app-level resources 
> and private resources do not use the new constructor, such resources that sit 
> on a bad disk will never be checked against the good dirs.
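
A minimal sketch of the presumed fix, with an assumed constructor shape: have 
the app-level and private tracker instantiations also pass the local dirs 
handler, so {{isResourcePresent()}} can validate an already-localized resource 
against the good/full directory list.

{code}
// Sketch only; the real LocalResourcesTrackerImpl constructor may differ.
LocalResourcesTracker appResourceTracker =
    new LocalResourcesTrackerImpl(user, appId, dispatcher,
        false /* useLocalCacheDirectoryManager */, conf, stateStore,
        dirsHandler);  // previously omitted, so resources on a bad disk
                       // were never re-checked against the good dirs
{code}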



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6477) Dispatcher no longer needs the raw types suppression

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450808#comment-16450808
 ] 

Hudson commented on YARN-6477:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6477. Dispatcher no longer needs the raw types suppression. (Maya (xyao: 
rev 5511c4e575738e42bdd07b3fe14da6520ddbee06)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/Dispatcher.java


> Dispatcher no longer needs the raw types suppression
> 
>
> Key: YARN-6477
> URL: https://issues.apache.org/jira/browse/YARN-6477
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Maya Wexler
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6477.001.patch
>
>
> Post YARN-4457, the {{@SuppressWarnings("rawtypes")}} is no longer needed in 
> the Dispatcher class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6366) Refactor the NodeManager DeletionService to support additional DeletionTask types.

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450809#comment-16450809
 ] 

Hudson commented on YARN-6366:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6366. Refactor the NodeManager DeletionService to support (xyao: rev 
3c3685a4b86b263fcf716048523d5c82582107b3)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/deletion/task/package-info.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/BaseContainerManagerTest.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/deletion/task/TestFileDeletionTask.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/deletion/recovery/DeletionTaskRecoveryInfo.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DeletionService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestDeletionService.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/deletion/task/FileDeletionTask.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/proto/yarn_server_nodemanager_recovery.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/TestNodeManagerReboot.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/deletion/task/DeletionTaskType.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/TestAppLogAggregatorImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/deletion/task/FileDeletionMatcher.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/logaggregation/AppLogAggregatorImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestResourceLocalizationService.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/api/impl/pb/NMProtoUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/TestLocalResourcesTrackerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/ResourceLocalizationService.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/api/impl/pb/package-info.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/deletion/recovery/package-info.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/localizer/LocalResourcesTrackerImpl.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/deletion/task/DeletionTask.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/NonAggregatingLogHandler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/loghandler/TestNonAggregatingLogHandler.java
* (edit) 
hadoop-yarn-pr

[jira] [Commented] (YARN-6208) Improve the log when FinishAppEvent sent to the NodeManager which didn't run the application

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450827#comment-16450827
 ] 

Hudson commented on YARN-6208:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6208. Improve the log when FinishAppEvent sent to the NodeManager (xyao: 
rev 882891a643f9d43bb06801045309154d39f1d30e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java


> Improve the log when FinishAppEvent sent to the NodeManager which didn't run 
> the application
> 
>
> Key: YARN-6208
> URL: https://issues.apache.org/jira/browse/YARN-6208
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Minor
>  Labels: newbie, supportability
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6208.01.patch, YARN-6208.02.patch, 
> YARN-6208.03.patch, YARN-6208.04.patch
>
>
> When the FinishAppEvent of an application is sent to a NodeManager on which 
> no containers of that application ever ran, we can see the following log:
> {code}
> 2015-12-28 11:59:18,725 WARN 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl:
>  Event EventType: FINISH_APPLICATION sent to absent application 
> application_1446103803043_9892
> {code}
> YARN-4520 made the log as follows:
> {code}
>   LOG.warn("couldn't find application " + appID + " while processing"
>   + " FINISH_APPS event");
> {code}
> and I'm thinking it can be improved:
> * Lower the log level from WARN to INFO, since this situation is expected.
> * Add why the NodeManager couldn't find the application. For example, 
> "because no containers of the application ran on this NodeManager."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6582) FSAppAttempt demand can be updated atomically in updateDemand()

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450792#comment-16450792
 ] 

Hudson commented on YARN-6582:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6582. FSAppAttempt demand can be updated atomically in (xyao: rev 
a9010d31baf7affe7783732b5f1c9736506c3c91)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java


> FSAppAttempt demand can be updated atomically in updateDemand()
> ---
>
> Key: YARN-6582
> URL: https://issues.apache.org/jira/browse/YARN-6582
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6582.001.patch
>
>
> FSAppAttempt#updateDemand first sets demand to 0, and then adds up all the 
> outstanding requests. Instead, we could use another variable tmpDemand to 
> build the new value and atomically replace the demand.
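
A minimal sketch of the atomic variant; getOutstandingRequests() is an 
illustrative stand-in for however the attempt iterates its pending asks:

{code}
@Override
public void updateDemand() {
  // Build the new demand in a temporary...
  Resource tmpDemand = Resources.createResource(0, 0);
  for (ResourceRequest r : getOutstandingRequests()) {  // illustrative helper
    Resources.multiplyAndAddTo(tmpDemand, r.getCapability(),
        r.getNumContainers());
  }
  // ...and publish it with a single reference assignment, so readers never
  // observe the transient zero that the old "reset then add" logic exposed.
  demand = tmpDemand;
}
{code}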



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6497) Method length of ResourceManager#serviceInit() is too long

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450811#comment-16450811
 ] 

Hudson commented on YARN-6497:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6497. Method length of ResourceManager#serviceInit() is too long (xyao: 
rev fee8342f6b988156e7df57f9a423c772f7bca8f9)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java


> Method length of ResourceManager#serviceInit() is too long
> --
>
> Key: YARN-6497
> URL: https://issues.apache.org/jira/browse/YARN-6497
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.9.0
>Reporter: Yufei Gu
>Assignee: Gergely Novák
>Priority: Minor
>  Labels: checkstyle, newbie++
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6497.001.patch
>
>
> Reported by Style checking: Method length is 162 lines (max allowed is 150). 
> [MethodLength]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6316) Provide help information and documentation for TimelineSchemaCreator

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450824#comment-16450824
 ] 

Hudson commented on YARN-6316:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6316 Provide help information and documentation for (xyao: rev 
d48f2f68398609b84f08389bea8e44746f9f0d65)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/TimelineServiceV2.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/src/main/java/org/apache/hadoop/yarn/server/timelineservice/storage/TimelineSchemaCreator.java


> Provide help information and documentation for TimelineSchemaCreator
> 
>
> Key: YARN-6316
> URL: https://issues.apache.org/jira/browse/YARN-6316
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Haibo Chen
>Priority: Major
>  Labels: atsv2-hbase
> Fix For: 2.9.0, YARN-5355, YARN-5355-branch-2, 3.0.0-alpha4
>
> Attachments: YARN-6316.00.patch, YARN-6316.prelim.patch
>
>
> Right now there is no help information for the timeline schema creator. We 
> probably want to provide an option to print help. Ideally, if users pass in 
> no arguments, we should print the help text instead of directly creating the 
> tables. This will simplify cluster operations and timeline v2 deployments.
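
A minimal sketch of the suggested behavior, assuming the tool keeps its 
commons-cli based argument parsing (the option name is illustrative, and 
ParseException handling is elided):

{code}
Options options = new Options();
options.addOption("help", false, "print this help and exit");
CommandLine cmd = new GnuParser().parse(options, args);
if (args.length == 0 || cmd.hasOption("help")) {
  // Print usage instead of silently creating the HBase tables.
  new HelpFormatter().printHelp("TimelineSchemaCreator", options);
  return;
}
{code}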



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6246) Identifying starved apps does not need the scheduler writelock

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450814#comment-16450814
 ] 

Hudson commented on YARN-6246:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6246. Identifying starved apps does not need the scheduler (xyao: rev 
dd7b6fb3cd072776d50e8828d0c8d2cdda0c20cc)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FairScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSParentQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSLeafQueue.java


> Identifying starved apps does not need the scheduler writelock
> --
>
> Key: YARN-6246
> URL: https://issues.apache.org/jira/browse/YARN-6246
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6246.001.patch, YARN-6246.002.patch, 
> YARN-6246.003.patch, YARN-6246.004.patch, YARN-6246.005.patch
>
>
> Currently, the starvation checks are done holding the scheduler writelock. We 
> are probably better off doing this outside it.
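
A sketch of the proposed locking split, with illustrative method names: do the 
read-only starvation scan under the read lock and reserve the write lock for 
actual scheduler mutations.

{code}
// Identify starved apps without blocking allocation threads (sketch).
readLock.lock();
try {
  for (FSLeafQueue queue : queueMgr.getLeafQueues()) {
    queue.updateStarvedApps();  // bookkeeping only, no scheduler mutation
  }
} finally {
  readLock.unlock();
}
{code}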



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6646) Modifier 'static' is redundant for inner enums

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450796#comment-16450796
 ] 

Hudson commented on YARN-6646:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6646. Modifier 'static' is redundant for inner enums (Contributed (xyao: 
rev 1a48d5865463997d2685e1c6c74870ef92d9e61c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/RMAuditLogger.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/CMgrCompletedContainersEvent.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/timelineservice/TimelineMetric.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/CMgrCompletedAppsEvent.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/main/java/org/apache/hadoop/yarn/applications/distributedshell/ApplicationMaster.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/reader/filter/TimelineFilterList.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NMAuditLogger.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/metrics/AbstractSystemMetricsPublisher.java


> Modifier 'static' is redundant for inner enums
> --
>
> Key: YARN-6646
> URL: https://issues.apache.org/jira/browse/YARN-6646
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6646.001.patch
>
>
> A Java inner enum is implicitly static (its constants are implicitly static 
> final), so the explicit 'static' modifier on inner enums is redundant. I 
> suggest deleting the 'static' modifier.
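
For illustration, a member enum is implicitly static (JLS 8.9), so the two 
declarations below are equivalent:

{code}
class Outer {
  static enum WithModifier { A, B }  // 'static' is redundant here
  enum WithoutModifier { A, B }      // identical semantics, preferred form
}
{code}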



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6649) RollingLevelDBTimelineServer throws RuntimeException if object decoding ever fails runtime exception

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450813#comment-16450813
 ] 

Hudson commented on YARN-6649:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6649. RollingLevelDBTimelineServer throws RuntimeException if (xyao: rev 
177c0c1523ad8b1004070f16807ee225fa577523)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/RollingLevelDBTimelineStore.java


> RollingLevelDBTimelineServer throws RuntimeException if object decoding ever 
> fails runtime exception
> 
>
> Key: YARN-6649
> URL: https://issues.apache.org/jira/browse/YARN-6649
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
>Priority: Critical
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6649.1.patch, YARN-6649.2.patch
>
>
> When using the Tez UI (which makes calls to the timeline service REST API), 
> some calls were coming back as 500 internal server error. The root cause was 
> YARN-6654. This jira is to handle object-decoding failures so that, instead 
> of sending internal server errors back to the client, we respond with a 
> partial message.
> {code}
> 2017-05-30 12:47:10,670 WARN 
> org.apache.hadoop.yarn.webapp.GenericExceptionHandler: INTERNAL_SERVER_ERROR
> javax.ws.rs.WebApplicationException: java.lang.RuntimeException: 
> java.io.IOException: java.lang.RuntimeException: unable to encodeValue class 
> from code 1000
>   at 
> org.apache.hadoop.yarn.server.timeline.webapp.TimelineWebServices.getEntity(TimelineWebServices.java:164)
>   at sun.reflect.GeneratedMethodAccessor24.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$TypeOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:185)
>   at 
> com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
>   at 
> com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:288)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
>   at 
> com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
>   at 
> com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1469)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1400)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
>   at 
> com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
>   at 
> com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:886)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:834)
>   at 
> com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:795)
>   at 
> com.google.inject.servlet.FilterDefinition.doFilter(FilterDefinition.java:163)
>   at 
> com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:58)
>   at 
> com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:118)
>   at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:113)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
>   at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:636)
>   at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.doFilter(DelegationTokenAuthenticationFilter.java:294)
>   at 
> org.apache.hadoop.security.authentication.server.Authe

[jira] [Commented] (YARN-6249) TestFairSchedulerPreemption fails inconsistently.

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450759#comment-16450759
 ] 

Hudson commented on YARN-6249:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6249. TestFairSchedulerPreemption fails inconsistently. (Tao Jie (xyao: 
rev 7851ae12cf2d019c5cc694d6a370946312ad533f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairSchedulerPreemption.java


> TestFairSchedulerPreemption fails inconsistently.
> -
>
> Key: YARN-6249
> URL: https://issues.apache.org/jira/browse/YARN-6249
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler, resourcemanager
>Affects Versions: 2.9.0
>Reporter: Sean Po
>Assignee: Tao Jie
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6249.001.patch, YARN-6249.002.patch
>
>
> Tests in TestFairSchedulerPreemption.java will inconsistently fail on trunk. 
> An example stack trace: 
> {noformat}
> Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 24.879 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption
> testPreemptionSelectNonAMContainer[MinSharePreemptionWithDRF](org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption)
>   Time elapsed: 10.475 sec  <<< FAILURE!
> java.lang.AssertionError: Incorrect number of containers on the greedy app 
> expected:<4> but was:<8>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.verifyPreemption(TestFairSchedulerPreemption.java:288)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption.testPreemptionSelectNonAMContainer(TestFairSchedulerPreemption.java:363)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6584) Correct license headers in hadoop-common, hdfs, yarn and mapreduce

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450765#comment-16450765
 ] 

Hudson commented on YARN-6584:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6584. Correct license headers in hadoop-common, hdfs, yarn and (xyao: rev 
8ec366b0f81e07724a729a7c870cf1e98ec27bc5)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FilterFs.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/protocol/DirectoryListing.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordMedian.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/src/test/java/org/apache/hadoop/mapreduce/security/TestMRCredentials.java
* (edit) 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosName.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/job/impl/TestShuffleProvider.java
* (edit) 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/bloom/TestBloomFilters.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/tools/TestTools.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordStandardDeviation.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/JobContextImpl.java
* (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/recovery/records/TimelineDelegationTokenIdentifierData.java
* (edit) 
hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/shade/resource/ServicesResourceTransformer.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapred/TaskAttemptContextImpl.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/bloom/BloomFilterCommonTester.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapreduce/v2/app/webapp/TestAMWebServicesAttempts.java
* (edit) 
hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordMean.java


> Correct license headers in hadoop-common, hdfs, yarn and mapreduce
> --
>
> Key: YARN-6584
> URL: https://issues.apache.org/jira/browse/YARN-6584
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6584-001.patch, YARN-6584-branch-2.001.patch, 
> YARN-6584-branch-2.002.patch, YARN-6584-branch2.001.patch
>
>
> The license headers in some Java files are not the same as in others. 
> Submitting a patch to fix this!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6577) Remove unused ContainerLocalization classes

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450750#comment-16450750
 ] 

Hudson commented on YARN-6577:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6577. Remove unused ContainerLocalization classes. Contributed by (xyao: 
rev 714963f003fb383b1501961a57a979a23b59493c)
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerLocalizationImpl.java
* (delete) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerLocalization.java


> Remove unused ContainerLocalization classes
> ---
>
> Key: YARN-6577
> URL: https://issues.apache.org/jira/browse/YARN-6577
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.3, 3.0.0-alpha2
>Reporter: ZhangBing Lin
>Assignee: ZhangBing Lin
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6577.001.patch
>
>
> As of 2.7.3 and 3.0.0-alpha2, the ContainerLocalization interface and the 
> ContainerLocalizationImpl implementation class are of no use; I recommend 
> removing this unused interface and its implementation class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450768#comment-16450768
 ] 

Hudson commented on YARN-2113:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-2113. Add cross-user preemption within CapacityScheduler's (xyao: rev 
56785ab28df1152698a35042773ae2bdd816bb9f)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/FifoIntraQueuePreemptionPlugin.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempQueuePerPartition.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempUserPerPartition.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyIntraQueueUserLimit.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/IntraQueuePreemptionComputePlugin.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TempAppPerPartition.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DefaultResourceCalculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/IntraQueueCandidatesSelector.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/DominantResourceCalculator.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyIntraQueueWithDRF.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/TestProportionalCapacityPreemptionPolicyIntraQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/Resources.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/resource/ResourceCalculator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/ProportionalCapacityPreemptionPolicyMockFramework.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/capacity/CapacitySchedulerPreemptionUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/UsersManager.java


> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> T

[jira] [Commented] (YARN-6493) Print requested node partition in assignContainer logs

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450769#comment-16450769
 ] 

Hudson commented on YARN-6493:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6493. Print requested node partition in assignContainer logs. (xyao: rev 
4a8814b7e11c5bd4830e30e57780febeb4c53bba)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java


> Print requested node partition in assignContainer logs
> --
>
> Key: YARN-6493
> URL: https://issues.apache.org/jira/browse/YARN-6493
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0, 2.7.4, 2.6.6
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Fix For: 2.9.0, 2.7.4, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6493-branch-2.7.001.patch, 
> YARN-6493-branch-2.7.002.patch, YARN-6493-branch-2.8.001.patch, 
> YARN-6493-branch-2.8.002.patch, YARN-6493-branch-2.8.003.patch, 
> YARN-6493.001.patch, YARN-6493.002.patch, YARN-6493.003.patch
>
>
> It would be useful to have the node's partition when logging a container 
> allocation, for tracking purposes.
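
A sketch of the extra field in the allocation log line; everything except the 
partition suffix is illustrative:

{code}
LOG.info("assignedContainer" +
    " application attempt=" + application.getApplicationAttemptId() +
    " container=" + allocatedContainer.getId() +
    " queue=" + queueName +
    " node partition=" + node.getPartition());  // the newly logged field
{code}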



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6602) Impersonation does not work if standby RM is contacted first

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450760#comment-16450760
 ] 

Hudson commented on YARN-6602:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6602. Impersonation does not work if standby RM is contacted first (xyao: 
rev faf36776c73997185e8a048e39366e4519639d95)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/ClientRMProxy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RequestHedgingRMFailoverProxyProvider.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/ConfiguredRMFailoverProxyProvider.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RMProxy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/client/TestClientRMProxy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/ServerRMProxy.java


> Impersonation does not work if standby RM is contacted first
> 
>
> Key: YARN-6602
> URL: https://issues.apache.org/jira/browse/YARN-6602
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 3.0.0-alpha4
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6602.001.patch, YARN-6602.002.patch
>
>
> When RM HA is enabled, impersonation does not work correctly if the Yarn 
> Client connects to the standby RM first.  When this happens, the 
> impersonation is "lost" and the client does things on behalf of the 
> impersonator user.  We saw this with the OOZIE-1770 Oozie on Yarn feature.
> I need to investigate this some more, but it appears to be related to 
> delegation tokens.  When this issue occurs, the tokens have the owner as 
> "oozie" instead of the actual user.  On a hunch, we found a workaround that 
> explicitly adding a correct RM HA delegation token fixes the problem:
> {code:java}
> org.apache.hadoop.yarn.api.records.Token token = 
> yarnClient.getRMDelegationToken(ClientRMProxy.getRMDelegationTokenService(conf));
> org.apache.hadoop.security.token.Token token2 = new 
> org.apache.hadoop.security.token.Token(token.getIdentifier().array(), 
> token.getPassword().array(), new Text(token.getKind()), new 
> Text(token.getService()));
> UserGroupInformation.getCurrentUser().addToken(token2);
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6560) SLS doesn't honor node total resource specified in sls-runner.xml

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450752#comment-16450752
 ] 

Hudson commented on YARN-6560:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6560. SLS doesn't honor node total resource specified in (xyao: rev 
8bee566786d5051569efbb6ddf78680ffc96e60a)
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java


> SLS doesn't honor node total resource specified in sls-runner.xml
> -
>
> Key: YARN-6560
> URL: https://issues.apache.org/jira/browse/YARN-6560
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6560.1.patch, YARN-6560.2.patch, YARN-6560.3.patch
>
>
> Now SLSRunner extends ToolRunner, so setConf will be called twice: once in 
> the init() of SLSRunner and once in ToolRunner. The latter one will overwrite 
> the previous one, so it won't correctly load sls-runner.xml.
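For readers unfamiliar with the Tool/ToolRunner contract, here is a minimal sketch of the failure mode described above; the class is an illustrative stand-in, not the actual SLSRunner code:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Illustrative stand-in showing why setConf ends up running twice.
public class DoubleSetConfSketch extends Configured implements Tool {

  public void init() {
    Configuration conf = new Configuration(false);
    conf.addResource("sls-runner.xml");  // setConf #1: carries node resources
    setConf(conf);
  }

  @Override
  public int run(String[] args) {
    // By this point getConf() returns the Configuration passed to
    // ToolRunner.run(), not the one loaded in init().
    return 0;
  }

  public static void main(String[] args) throws Exception {
    DoubleSetConfSketch runner = new DoubleSetConfSketch();
    runner.init();
    // setConf #2: ToolRunner.run() calls runner.setConf(...) with the conf
    // given here, silently replacing the one that had sls-runner.xml.
    ToolRunner.run(new Configuration(), runner, args);
  }
}
{code}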






[jira] [Commented] (YARN-6111) Rumen input doesn't work in SLS

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450764#comment-16450764
 ] 

Hudson commented on YARN-6111:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6111. Rumen input doesn't work in SLS. Contributed by Yufei Gu. (xyao: rev 
c87683d5fd5155cec5c44d677ba0cf5b78a56d82)
* (edit) hadoop-tools/hadoop-sls/src/main/data/2jobs2min-rumen-jh.json


> Rumen input doesn't work in SLS
> --
>
> Key: YARN-6111
> URL: https://issues.apache.org/jira/browse/YARN-6111
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Affects Versions: 2.6.0, 2.7.3, 3.0.0-alpha2
> Environment: ubuntu14.0.4 os
>Reporter: YuJie Huang
>Assignee: Yufei Gu
>Priority: Major
>  Labels: test
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6111.001.patch
>
>
> Hi guys,
> I am trying to learn the use of SLS.
> I would like to get the file realtimetrack.json, but it only 
> contains "[]" at the end of a simulation. This is the command I use to 
> run the instance:
> HADOOP_HOME $ bin/slsrun.sh --input-rumen=sample-data/2jobsmin-rumen-jh.json 
> --output-dir=sample-data 
> All other files, including metrics, appear to be properly populated. I can 
> also trace it on the web at http://localhost:10001/simulate
> Can someone help?
> Thanks






[jira] [Commented] (YARN-6615) AmIpFilter drops query parameters on redirect

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450775#comment-16450775
 ] 

Hudson commented on YARN-6615:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6615. AmIpFilter drops query parameters on redirect. Contributed by (xyao: 
rev 88e00969c2c13a7ef2d5650247b82beb4b615a47)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/main/java/org/apache/hadoop/yarn/server/webproxy/amfilter/AmIpFilter.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/src/test/java/org/apache/hadoop/yarn/server/webproxy/amfilter/TestAmFilter.java


> AmIpFilter drops query parameters on redirect
> -
>
> Key: YARN-6615
> URL: https://issues.apache.org/jira/browse/YARN-6615
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha2
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Major
> Fix For: 2.9.0, 2.7.4, 2.6.6, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6615-branch-2.6.1.patch, 
> YARN-6615-branch-2.6.2.patch, YARN-6615-branch-2.6.3.patch, 
> YARN-6615-branch-2.8.1.patch, YARN-6615.1.patch
>
>
> When an AM web request is redirected to the RM, the query parameters are 
> dropped from the web request.
> This happens for Spark as described in SPARK-20772.
> The repro steps are:
> - Start up the spark-shell in yarn mode and run a job
> - Try to access the job details through http://:4040/jobs/job?id=0
> - An HTTP ERROR 400 is thrown (requirement failed: missing id parameter)
> This works fine in local or standalone mode, but does not work on Yarn, where 
> the query parameter is dropped. The request succeeds if the UI filter 
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter is removed from 
> the config, which shows that the problem is in the filter.
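As a hedged illustration of the general fix (the helper and its parameter names are illustrative, not necessarily the committed patch), a redirect that keeps the query string looks like this:

{code:java}
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative helper: build the redirect target from the original request
// and append its query string instead of dropping it.
final class RedirectSketch {
  static void redirectPreservingQuery(HttpServletRequest req,
      HttpServletResponse resp, String proxyBase) throws IOException {
    StringBuilder target = new StringBuilder(proxyBase)
        .append(req.getRequestURI());
    if (req.getQueryString() != null) {
      target.append('?').append(req.getQueryString());  // e.g. keeps "id=0"
    }
    resp.sendRedirect(target.toString());
  }
}
{code}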






[jira] [Commented] (YARN-5705) Show timeline data from ATS v2 in new web UI

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450763#comment-16450763
 ] 

Hudson commented on YARN-5705:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-5705. Show timeline data from ATS v2 in new web UI. Contributed by (xyao: 
rev 29cbb8cb7733008b82ed4ecdd4fa1c946d5d8482)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/timeline-error-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-entity.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-flowrun.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-app-attempt.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-app-flowrun.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/serializers/yarn-timeline-appattempt-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/serializers/yarn-timeline-container-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-flowrun-brief.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-flowrun/metrics-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app-attempt.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/application.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/timeline-error.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-flow.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-flow/runs-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-flowrun-brief.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-timeline-container.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-flowrun/metrics.hbs
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-container.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-flow-activity.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/yarn-timeline-container-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-timeline-container.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/models/yarn-entity.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/mixins/app-attempt.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-flow/runs.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-flowrun-brief.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-flowrun.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-queue.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/templates/yarn-flowrun-metric.hbs
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/models/yarn-timeline-appattempt-test.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/yarn-app.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-app-attempt.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/routes/timeline-error.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/utils/converter.js
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/controllers/yarn-services.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-flow/info-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/adapters/yarn-timeline-container-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/routes/yarn-flow/runs-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/adapters/yarn-flow-activity.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/app/serializers/yarn-entity.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/integration/components/simple-bar-chart-test.js
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/tests/unit/controllers/yarn-flowrun/info-test.js
* (add) 
hadoop-ya

[jira] [Commented] (YARN-6141) ppc64le on Linux doesn't trigger __linux get_executable codepath

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450784#comment-16450784
 ] 

Hudson commented on YARN-6141:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6141. ppc64le on Linux doesn't trigger __linux get_executable (xyao: rev 
70fac5d1d53983c59516afd410e0b066a46c1232)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/native/container-executor/impl/get_executable.c


> ppc64le on Linux doesn't trigger __linux get_executable codepath
> 
>
> Key: YARN-6141
> URL: https://issues.apache.org/jira/browse/YARN-6141
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha4
> Environment: $ uname -a
> Linux f8eef0f055cf 3.16.0-30-generic #40~14.04.1-Ubuntu SMP Thu Jan 15 
> 17:42:36 UTC 2015 ppc64le ppc64le ppc64le GNU/Linux
>Reporter: Sonia Garudi
>Assignee: Ayappan
>Priority: Major
>  Labels: ppc64le
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6141.patch
>
>
> On ppc64le architecture, the build fails in the 'Hadoop YARN NodeManager' 
> project with the below error:
> Cannot safely determine executable path with a relative HADOOP_CONF_DIR on 
> this operating system.
> [WARNING]  #error Cannot safely determine executable path with a relative 
> HADOOP_CONF_DIR on this operating system.
> [WARNING]   ^
> [WARNING] make[2]: *** 
> [CMakeFiles/container.dir/main/native/container-executor/impl/get_executable.c.o]
>  Error 1
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] make[1]: *** [CMakeFiles/container.dir/all] Error 2
> [WARNING] make: *** [all] Error 2
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> Cmake version used :
> $ /usr/bin/cmake --version
> cmake version 2.8.12.2






[jira] [Commented] (YARN-6618) TestNMLeveldbStateStoreService#testCompactionCycle can fail if compaction occurs more than once

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450755#comment-16450755
 ] 

Hudson commented on YARN-6618:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6618. TestNMLeveldbStateStoreService#testCompactionCycle can fail (xyao: 
rev 3b481de001bd09f2ecffa63efcf00402dded11ea)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/recovery/TestNMLeveldbStateStoreService.java


> TestNMLeveldbStateStoreService#testCompactionCycle can fail if compaction 
> occurs more than once
> ---
>
> Key: YARN-6618
> URL: https://issues.apache.org/jira/browse/YARN-6618
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6618.001.patch
>
>
> The testCompactionCycle unit test is verifying that the compaction cycle 
> occurs after startup, but in rare cases the compaction cycle can occur more 
> than once, which fails the test.  The unit test needs to account for this.
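A minimal sketch of how a test can tolerate an extra cycle, assuming a Mockito-style mock (the interface and names are illustrative, not the actual state-store test):

{code:java}
import static org.mockito.Mockito.atLeastOnce;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.Test;

public class CompactionCycleSketchTest {
  /** Illustrative stand-in for the state store's database handle. */
  public interface Db { void compact(); }

  @Test
  public void compactionMayRunMoreThanOnce() {
    Db db = mock(Db.class);
    db.compact();   // the expected startup compaction
    db.compact();   // a rare second cycle must not fail the test
    // Assert "at least once" rather than "exactly once".
    verify(db, atLeastOnce()).compact();
  }
}
{code}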






[jira] [Commented] (YARN-6627) Use deployed webapp folder to launch new YARN UI

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450761#comment-16450761
 ] 

Hudson commented on YARN-6627:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6627. Use deployed webapp folder to launch new YARN UI. Contributed (xyao: 
rev 952b26d7c5cff17c5dffbe304963ed6e3e182b0c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* (edit) hadoop-assemblies/src/main/resources/assemblies/hadoop-yarn-dist.xml


> Use deployed webapp folder to launch new YARN UI
> 
>
> Key: YARN-6627
> URL: https://issues.apache.org/jira/browse/YARN-6627
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-ui-v2
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6627.0001.patch, YARN-6627.0002.patch, 
> YARN-6627.0003.patch, YARN-6627.branch-2.001.patch
>
>
> Currently the new UI war file is placed in the share/hadoop/yarn folder. 
> Along with this, it's better to have the ui2 folder placed under 
> share/hadoop/yarn/webapps so that the UI can be launched from a defined 
> folder.
> This will also help with making some custom config related to the UI.
> cc/[~jianhe]






[jira] [Commented] (YARN-6540) Resource Manager is spelled "Resource Manger" in ResourceManagerRestart.md and ResourceManagerHA.md

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6540?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450757#comment-16450757
 ] 

Hudson commented on YARN-6540:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6540. Resource Manager is spelled "Resource Manger" in (xyao: rev 
9bc4df69ee75d5c6b134f518dbec781a64859998)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerHA.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/ResourceManagerRestart.md


> Resource Manager is spelled "Resource Manger" in ResourceManagerRestart.md 
> and ResourceManagerHA.md
> ---
>
> Key: YARN-6540
> URL: https://issues.apache.org/jira/browse/YARN-6540
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: site
>Reporter: Grant Sohn
>Assignee: Grant Sohn
>Priority: Trivial
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6540.1.patch
>
>







[jira] [Commented] (YARN-6598) History server getApplicationReport NPE when fetching report for pre-2.8 job

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450732#comment-16450732
 ] 

Hudson commented on YARN-6598:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6598. History server getApplicationReport NPE when fetching report (xyao: 
rev 229cb89c317f31e2f64ba5b6ff0ae817ad4aa00c)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/applicationhistoryservice/ApplicationHistoryManagerOnTimelineStore.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/test/java/org/apache/hadoop/yarn/server/applicationhistoryservice/TestApplicationHistoryManagerOnTimelineStore.java


> History server getApplicationReport NPE when fetching report for pre-2.8 job
> 
>
> Key: YARN-6598
> URL: https://issues.apache.org/jira/browse/YARN-6598
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.0
>Reporter: Jason Lowe
>Assignee: Jason Lowe
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-alpha4, 2.8.2
>
> Attachments: YARN-6598-branch-2.8.001.patch, YARN-6598.001.patch
>
>
> ApplicationHistoryManagerOnTimelineStore#convertToApplicationReport can NPE 
> for a job that was run prior to the cluster upgrading to 2.8.  It blindly 
> assumes preemption metrics are present when CPU metrics are present, and when 
> they are not, it triggers the NPE.
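A hedged sketch of the defensive pattern this implies (the map, key, and helper names are illustrative assumptions, not the actual patch):

{code:java}
import java.util.Map;

// Illustrative helper: read a preemption metric only if it is present,
// falling back to 0 for pre-2.8 history entities that never recorded it.
final class PreemptionMetricSketch {
  static long preemptedMb(Map<String, Object> entityInfo, String metricKey) {
    Object raw = entityInfo.get(metricKey);
    return (raw instanceof Number) ? ((Number) raw).longValue() : 0L;
  }
}
{code}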






[jira] [Commented] (YARN-6380) FSAppAttempt keeps redundant copy of the queue

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450722#comment-16450722
 ] 

Hudson commented on YARN-6380:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6380. FSAppAttempt keeps redundant copy of the queue (xyao: rev 
37ea21449ce0ca99698d8ecebb0ce81dc7cad01d)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java


> FSAppAttempt keeps redundant copy of the queue
> --
>
> Key: YARN-6380
> URL: https://issues.apache.org/jira/browse/YARN-6380
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.0.0-alpha2
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6380.001.patch, YARN-6380.002.patch, 
> YARN-6380.003.patch, YARN-6380.004.patch, YARN-6380.005.patch
>
>
> The {{FSAppAttempt}} class defines its own {{fsQueue}} variable that is a 
> second copy of the {{SchedulerApplicationAttempt}}'s {{queue}} variable.  
> Aside from being redundant, it's also a bug, because when moving 
> applications, we only update the {{SchedulerApplicationAttempt}}'s {{queue}}, 
> not the {{FSAppAttempt}}'s {{fsQueue}}.
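A simplified sketch of the de-duplication this suggests (class names are illustrative stand-ins, not the actual patch): derive the typed queue from the one authoritative field instead of caching a second copy.

{code:java}
// Illustrative stand-ins for the scheduler classes named above.
class QueueSketch { }

class FairQueueSketch extends QueueSketch { }

class AppAttemptSketch {
  private QueueSketch queue;                 // single source of truth
  QueueSketch getQueue() { return queue; }
  void move(QueueSketch newQueue) { this.queue = newQueue; }
}

class FairAppAttemptSketch extends AppAttemptSketch {
  // No duplicate fsQueue field: a cached copy would go stale on move().
  FairQueueSketch getFairQueue() { return (FairQueueSketch) getQueue(); }
}
{code}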






[jira] [Commented] (YARN-6580) Incorrect logger for FairSharePolicy

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450727#comment-16450727
 ] 

Hudson commented on YARN-6580:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6580. Incorrect logger for FairSharePolicy. (Vrushali C via Haibo (xyao: 
rev 15c7526e2cd3ce2098b786fcc5366379065560fb)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/policies/FairSharePolicy.java


> Incorrect logger for FairSharePolicy
> 
>
> Key: YARN-6580
> URL: https://issues.apache.org/jira/browse/YARN-6580
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0, 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Vrushali C
>Priority: Minor
>  Labels: newbie++
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6580.001.patch
>
>
> {code}
> public class FairSharePolicy extends SchedulingPolicy {
>   private static final Log LOG = LogFactory.getLog(FifoPolicy.class);
> {code}
> should be {{LogFactory.getLog(FairSharePolicy.class);}}






[jira] [Commented] (YARN-6535) Program needs to exit when SLS finishes.

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450739#comment-16450739
 ] 

Hudson commented on YARN-6535:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6535. Program needs to exit when SLS finishes. (yufeigu via (xyao: rev 
3defe7d85bef65a7d7bbdb821fc2f1f0d4381519)
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java


> Program needs to exit when SLS finishes. 
> -
>
> Key: YARN-6535
> URL: https://issues.apache.org/jira/browse/YARN-6535
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Affects Versions: 3.0.0-alpha2
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6535.001.patch, YARN-6535.002.patch, 
> YARN-6535.003.patch
>
>
> The program needs to exit when SLS finishes, except in unit tests.
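A minimal sketch of that behavior (names are illustrative, not necessarily the committed change):

{code:java}
// Exit explicitly once the simulation completes, but let unit tests opt
// out so the JVM running the test suite is not killed.
final class SimulatorExitSketch {
  static boolean exitAtFinish = true;   // unit tests set this to false

  static void onSimulationDone() {
    // ... flush metrics and stop runner threads here ...
    if (exitAtFinish) {
      // Lingering non-daemon threads would otherwise keep the JVM alive.
      System.exit(0);
    }
  }
}
{code}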






[jira] [Commented] (YARN-6571) Fix JavaDoc issues in SchedulingPolicy

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450715#comment-16450715
 ] 

Hudson commented on YARN-6571:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6571. Fix JavaDoc issues in SchedulingPolicy (Contributed by Weiwei (xyao: 
rev 5a7176ad3b54850a2ce3fbc4b969e46f50704d54)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/SchedulingPolicy.java


> Fix JavaDoc issues in SchedulingPolicy
> --
>
> Key: YARN-6571
> URL: https://issues.apache.org/jira/browse/YARN-6571
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Weiwei Yang
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6571.001.patch, YARN-6571.002.patch, 
> YARN-6571.003.patch
>
>
> There are several javadoc issues:
> * Class JavaDoc is missing.
> * {{getInstance()}} is missing {{@return}} and {{@param}} tags.
> * {{parse()}} is missing {{@return}} tag and description for {{@throws}} tag.
> * {{checkIfUsageOverFairShare()}} is missing a period at the end of the first 
> sentence.
> * {{getHeadroom()}} should use {{@code}} instead of HTML <code> tags.
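For illustration, a method documented in the style the list asks for might look like this (the method and its tags are illustrative, not the actual SchedulingPolicy code):

{code:java}
final class PolicyDocSketch {
  /**
   * Parses a policy name into a policy instance.
   *
   * @param policyName the configured name of the policy
   * @return the policy registered under {@code policyName}
   * @throws IllegalArgumentException if the name matches no known policy
   */
  static Object parse(String policyName) {
    if ("fair".equals(policyName)) {
      return new Object();  // placeholder for the real policy instance
    }
    throw new IllegalArgumentException("unknown policy: " + policyName);
  }
}
{code}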






[jira] [Commented] (YARN-6473) Create ReservationInvariantChecker to validate ReservationSystem + Scheduler operations

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450716#comment-16450716
 ] 

Hudson commented on YARN-6473:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6473. Create ReservationInvariantChecker to validate (xyao: rev 
e66c4305107e73c1d4fab595c7da689c6d7fe107)
* (add) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/TestReservationSystemInvariants.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/monitor/invariants/ReservationInvariantsChecker.java


> Create ReservationInvariantChecker to validate ReservationSystem + Scheduler 
> operations
> ---
>
> Key: YARN-6473
> URL: https://issues.apache.org/jira/browse/YARN-6473
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>Priority: Major
> Fix For: 3.0.0-alpha4
>
> Attachments: YARN-6473.v0.patch, YARN-6473.v1.patch, 
> YARN-6473.v2.patch
>
>
> This JIRA tracks an application of YARN-6451 ideas to the ReservationSystem. 
> It is particularly useful for creating integration tests, or for test 
> clusters, where we can continuously (if at some cost) check that the 
> ReservationSystem + Scheduler are operating as expected.






[jira] [Commented] (YARN-6563) ConcurrentModificationException in TimelineCollectorManager while stopping RM

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450700#comment-16450700
 ] 

Hudson commented on YARN-6563:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6563 ConcurrentModificationException in TimelineCollectorManager (xyao: 
rev ae743ff258f041e495d823a1494571d32bf47cc5)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice/src/main/java/org/apache/hadoop/yarn/server/timelineservice/collector/TimelineCollectorManager.java


> ConcurrentModificationException in TimelineCollectorManager while stopping RM
> -
>
> Key: YARN-6563
> URL: https://issues.apache.org/jira/browse/YARN-6563
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Haibo Chen
>Priority: Major
> Fix For: 2.9.0, YARN-5355, YARN-5355-branch-2, 3.0.0-alpha4
>
> Attachments: YARN-6563.00.patch
>
>
> A ConcurrentModificationException is seen while stopping the RM when ATSv2 
> is enabled.
> {noformat}
> 2017-05-05 15:04:11,563 WARN org.apache.hadoop.service.CompositeService: When 
> stopping the service 
> org.apache.hadoop.yarn.server.resourcemanager.timelineservice.RMTimelineCollectorManager
>  : java.util.ConcurrentModificationException
> java.util.ConcurrentModificationException
>   at java.util.HashMap$HashIterator.nextNode(HashMap.java:1437)
>   at java.util.HashMap$ValueIterator.next(HashMap.java:1466)
>   at 
> org.apache.hadoop.yarn.server.timelineservice.collector.TimelineCollectorManager.serviceStop(TimelineCollectorManager.java:222)
>   at 
> org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
>   at 
> org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
>   at 
> org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
>   at 
> org.apache.hadoop.service.CompositeService.stop(CompositeService.java:157)
>   at 
> org.apache.hadoop.service.CompositeService.serviceStop(CompositeService.java:131)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStop(ResourceManager.java:1285)
> {noformat}
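The stack trace shows a HashMap value iterator failing mid-iteration. A hedged sketch of the usual fix for this failure mode (not necessarily the committed patch):

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;

// Stop collectors from a snapshot of the map's values, so a concurrent
// put/remove cannot invalidate the iterator mid-loop.
final class CollectorStopSketch {
  private final Map<String, AutoCloseable> collectors = new HashMap<>();

  void serviceStop() throws Exception {
    for (AutoCloseable collector : new ArrayList<>(collectors.values())) {
      collector.close();  // removal from the live map no longer breaks iteration
    }
  }
}
{code}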






[jira] [Commented] (YARN-6306) NMClient API change for container upgrade

2018-04-24 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16450737#comment-16450737
 ] 

Hudson commented on YARN-6306:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14057 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14057/])
YARN-6306. NMClient API change for container upgrade. Contributed by (xyao: rev 
f8be02703a7df5ec59cd070584b3e126b3d6c0ae)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestNMClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/NMClientAsync.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/NMClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/NMClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/async/impl/TestNMClientAsync.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/container/ContainerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/async/impl/NMClientAsyncImpl.java


> NMClient API change for container upgrade
> -
>
> Key: YARN-6306
> URL: https://issues.apache.org/jira/browse/YARN-6306
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Arun Suresh
>Priority: Major
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: YARN-6306.001.patch, YARN-6306.002.patch, 
> YARN-6306.003.patch, YARN-6306.004.patch
>
>
> This JIRA tracks the addition of the Upgrade API (Re-Initialize, Restart, 
> Rollback and Commit) to NMClient and NMClientAsync.
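A hedged sketch of how the new calls fit together; the method names follow this JIRA's summary, but the exact signatures may differ from the committed API, and the health probe is an illustrative stub:

{code:java}
import org.apache.hadoop.yarn.api.records.ContainerId;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.client.api.NMClient;

// Upgrade flow: re-initialize with a new launch context, then either
// commit on success or roll back on failure.
final class UpgradeFlowSketch {
  static void upgrade(NMClient nmClient, ContainerId id,
      ContainerLaunchContext newContext) throws Exception {
    nmClient.reInitializeContainer(id, newContext, /* autoCommit */ false);
    if (containerIsHealthy(id)) {        // illustrative health probe
      nmClient.commitLastReInitialization(id);
    } else {
      nmClient.rollbackLastReInitialization(id);
    }
  }

  static boolean containerIsHealthy(ContainerId id) {
    return true;  // stub for the sketch
  }
}
{code}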





