[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584462#comment-15584462
 ] 

Hadoop QA commented on YARN-4597:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 2s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 29s {color} 
| {color:red} root generated 2 new + 700 unchanged - 2 fixed = 702 total (was 
702) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 45s 
{color} | {color:red} root: The patch generated 37 new + 958 unchanged - 14 
fixed = 995 total (was 972) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 0 new + 236 unchanged - 1 fixed = 236 total (was 237) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-yarn-server-tests in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | 

[jira] [Commented] (YARN-5746) The state of the parentQueue and its childQueues should be synchronized.

2016-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584155#comment-15584155
 ] 

Hadoop QA commented on YARN-5746:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 67 unchanged - 0 fixed = 70 total (was 67) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 9s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMAdminService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833859/YARN-5746.1.patch |
| JIRA Issue | YARN-5746 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1f1c6163b531 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b61fb26 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13418/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13418/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/13418/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13418/testReport/ |
| modules | C: 

[jira] [Commented] (YARN-5694) ZKRMStateStore should only start its verification thread when in HA and failover is not embedded

2016-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584153#comment-15584153
 ] 

Hadoop QA commented on YARN-5694:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
11s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} branch-2.7 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} branch-2.7 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 2s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in branch-2.7 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} branch-2.7 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} branch-2.7 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 227 unchanged - 4 fixed = 228 total (was 231) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1290 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 34s 
{color} | {color:red} The patch has 70 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 26s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.8.0_101. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 32s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 131m 37s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK 

[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584119#comment-15584119
 ] 

ASF GitHub Bot commented on YARN-4597:
--

GitHub user xslogic opened a pull request:

https://github.com/apache/hadoop/pull/143

YARN-4597: initial commit



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/xslogic/hadoop YARN-4597

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/143.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #143


commit 44f42805c78a2a605889e5ff757a9996525b99e5
Author: Arun Suresh 
Date:   2016-10-18T01:49:53Z

YARN-4597: initial commit




> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5718) TimelineClient (and other places in YARN) shouldn't over-write HDFS client retry settings which could cause unexpected behavior

2016-10-17 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584116#comment-15584116
 ] 

Junping Du commented on YARN-5718:
--

Just filed YARN-5748. We can have more discussion there.

> TimelineClient (and other places in YARN) shouldn't over-write HDFS client 
> retry settings which could cause unexpected behavior
> ---
>
> Key: YARN-5718
> URL: https://issues.apache.org/jira/browse/YARN-5718
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, timelineclient
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: YARN-5718-v2.1.patch, YARN-5718-v2.patch, YARN-5718.patch
>
>
> In one HA cluster, after the NN failed over, we noticed that jobs were 
> failing because TimelineClient could not retry the connection to the proper 
> NN. This is because we overwrite HDFS client settings, hard-coding the retry 
> policy to be enabled, which conflicts with the NN failover case - the HDFS 
> client should fail fast so it can retry on another NN.
> We shouldn't assume any retry policy for the HDFS client anywhere in YARN. 
> This should stay consistent with the HDFS settings, which use different 
> retry policies in different deployments. Thus, we should clean up these 
> hard-coded settings in YARN, including: FileSystemTimelineWriter, 
> FileSystemRMStateStore and FileSystemNodeLabelsStore.
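The kind of hard-coded override in question looks roughly like the snippet 
below. This is an illustrative sketch, not the actual code in those classes; 
{{dfs.client.retry.policy.enabled}} is the HDFS client key involved.

{code}
// Illustrative example of the problematic pattern: a YARN component forcing
// an HDFS client retry policy regardless of how the cluster is deployed.
Configuration conf = new Configuration(baseConf);
conf.setBoolean("dfs.client.retry.policy.enabled", true);  // hard-coded: bad
// In an NN HA deployment this conflicts with failover: the client should
// fail fast so it can retry against the other NameNode. The cleanup is to
// leave retry settings to the deployment's own HDFS configuration.
{code}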



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5748) Backport YARN-5718 to branch-2

2016-10-17 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584113#comment-15584113
 ] 

Junping Du commented on YARN-5748:
--

I don't think option 1 should be our choice. I would prefer option 3 over 2. 
Thoughts?

> Backport YARN-5718 to branch-2
> --
>
> Key: YARN-5748
> URL: https://issues.apache.org/jira/browse/YARN-5748
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Junping Du
>
> In YARN-5718, we identified several unnecessary configs that over-write HDFS 
> client behavior in several YARN components (FSRMStore, TimelineClient, 
> NodeLabelStore, etc.) and cause job failures in some cases (NN HA, etc.) - 
> that is definitely a bug. In YARN-5718, we proposed to remove the config, as 
> it was never supposed to work; that change is already committed to trunk, 
> since the alpha stage has more flexibility for incompatible changes. In 
> branch-2, we want to play it a bit safer and have more discussion. There are 
> several options here:
> 1. Don't fix anything and let the bug exist.
> 2. Fix the bug but keep the configuration, or mark it deprecated and explain 
> that it is no longer supposed to work.
> 3. Exactly like YARN-5718, fix the bug and remove the unnecessary 
> configuration.
> This ticket is filed for more discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5748) Backport YARN-5718 to branch-2

2016-10-17 Thread Junping Du (JIRA)
Junping Du created YARN-5748:


 Summary: Backport YARN-5718 to branch-2
 Key: YARN-5748
 URL: https://issues.apache.org/jira/browse/YARN-5748
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Junping Du
Assignee: Junping Du


In YARN-5718, we identified several unnecessary configs that over-write HDFS 
client behavior in several YARN components (FSRMStore, TimelineClient, 
NodeLabelStore, etc.) and cause job failures in some cases (NN HA, etc.) - 
that is definitely a bug. In YARN-5718, we proposed to remove the config, as 
it was never supposed to work; that change is already committed to trunk, 
since the alpha stage has more flexibility for incompatible changes. In 
branch-2, we want to play it a bit safer and have more discussion. There are 
several options here:
1. Don't fix anything and let the bug exist.
2. Fix the bug but keep the configuration, or mark it deprecated and explain 
that it is no longer supposed to work.
3. Exactly like YARN-5718, fix the bug and remove the unnecessary configuration.
This ticket is filed for more discussion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5718) TimelineClient (and other places in YARN) shouldn't over-write HDFS client retry settings which could cause unexpected behavior

2016-10-17 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584084#comment-15584084
 ] 

Junping Du commented on YARN-5718:
--

Thanks [~xgong] for the review and comments. Filing a separate one for 
branch-2/branch-2.8 sounds good to me. Will do it soon.

> TimelineClient (and other places in YARN) shouldn't over-write HDFS client 
> retry settings which could cause unexpected behavior
> ---
>
> Key: YARN-5718
> URL: https://issues.apache.org/jira/browse/YARN-5718
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, timelineclient
>Reporter: Junping Du
>Assignee: Junping Du
> Attachments: YARN-5718-v2.1.patch, YARN-5718-v2.patch, YARN-5718.patch
>
>
> In one HA cluster, after the NN failed over, we noticed that jobs were 
> failing because TimelineClient could not retry the connection to the proper 
> NN. This is because we overwrite HDFS client settings, hard-coding the retry 
> policy to be enabled, which conflicts with the NN failover case - the HDFS 
> client should fail fast so it can retry on another NN.
> We shouldn't assume any retry policy for the HDFS client anywhere in YARN. 
> This should stay consistent with the HDFS settings, which use different 
> retry policies in different deployments. Thus, we should clean up these 
> hard-coded settings in YARN, including: FileSystemTimelineWriter, 
> FileSystemRMStateStore and FileSystemNodeLabelsStore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3816) [Aggregation] App-level aggregation and accumulation for YARN system metrics

2016-10-17 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584034#comment-15584034
 ] 

Li Lu commented on YARN-3816:
-

Created YARN-5747 for a fix of the aggregation problem. This fix should 
target trunk. Linking the two issues. 

> [Aggregation] App-level aggregation and accumulation for YARN system metrics
> 
>
> Key: YARN-3816
> URL: https://issues.apache.org/jira/browse/YARN-3816
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Junping Du
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Fix For: 3.0.0-alpha1
>
> Attachments: Application Level Aggregation of Timeline Data.pdf, 
> YARN-3816-YARN-2928-v1.patch, YARN-3816-YARN-2928-v2.1.patch, 
> YARN-3816-YARN-2928-v2.2.patch, YARN-3816-YARN-2928-v2.3.patch, 
> YARN-3816-YARN-2928-v2.patch, YARN-3816-YARN-2928-v3.1.patch, 
> YARN-3816-YARN-2928-v3.patch, YARN-3816-YARN-2928-v4.patch, 
> YARN-3816-YARN-2928-v5.patch, YARN-3816-YARN-2928-v6.patch, 
> YARN-3816-YARN-2928-v7.patch, YARN-3816-YARN-2928-v8.patch, 
> YARN-3816-YARN-2928-v9.patch, YARN-3816-feature-YARN-2928.v4.1.patch, 
> YARN-3816-poc-v1.patch, YARN-3816-poc-v2.patch
>
>
> We need application-level aggregation of Timeline data:
> - To present end users aggregated states for each application, including: 
> resource (CPU, memory) consumption across all containers, number of 
> containers launched/completed/failed, etc. We need this for apps while they 
> are running as well as when they are done.
> - Also, framework-specific metrics, e.g. HDFS_BYTES_READ, should be 
> aggregated to show details of states at the framework level.
> - Aggregation at other levels (Flow/User/Queue) can be more efficient when 
> based on application-level aggregations rather than raw entity-level data, 
> since far fewer rows need to be scanned (after filtering out non-aggregated 
> entities such as events, configurations, etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5747) Application timeline metric aggregation in timeline v2 will lose the last round of aggregation when an application finishes

2016-10-17 Thread Li Lu (JIRA)
Li Lu created YARN-5747:
---

 Summary: Application timeline metric aggregation in timeline v2 
will lose the last round of aggregation when an application finishes
 Key: YARN-5747
 URL: https://issues.apache.org/jira/browse/YARN-5747
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelineserver
Reporter: Li Lu
Assignee: Li Lu


As discussed in YARN-3816, when an application finishes we should perform an 
extra round of application-level timeline aggregation. Otherwise, data posted 
after the last round of aggregation gets lost. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3816) [Aggregation] App-level aggregation and accumulation for YARN system metrics

2016-10-17 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584027#comment-15584027
 ] 

Li Lu commented on YARN-3816:
-

bq. Currently they emit them every 3 seconds, but I think we should do a time 
average on the NMTimelinePublisher and emit them less often. It may help in 
this regard.
Yes. And we may simply extend TimelineMetricOperation to support time-based 
accumulation operations, and then apply those operations when accumulating 
(see the sketch below). 
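For illustration only, such a time-based operation might look like the 
following standalone sketch; the real {{TimelineMetricOperation}} enum and its 
API may differ.

{code}
// Purely illustrative: a time-based accumulation operation in the spirit of
// extending TimelineMetricOperation; not the real enum or its API.
public enum AccumulationOpSketch {
  SUM {
    public double apply(double accumulated, double latest, long heldMs) {
      return accumulated + latest;
    }
  },
  TIME_WEIGHTED_SUM {
    public double apply(double accumulated, double latest, long heldMs) {
      // Accumulate value x time, e.g. CPU-seconds over the app's lifetime.
      return accumulated + latest * (heldMs / 1000.0);
    }
  };
  public abstract double apply(double accumulated, double latest, long heldMs);
}
{code}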

> [Aggregation] App-level aggregation and accumulation for YARN system metrics
> 
>
> Key: YARN-3816
> URL: https://issues.apache.org/jira/browse/YARN-3816
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Junping Du
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Fix For: 3.0.0-alpha1
>
> Attachments: Application Level Aggregation of Timeline Data.pdf, 
> YARN-3816-YARN-2928-v1.patch, YARN-3816-YARN-2928-v2.1.patch, 
> YARN-3816-YARN-2928-v2.2.patch, YARN-3816-YARN-2928-v2.3.patch, 
> YARN-3816-YARN-2928-v2.patch, YARN-3816-YARN-2928-v3.1.patch, 
> YARN-3816-YARN-2928-v3.patch, YARN-3816-YARN-2928-v4.patch, 
> YARN-3816-YARN-2928-v5.patch, YARN-3816-YARN-2928-v6.patch, 
> YARN-3816-YARN-2928-v7.patch, YARN-3816-YARN-2928-v8.patch, 
> YARN-3816-YARN-2928-v9.patch, YARN-3816-feature-YARN-2928.v4.1.patch, 
> YARN-3816-poc-v1.patch, YARN-3816-poc-v2.patch
>
>
> We need application-level aggregation of Timeline data:
> - To present end users aggregated states for each application, including: 
> resource (CPU, memory) consumption across all containers, number of 
> containers launched/completed/failed, etc. We need this for apps while they 
> are running as well as when they are done.
> - Also, framework-specific metrics, e.g. HDFS_BYTES_READ, should be 
> aggregated to show details of states at the framework level.
> - Aggregation at other levels (Flow/User/Queue) can be more efficient when 
> based on application-level aggregations rather than raw entity-level data, 
> since far fewer rows need to be scanned (after filtering out non-aggregated 
> entities such as events, configurations, etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3816) [Aggregation] App-level aggregation and accumulation for YARN system metrics

2016-10-17 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15584012#comment-15584012
 ] 

Sangjin Lee commented on YARN-3816:
---

{quote}
Would it be better to use a time-weighted average for aggregated metrics? For 
instance, we aggregate metrics every 15 sec, and in that period container 
metrics would be reported 4-5 times. Right now, we take the latest reported 
metrics, which means a momentary spike or very low value can influence the 
aggregated metric value. A time-weighted average for each container may avoid 
application-aggregated metrics being influenced by momentary blips in CPU 
usage. However, in a real scenario this may balance out when multiple 
containers are running concurrently.
{quote}

We also had a discussion on how often node managers should publish container 
metrics (YARN-4712 and YARN-4821). Currently they emit them every 3 seconds, 
but I think we should do a time average on the {{NMTimelinePublisher}} and emit 
them less often. It may help in this regard.
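For illustration, a time-weighted average over one aggregation window could 
look like this hedged sketch (not actual {{NMTimelinePublisher}} code; the 
sample values are made up):

{code}
// Hypothetical sketch: time-weighted average of container CPU samples within
// one 15s aggregation window, so a momentary spike at the sampling instant
// does not dominate the aggregated value.
public class TimeWeightedAvgSketch {
  static double timeWeightedAverage(long[] timesMs, double[] values,
      long windowEndMs) {
    double weightedSum = 0;
    long totalMs = 0;
    for (int i = 0; i < values.length; i++) {
      // Weight each sample by how long it remained the latest observation.
      long end = (i + 1 < values.length) ? timesMs[i + 1] : windowEndMs;
      weightedSum += values[i] * (end - timesMs[i]);
      totalMs += end - timesMs[i];
    }
    return totalMs == 0 ? 0 : weightedSum / totalMs;
  }

  public static void main(String[] args) {
    // 5 samples at 3s intervals inside a 15s window; the brief spike to 90
    // contributes only 3s of weight instead of defining the whole window.
    long[] t = {0, 3000, 6000, 9000, 12000};
    double[] cpu = {10, 12, 90, 11, 10};
    System.out.println(timeWeightedAverage(t, cpu, 15000)); // 26.6
  }
}
{code}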

> [Aggregation] App-level aggregation and accumulation for YARN system metrics
> 
>
> Key: YARN-3816
> URL: https://issues.apache.org/jira/browse/YARN-3816
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Junping Du
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Fix For: 3.0.0-alpha1
>
> Attachments: Application Level Aggregation of Timeline Data.pdf, 
> YARN-3816-YARN-2928-v1.patch, YARN-3816-YARN-2928-v2.1.patch, 
> YARN-3816-YARN-2928-v2.2.patch, YARN-3816-YARN-2928-v2.3.patch, 
> YARN-3816-YARN-2928-v2.patch, YARN-3816-YARN-2928-v3.1.patch, 
> YARN-3816-YARN-2928-v3.patch, YARN-3816-YARN-2928-v4.patch, 
> YARN-3816-YARN-2928-v5.patch, YARN-3816-YARN-2928-v6.patch, 
> YARN-3816-YARN-2928-v7.patch, YARN-3816-YARN-2928-v8.patch, 
> YARN-3816-YARN-2928-v9.patch, YARN-3816-feature-YARN-2928.v4.1.patch, 
> YARN-3816-poc-v1.patch, YARN-3816-poc-v2.patch
>
>
> We need application-level aggregation of Timeline data:
> - To present end users aggregated states for each application, including: 
> resource (CPU, memory) consumption across all containers, number of 
> containers launched/completed/failed, etc. We need this for apps while they 
> are running as well as when they are done.
> - Also, framework-specific metrics, e.g. HDFS_BYTES_READ, should be 
> aggregated to show details of states at the framework level.
> - Aggregation at other levels (Flow/User/Queue) can be more efficient when 
> based on application-level aggregations rather than raw entity-level data, 
> since far fewer rows need to be scanned (after filtering out non-aggregated 
> entities such as events, configurations, etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5746) The state of the parentQueue and its childQueues should be synchronized.

2016-10-17 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5746:

Attachment: YARN-5746.1.patch

> The state of the parentQueue and its childQueues should be synchronized.
> 
>
> Key: YARN-5746
> URL: https://issues.apache.org/jira/browse/YARN-5746
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5746.1.patch
>
>
> The state of the parentQueue and its childQueues needs to be synchronized: 
> * If the state of the parentQueue becomes STOPPED, the state of its 
> childQueues needs to become STOPPED as well. 
> * If we change the state of a queue to RUNNING, we should make sure the 
> state of all its ancestors is RUNNING. Otherwise, we need to fail this 
> operation (see the sketch below).
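For illustration, the invariant could be enforced along these lines. This is a 
hypothetical sketch only; {{CSQueue}} and {{QueueState}} are the existing YARN 
types, but the setters and helper methods here are invented and do not mirror 
the patch.

{code}
// Hypothetical sketch of the invariant described above; CSQueue is assumed
// to expose these accessors for illustration only.
void stopQueue(CSQueue queue) {
  // Stopping a parent stops every child transitively.
  queue.setState(QueueState.STOPPED);
  for (CSQueue child : queue.getChildQueues()) {
    stopQueue(child);
  }
}

void activateQueue(CSQueue queue) {
  // A queue may only move to RUNNING if every ancestor is RUNNING.
  for (CSQueue p = queue.getParent(); p != null; p = p.getParent()) {
    if (p.getState() != QueueState.RUNNING) {
      throw new IllegalStateException(
          "Cannot set " + queue.getQueueName() + " to RUNNING: ancestor "
          + p.getQueueName() + " is not RUNNING");
    }
  }
  queue.setState(QueueState.RUNNING);
}
{code}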



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-10-17 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4597:
--
Attachment: YARN-4597.003.patch

Updating the patch based on the thoughtful reviews from [~vvasudev], [~kasha] 
and [~jianhe].

bq. The methods for killing containers as needed all seem to be hardcoded to 
only consider allocated resources. Can we abstract it out further to allow for 
passing either allocation or utilization based on whether oversubscription is 
enabled?
In the latest patch, I've introduced a {{ResourceUtilizationManager}}; the 
default implementation accumulates the allocated container resources. This can 
be made pluggable once we have resource utilization.

bq. Can you explain why we need the synchronized block here - {code} +
synchronized (this.containersAllocation) {code}
Aah.. it's not required; I removed it, since access to this class is actually 
serialized.

bq. resourcesToFreeUp is initialized to container allocation on the node. 
Shouldn't it be initialized to zero? Maybe I am missing something
The algorithm is as follows. Assume an NM has 10 slots, all currently full 
(7 guaranteed and 3 opportunistic), and an incoming guaranteed container 
request arrives:
# Start with the currently utilized/allocated slots: 10
# Add to 1) any guaranteed containers yet to start: 10 + 1 (say 1 guaranteed 
request had come in before this) = 11
# Subtract from 2) the total resources available to all containers: 11 - 10 = 1
# From 3), keep subtracting the resources of running opportunistic containers 
(which have not already been marked to kill, in reverse startup order) till 
the value goes below 0.
# Kill all opportunistic containers identified in 4 (see the sketch below).
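A minimal sketch of that selection, assuming one slot per container for 
simplicity; the identifiers below are illustrative and do not mirror the 
actual patch code:

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the kill-selection algorithm described above.
public class OpportunisticKillSketch {
  public static void main(String[] args) {
    // Steps 1-3: allocated (10) + queued guaranteed (1) - capacity (10) = 1.
    long toFree = 10 + 1 - 10;
    // Running opportunistic container slot sizes, in startup order.
    List<Integer> oppSlots = Arrays.asList(1, 1, 1);
    List<Integer> killIndexes = new ArrayList<>();
    // Step 4: walk in reverse startup order until the deficit is covered.
    for (int i = oppSlots.size() - 1; i >= 0 && toFree > 0; i--) {
      killIndexes.add(i);
      toFree -= oppSlots.get(i);
    }
    // Step 5: with these numbers, exactly one opportunistic container dies.
    System.out.println("kill: " + killIndexes); // kill: [2]
  }
}
{code}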

bq.  I see there are two code paths leading to starting a container for when 
enough resources are available or not. Did you consider a single path where we 
queue containers directly and let another thread launch them?
Good point. I did think about that, but it would mean moving the logic into a 
dedicated thread that launches containers. I was thinking we keep it as it is 
for now, until we really warrant that complexity.

The tests (except {{TestContainer}}) all seem to run fine locally.



> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3816) [Aggregation] App-level aggregation and accumulation for YARN system metrics

2016-10-17 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583958#comment-15583958
 ] 

Li Lu commented on YARN-3816:
-

Hi [~varun_saxena], please see my comments inline...
bq. We do not aggregate the entities reported since the last aggregation run 
when the app collector finishes. Is this intentional? We would, however, miss 
only the last set of metrics, which should be fine.
That's not intentional... I remember this bug and I have the impression that I 
once worked on a fix, but it seems there is no JIRA to track this work. I'll 
open a JIRA and track the fix...
bq. We also have the aggregation interval fixed at 15 sec. Has it not been 
made configurable due to concerns about somebody setting it too low or too 
high?
Having a system-wide configuration may not be enough, since app running times 
vary a lot. So you're right that for now we're fixing the 15-sec interval to 
avoid misconfigurations. At the same time, we may want to explore different 
ways to allow applications to set their own config...
bq. Would it be better to use a time-weighted average for aggregated metrics?
I agree it is helpful. However, I believe this is slightly different from the 
"aggregation" we talk about here. As Sangjin mentioned before, "aggregation" in 
this JIRA mainly means applying an aggregation method to all *subparts'* 
metrics to get the parent's metric, like aggregating the CPU usage of all 
containers to get the CPU usage of the whole app attempt. 

What you've mentioned here, IIUC, is closer to the concept of "accumulation" 
as we discussed before. Accumulation applies an accumulative method to the 
same metric of the same timeline entity *across time*. We have not yet started 
the work on accumulation, but my feeling is we can make it work together with 
the aggregation framework without many changes to the code framework. 

> [Aggregation] App-level aggregation and accumulation for YARN system metrics
> 
>
> Key: YARN-3816
> URL: https://issues.apache.org/jira/browse/YARN-3816
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Junping Du
>Assignee: Li Lu
>  Labels: yarn-2928-1st-milestone
> Fix For: 3.0.0-alpha1
>
> Attachments: Application Level Aggregation of Timeline Data.pdf, 
> YARN-3816-YARN-2928-v1.patch, YARN-3816-YARN-2928-v2.1.patch, 
> YARN-3816-YARN-2928-v2.2.patch, YARN-3816-YARN-2928-v2.3.patch, 
> YARN-3816-YARN-2928-v2.patch, YARN-3816-YARN-2928-v3.1.patch, 
> YARN-3816-YARN-2928-v3.patch, YARN-3816-YARN-2928-v4.patch, 
> YARN-3816-YARN-2928-v5.patch, YARN-3816-YARN-2928-v6.patch, 
> YARN-3816-YARN-2928-v7.patch, YARN-3816-YARN-2928-v8.patch, 
> YARN-3816-YARN-2928-v9.patch, YARN-3816-feature-YARN-2928.v4.1.patch, 
> YARN-3816-poc-v1.patch, YARN-3816-poc-v2.patch
>
>
> We need application-level aggregation of Timeline data:
> - To present end users aggregated states for each application, including: 
> resource (CPU, memory) consumption across all containers, number of 
> containers launched/completed/failed, etc. We need this for apps while they 
> are running as well as when they are done.
> - Also, framework-specific metrics, e.g. HDFS_BYTES_READ, should be 
> aggregated to show details of states at the framework level.
> - Aggregation at other levels (Flow/User/Queue) can be more efficient when 
> based on application-level aggregations rather than raw entity-level data, 
> since far fewer rows need to be scanned (after filtering out non-aggregated 
> entities such as events, configurations, etc.).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4843) [Umbrella] Revisit YARN ProtocolBuffer int32 usages that need to upgrade to int64

2016-10-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4843:
--
Priority: Critical  (was: Major)

> [Umbrella] Revisit YARN ProtocolBuffer int32 usages that need to upgrade to 
> int64
> -
>
> Key: YARN-4843
> URL: https://issues.apache.org/jira/browse/YARN-4843
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0-alpha1
>Reporter: Wangda Tan
>Priority: Critical
>
> This JIRA is to track all int32 usages in YARN's ProtocolBuffer APIs that we 
> may need to update to int64.
> One example is the resource API. We use int32 for memory now; if a cluster 
> has 10k nodes, each with 210G of memory, we get a negative total cluster 
> memory.
> Other fields may also need to upgrade from int32 to int64. 
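The arithmetic is easy to check: YARN resources track memory in MB, so 10k 
nodes at 210 GB each overflow a signed 32-bit total (an illustrative demo, not 
YARN code):

{code}
// Demonstrates the int32 overflow described above: 10k nodes x 210 GB each,
// with memory tracked in MB, exceeds Integer.MAX_VALUE (2,147,483,647).
public class ClusterMemoryOverflow {
  public static void main(String[] args) {
    int nodes = 10_000;
    int memoryPerNodeMb = 210 * 1024;          // 215,040 MB per node
    int totalInt32 = nodes * memoryPerNodeMb;  // overflows to -2,144,567,296
    long totalInt64 = (long) nodes * memoryPerNodeMb;
    System.out.println("int32 total: " + totalInt32 + " MB");
    System.out.println("int64 total: " + totalInt64 + " MB"); // 2,150,400,000
  }
}
{code}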



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4843) [Umbrella] Revisit YARN ProtocolBuffer int32 usages that need to upgrade to int64

2016-10-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-4843:
--
Affects Version/s: 3.0.0-alpha1
 Target Version/s: 3.0.0-alpha2  (was: )

Hi folks, looks like there are still some more subtasks here. Do we think we 
can get them done for alpha2?

> [Umbrella] Revisit YARN ProtocolBuffer int32 usages that need to upgrade to 
> int64
> -
>
> Key: YARN-4843
> URL: https://issues.apache.org/jira/browse/YARN-4843
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0-alpha1
>Reporter: Wangda Tan
>
> This JIRA is to track all int32 usages in YARN's ProtocolBuffer APIs that we 
> may need to update to int64.
> One example is the resource API. We use int32 for memory now; if a cluster 
> has 10k nodes, each with 210G of memory, we get a negative total cluster 
> memory.
> Other fields may also need to upgrade from int32 to int64. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5709) Cleanup Curator-based leader election code

2016-10-17 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton reassigned YARN-5709:
--

Assignee: Daniel Templeton

> Cleanup Curator-based leader election code
> --
>
> Key: YARN-5709
> URL: https://issues.apache.org/jira/browse/YARN-5709
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Daniel Templeton
>Priority: Critical
>
> While reviewing YARN-5677 and YARN-5694, I noticed we could make the 
> Curator-based election code cleaner. It would be nicer to get this fixed in 
> 2.8 before we ship it, but this can be done at a later time as well. 
> # By EmbeddedElector, we meant it was running as part of the RM daemon. Since 
> the Curator-based elector is also running embedded, I feel the code should be 
> checking for {{!curatorBased}} instead of {{isEmbeddedElector}}.
> # {{LeaderElectorService}} should probably be named 
> {{CuratorBasedEmbeddedElectorService}} or some such.
> # The code that initializes the elector should be in the same place 
> irrespective of whether it is Curator-based or not. 
> # We seem to be caching the CuratorFramework instance in the RM. It makes 
> more sense for it to be in RMContext. If others are okay with it, we might 
> even be better off having an {{RMContext#getCurator()}} method to lazily 
> create the Curator framework and then cache it (see the sketch below). 
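For illustration, point 4 could look roughly like this sketch. The field and 
connect-string names are hypothetical, and this is not actual RMContext code; 
the Curator calls ({{CuratorFrameworkFactory.newClient}}, 
{{ExponentialBackoffRetry}}) are the standard Apache Curator API.

{code}
// Illustrative sketch of RMContext#getCurator() lazily creating and caching
// the CuratorFramework. Requires org.apache.curator.framework.CuratorFramework,
// CuratorFrameworkFactory and org.apache.curator.retry.ExponentialBackoffRetry;
// zkConnectString is a hypothetical field.
private volatile CuratorFramework curator;

public CuratorFramework getCurator() {
  if (curator == null) {
    synchronized (this) {
      if (curator == null) {  // double-checked locking on a volatile field
        curator = CuratorFrameworkFactory.newClient(
            zkConnectString, new ExponentialBackoffRetry(1000, 3));
        curator.start();
      }
    }
  }
  return curator;
}
{code}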



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5685) Non-embedded HA failover is broken

2016-10-17 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583940#comment-15583940
 ] 

Daniel Templeton commented on YARN-5685:


Now that I've worked out the kinks on YARN-5677 and YARN-5694, I think the 
correct approach here is to deprecate the embedded property and complain loudly 
if anyone sets it to false, i.e. fail.  That's much preferable to having all 
RMs come up in standby.  Unless someone disagrees, I'll post a patch for that 
shortly.

> Non-embedded HA failover is broken
> --
>
> Key: YARN-5685
> URL: https://issues.apache.org/jira/browse/YARN-5685
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.9.0, 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>
> If HA is enabled with automatic failover enabled and embedded failover 
> disabled, all RMs all come up in standby state.  To make one of them active, 
> the {{--forcemanual}} flag must be used when manually triggering the state 
> change.  Should the active go down, the standby will not become active and 
> must be manually transitioned with the {{--forcemanual}} flag.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5694) ZKRMStateStore should only start its verification thread when in HA and failover is not embedded

2016-10-17 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5694:
---
Attachment: YARN-5694.branch-2.7.003.patch
YARN-5694.003.patch

Looks like I got my patches crossed.  Let's try this again.

> ZKRMStateStore should only start its verification thread when in HA and 
> failover is not embedded
> 
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.branch-2.7.001.patch, 
> YARN-5694.branch-2.7.002.patch, YARN-5694.branch-2.7.003.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583884#comment-15583884
 ] 

Hadoop QA commented on YARN-4734:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 6s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
7s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 7s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 25s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
12s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 36 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 9s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-assemblies hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 23s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 109m 7s {color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 183m 40s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
|   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
|   | 

[jira] [Commented] (YARN-5694) ZKRMStateStore should only start its verification thread when in HA and failover is not embedded

2016-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583869#comment-15583869
 ] 

Hadoop QA commented on YARN-5694:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 9s {color} 
| {color:red} YARN-5694 does not apply to branch-2.7. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833837/YARN-5694.branch-2.7.002.patch
 |
| JIRA Issue | YARN-5694 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13415/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> ZKRMStateStore should only start its verification thread when in HA and 
> failover is not embedded
> 
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.branch-2.7.001.patch, YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5743) [Atsv2] Publish queue name and RMAppMetrics to ATS

2016-10-17 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583866#comment-15583866
 ] 

Sangjin Lee commented on YARN-5743:
---

Thanks for filing this issue and for the patch, [~rohithsharma]. It looks 
quite reasonable. Please address Varun's comments and we should be close.

Do we want this on the YARN-5355 branch only or on the trunk as well?

> [Atsv2] Publish queue name and RMAppMetrics to ATS
> --
>
> Key: YARN-5743
> URL: https://issues.apache.org/jira/browse/YARN-5743
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5743.patch
>
>
> The app queue name is not published to ATSv2. 
> And RMAppMetrics publishes only CPU and memory. There are many more things 
> to publish from app metrics, such as 
> resourcePreempted, 
> numNonAMContainersPreempted, and 
> numAMContainersPreempted.
> Also, RMAppMetrics needs to be published to the app's metrics rather than 
> its info. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5184) Fix up incompatible changes introduced in YARN-2882

2016-10-17 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583862#comment-15583862
 ] 

Andrew Wang commented on YARN-5184:
---

Looks like this is something we should address in 3.0.0-alpha2.

> Fix up incompatible changes introduced in YARN-2882
> ---
>
> Key: YARN-5184
> URL: https://issues.apache.org/jira/browse/YARN-5184
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
>
> YARN-2882 broke compatibility by adding abstract methods to a Public-Stable 
> class - ContainerStatus. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5184) Fix up incompatible changes introduced in YARN-2882

2016-10-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5184:
--
Target Version/s: 2.9.0, 3.0.0-alpha2  (was: 2.9.0)

> Fix up incompatible changes introduced in YARN-2882
> ---
>
> Key: YARN-5184
> URL: https://issues.apache.org/jira/browse/YARN-5184
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
>
> YARN-2882 broke compatibility by adding abstract methods to a Public-Stable 
> class - ContainerStatus. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4220) [Storage implementation] Support getEntities with only Application id but no userId

2016-10-17 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583852#comment-15583852
 ] 

Li Lu commented on YARN-4220:
-

Now that we have finished UI integration and streamlined most UI use cases in 
YARN-5561, I believe this JIRA can be closed. Is there still a special need to 
support the use case proposed in this JIRA? 

> [Storage implementation] Support getEntities with only Application id but no 
> userId
> ---
>
> Key: YARN-4220
> URL: https://issues.apache.org/jira/browse/YARN-4220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: YARN-5355
>
> Currently we're enforcing flow and flowrun id to be non-null values on 
> {{getEntities}}. We can actually query the appToFlow table to figure out an 
> application's flow id and flowrun id if they're missing. This will simplify 
> normal queries. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-3914) Entity created time should be part of the row key of entity table

2016-10-17 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu resolved YARN-3914.
-
Resolution: Won't Fix

Closing this issue as discussed before. The community reached an agreement not 
to move forward on this issue. BTW, part of the problems raised in this issue 
can be addressed by the entity prefix design proposed in YARN-5715. 

> Entity created time should be part of the row key of entity table
> -
>
> Key: YARN-3914
> URL: https://issues.apache.org/jira/browse/YARN-3914
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
>  Labels: YARN-5355
>
> Entity created time should be part of the row key of the entity table, between 
> entity type and entity id. The reason to have it is to index the entities. 
> Though we cannot index the entities for all kinds of information, indexing 
> them according to the created time is very necessary. Without it, every query 
> for the latest entities that belong to an application and a type will scan 
> through all the entities that belong to them, for example when we want to list 
> the 100 most recently started containers in a YARN app.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5694) ZKRMStateStore should only start its verification thread when in HA failover is not embedded

2016-10-17 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5694:
---
Attachment: YARN-5694.branch-2.7.002.patch
YARN-5694.002.patch

These patches make the verify status thread non-optional and also resolve a 
locking issue when stopping the service.  I'm not sure what to do about tests 
yet...
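
For context, a rough sketch of the start condition the description argues for 
(note the 002 patches instead start the thread unconditionally; the {{HAUtil}} 
helper names are an assumption here, not code from the patch):

{code:title=Sketch of the originally intended condition}
// Start the verify-active-status thread only when HA is enabled but
// automatic failover is not handled by the embedded elector.
if (HAUtil.isHAEnabled(conf) && !HAUtil.isAutomaticFailoverEmbedded(conf)) {
  verifyActiveStatusThread = new VerifyActiveStatusThread();
  verifyActiveStatusThread.start();
}
{code}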

> ZKRMStateStore should only start its verification thread when in HA failover 
> is not embedded
> 
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.branch-2.7.001.patch, YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5733) Run PerNodeTimelineCollectorsAuxService in a system container

2016-10-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen resolved YARN-5733.
--
Resolution: Duplicate

> Run PerNodeTimelineCollectorsAuxService in a system container
> -
>
> Key: YARN-5733
> URL: https://issues.apache.org/jira/browse/YARN-5733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> This is mostly for tracking YARN-5732 (Run auxiliary services in system 
> containers), because under the current implementation, TimelineCollectorManager 
> is implemented as an auxiliary service. We'd expect YARN-5732 to be transparent 
> to all auxiliary services, so there should be minimal work here other than 
> verification.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-3220) Create a Service in the RM to concatenate aggregated logs

2016-10-17 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter resolved YARN-3220.
-
Resolution: Won't Fix

Closing this as "Won't Fix" given we have MAPREDUCE-6415.

> Create a Service in the RM to concatenate aggregated logs
> -
>
> Key: YARN-3220
> URL: https://issues.apache.org/jira/browse/YARN-3220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>
> Create an {{RMAggregatedLogsConcatenationService}} in the RM that will 
> concatenate the aggregated log files written by the NM (which are in the new 
> {{ConcatableAggregatedLogFormat}} format) when an application finishes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-3728) Add an rmadmin command to compact concatenated aggregated logs

2016-10-17 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter resolved YARN-3728.
-
Resolution: Won't Fix

Closing this as "Won't Fix" given we have MAPREDUCE-6415.

> Add an rmadmin command to compact concatenated aggregated logs
> --
>
> Key: YARN-3728
> URL: https://issues.apache.org/jira/browse/YARN-3728
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>
> Create an {{rmadmin}} command to compact any concatenated aggregated log 
> files it finds in the aggregated logs directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-3219) Modify the NM to write logs using the ConcatenatableAggregatedLogFormat

2016-10-17 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter resolved YARN-3219.
-
Resolution: Won't Fix

Closing this as "Won't Fix" given we have MAPREDUCE-6415.

> Modify the NM to write logs using the ConcatenatableAggregatedLogFormat
> ---
>
> Key: YARN-3219
> URL: https://issues.apache.org/jira/browse/YARN-3219
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>
> The NodeManager should use the {{ConcatenatableAggregatedLogFormat}} from 
> YARN-3218 instead of the {{AggregatedLogFormat}} for writing aggregated log 
> files to HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-3729) Modify the yarn CLI to be able to read the ConcatenatableAggregatedLogFormat

2016-10-17 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter resolved YARN-3729.
-
Resolution: Won't Fix

Closing this as "Won't Fix" given we have MAPREDUCE-6415.

> Modify the yarn CLI to be able to read the ConcatenatableAggregatedLogFormat
> 
>
> Key: YARN-3729
> URL: https://issues.apache.org/jira/browse/YARN-3729
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: client
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
>
> When serving logs, the {{yarn}} CLI needs to be able to read the 
> {{ConcatenatableAggregatedLogFormat}} or the {{AggregatedLogFormat}} transparently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-3218) Implement ConcatenatableAggregatedLogFormat Reader and Writer

2016-10-17 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter resolved YARN-3218.
-
Resolution: Won't Fix

Closing this as "Won't Fix" given we have MAPREDUCE-6415.

> Implement ConcatenatableAggregatedLogFormat Reader and Writer
> -
>
> Key: YARN-3218
> URL: https://issues.apache.org/jira/browse/YARN-3218
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 2.8.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-3218.001.patch, YARN-3218.002.patch
>
>
> We need to create a Reader and Writer for the 
> {{ConcatenatableAggregatedLogFormat}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-2942) Aggregated Log Files should be combined

2016-10-17 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter resolved YARN-2942.
-
  Resolution: Won't Fix
Target Version/s:   (was: 2.8.0)

Closing this as "Won't Fix" given we have MAPREDUCE-6415.

> Aggregated Log Files should be combined
> ---
>
> Key: YARN-2942
> URL: https://issues.apache.org/jira/browse/YARN-2942
> Project: Hadoop YARN
>  Issue Type: New Feature
>Affects Versions: 2.6.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: CombinedAggregatedLogsProposal_v3.pdf, 
> CombinedAggregatedLogsProposal_v6.pdf, CombinedAggregatedLogsProposal_v7.pdf, 
> CompactedAggregatedLogsProposal_v1.pdf, 
> CompactedAggregatedLogsProposal_v2.pdf, 
> ConcatableAggregatedLogsProposal_v4.pdf, 
> ConcatableAggregatedLogsProposal_v5.pdf, 
> ConcatableAggregatedLogsProposal_v8.pdf, YARN-2942-preliminary.001.patch, 
> YARN-2942-preliminary.002.patch, YARN-2942.001.patch, YARN-2942.002.patch, 
> YARN-2942.003.patch
>
>
> Turning on log aggregation allows users to easily store container logs in 
> HDFS and subsequently view them in the YARN web UIs from a central place.  
> Currently, there is a separate log file for each Node Manager.  This can be a 
> problem for HDFS if you have a cluster with many nodes as you’ll slowly start 
> accumulating many (possibly small) files per YARN application.  The current 
> “solution” for this problem is to configure YARN (actually the JHS) to 
> automatically delete these files after some amount of time.  
> We should improve this by compacting the per-node aggregated log files into 
> one log file per application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-10-17 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583579#comment-15583579
 ] 

Eric Payne commented on YARN-2009:
--

Thanks for the updated patch [~sunilg].

Should this feature work with labelled queues? I notice that it does not. I 
think it is because the following code is getting the used capacity only for 
the default partition:
{code:title=IntraQueueCandidatesSelector#computeIntraQueuePreemptionDemand}
if (leafQueue.getUsedCapacity() < context
    .getMinimumThresholdForIntraQueuePreemption()) {
  continue;
}
{code}
The above code has access to the partition, so it should be easy to get the 
used capacity per partition. Perhaps something like the following:
{code:title=IntraQueueCandidatesSelector#computeIntraQueuePreemptionDemand}
if (leafQueue.getQueueCapacities().getUsedCapacity(partition) < context
    .getMinimumThresholdForIntraQueuePreemption()) {
  continue;
}
{code}

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5552) Add Builder methods for common yarn API records

2016-10-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583572#comment-15583572
 ] 

Wangda Tan edited comment on YARN-5552 at 10/17/16 9:40 PM:


Thanks [~Tao Jie].

A few minor comments from my side: 

1) Newly added methods of AllocateResponse should all be @Private and @Unstable; 
we should not mark the original write-methods of AllocateResponse @Public in the 
first place. The reason is that AllocateResponse should be read-only to a YARN 
app: no YARN app should set these fields. YARN services should take care of 
setting these fields of AllocateResponse.

2) Can we merge the common sanity-check logic of ContainerRequestBuilder#build 
and ContainerRequest#constructor?

3) For all newly added write methods of the builder, it's better to add a 
javadoc {{@link}} reference to the original method.


was (Author: leftnoteasy):
Thanks [~Tao Jie].

A few minor comments from my side: 

1) Newly added methods of AllocateResponse should all be @Private and @Unstable; 
we should not mark the original write-methods of AllocateResponse @Public in the 
first place. The reason is that AllocateResponse should be read-only: no YARN 
app should set these fields.

2) Can we merge the common sanity-check logic of ContainerRequestBuilder#build 
and ContainerRequest#constructor?

3) For all newly added write methods of the builder, it's better to add a 
javadoc {{@link}} reference to the original method.

> Add Builder methods for common yarn API records
> ---
>
> Key: YARN-5552
> URL: https://issues.apache.org/jira/browse/YARN-5552
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Tao Jie
> Attachments: YARN-5552.000.patch, YARN-5552.001.patch, 
> YARN-5552.002.patch, YARN-5552.003.patch, YARN-5552.004.patch, 
> YARN-5552.005.patch
>
>
> Currently YARN API records such as ResourceRequest, AllocateRequest/Response, 
> as well as AMRMClient.ContainerRequest, have multiple constructors / 
> newInstance methods. This makes it very difficult to add new fields to these 
> records.
> It would probably be better if we had Builder classes for many of these 
> records, which would make evolution of these records a bit easier.
> (suggested by [~kasha])



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5552) Add Builder methods for common yarn API records

2016-10-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583572#comment-15583572
 ] 

Wangda Tan commented on YARN-5552:
--

Thanks [~Tao Jie].

A few minor comments from my side: 

1) Newly added methods of AllocateResponse should all be @Private and @Unstable; 
we should not mark the original write-methods of AllocateResponse @Public in the 
first place. The reason is that AllocateResponse should be read-only: no YARN 
app should set these fields.

2) Can we merge the common sanity-check logic of ContainerRequestBuilder#build 
and ContainerRequest#constructor?

3) For all newly added write methods of the builder, it's better to add a 
javadoc {{@link}} reference to the original method.
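
To make 1) and 3) concrete, a hypothetical shape for one such builder write 
method (only the annotation placement and the {{@link}} style are the point; the 
builder class and field names are illustrative, not from the patch):

{code:title=Illustrative sketch}
@Private
@Unstable
public AllocateResponseBuilder amCommand(AMCommand amCommand) {
  // Delegate to the original write method; the javadoc of this builder
  // method should carry {@link AllocateResponse#setAMCommand(AMCommand)}.
  allocateResponse.setAMCommand(amCommand);
  return this;
}
{code}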

> Add Builder methods for common yarn API records
> ---
>
> Key: YARN-5552
> URL: https://issues.apache.org/jira/browse/YARN-5552
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Tao Jie
> Attachments: YARN-5552.000.patch, YARN-5552.001.patch, 
> YARN-5552.002.patch, YARN-5552.003.patch, YARN-5552.004.patch, 
> YARN-5552.005.patch
>
>
> Currently YARN API records such as ResourceRequest, AllocateRequest/Response, 
> as well as AMRMClient.ContainerRequest, have multiple constructors / 
> newInstance methods. This makes it very difficult to add new fields to these 
> records.
> It would probably be better if we had Builder classes for many of these 
> records, which would make evolution of these records a bit easier.
> (suggested by [~kasha])



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5466) DefaultContainerExecutor needs JavaDocs

2016-10-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583570#comment-15583570
 ] 

Hudson commented on YARN-5466:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10627 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10627/])
YARN-5466. DefaultContainerExecutor needs JavaDocs (templedf via rkanter) 
(rkanter: rev f5d92359145dfb820a9521e00e2d44c4ee96e67e)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/WindowsSecureContainerExecutor.java


> DefaultContainerExecutor needs JavaDocs
> ---
>
> Key: YARN-5466
> URL: https://issues.apache.org/jira/browse/YARN-5466
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5466.001.patch, YARN-5466.002.patch, 
> YARN-5466.003.patch, YARN-5466.004.patch, YARN-5466.005.patch
>
>
> Following on YARN-5455, let's document the DefaultContainerExecutor as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5579) Resourcemanager should surface failed state store operation prominently

2016-10-17 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated YARN-5579:
-
Labels: states  (was: )

> Resourcemanager should surface failed state store operation prominently
> ---
>
> Key: YARN-5579
> URL: https://issues.apache.org/jira/browse/YARN-5579
> Project: Hadoop YARN
>  Issue Type: Task
>Affects Versions: 2.7.3
>Reporter: Ted Yu
>  Labels: states
>
> I found the following in the ResourceManager log when I tried to figure out why 
> an application got stuck in the NEW_SAVING state.
> {code}
> 2016-08-29 18:14:23,486 INFO  recovery.ZKRMStateStore 
> (ZKRMStateStore.java:runWithRetries(1242)) - Maxed out ZK retries. Giving up!
> 2016-08-29 18:14:23,486 ERROR recovery.RMStateStore 
> (RMStateStore.java:transition(205)) - Error storing app: 
> application_1470517915158_0001
> org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = 
> AuthFailed
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:935)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:915)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:998)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:995)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1174)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1207)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doStoreMultiWithRetries(ZKRMStateStore.java:995)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doStoreMultiWithRetries(ZKRMStateStore.java:1009)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.createWithRetries(ZKRMStateStore.java:1042)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.storeApplicationStateInternal(ZKRMStateStore.java:639)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:201)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$StoreAppTransition.transition(RMStateStore.java:183)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore.handleStoreEvent(RMStateStore.java:955)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:1036)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler.handle(RMStateStore.java:1031)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
> at java.lang.Thread.run(Thread.java:745)
> 2016-08-29 18:14:23,486 ERROR recovery.RMStateStore 
> (RMStateStore.java:notifyStoreOperationFailedInternal(987)) - State store 
> operation failed
> org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = 
> AuthFailed
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
> at org.apache.zookeeper.ZooKeeper.multiInternal(ZooKeeper.java:935)
> at org.apache.zookeeper.ZooKeeper.multi(ZooKeeper.java:915)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:998)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$5.run(ZKRMStateStore.java:995)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithCheck(ZKRMStateStore.java:1174)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore$ZKAction.runWithRetries(ZKRMStateStore.java:1207)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore.doStoreMultiWithRetries(ZKRMStateStore.java:995)
> at 
> 

[jira] [Commented] (YARN-5466) DefaultContainerExecutor needs JavaDocs

2016-10-17 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583530#comment-15583530
 ] 

Robert Kanter commented on YARN-5466:
-

+1

> DefaultContainerExecutor needs JavaDocs
> ---
>
> Key: YARN-5466
> URL: https://issues.apache.org/jira/browse/YARN-5466
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: YARN-5466.001.patch, YARN-5466.002.patch, 
> YARN-5466.003.patch, YARN-5466.004.patch, YARN-5466.005.patch
>
>
> Following on YARN-5455, let's document the DefaultContainerExecutor as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583422#comment-15583422
 ] 

Hadoop QA commented on YARN-2009:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
47s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 39s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 41 
new + 178 unchanged - 30 fixed = 219 total (was 208) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 5s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 22s 
{color} | {color:red} hadoop-yarn-project_hadoop-yarn generated 2 new + 6496 
unchanged - 1 fixed = 6498 total (was 6497) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 2 new + 937 unchanged - 1 fixed = 939 total (was 938) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 23s {color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 51s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 1s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 

[jira] [Commented] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583418#comment-15583418
 ] 

Hadoop QA commented on YARN-4734:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-YARN-Build/13414/console in case of 
problems.


> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.2.patch, YARN-4734.3.patch, 
> YARN-4734.4.patch, YARN-4734.5.patch, YARN-4734.6.patch, YARN-4734.7.patch, 
> YARN-4734.8.patch, YARN-4734.9-NOT_READY.patch
>
>
> YARN-2928 branch is planned to merge back to trunk shortly, it depends on 
> changes of YARN-3368. This JIRA is to track the merging task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4734) Merge branch:YARN-3368 to trunk

2016-10-17 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4734:
-
Attachment: YARN-4734.14.patch

Attached ver.14 patch. This patch should address all concerns raised by 
[~aw] in the last merge discussion.

This patch includes changes from two not-yet-merged patches: YARN-5741 and 
YARN-5745. Both of them will be committed soon. 

[~aw], could you share your thoughts on the ver.14 patch? I plan to start 
another merge vote thread once the above two patches get merged, which should be 
tomorrow.

Thanks, 

> Merge branch:YARN-3368 to trunk
> ---
>
> Key: YARN-4734
> URL: https://issues.apache.org/jira/browse/YARN-4734
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-4734.1.patch, YARN-4734.10-NOT_READY.patch, 
> YARN-4734.11-NOT_READY.patch, YARN-4734.12-NOT_READY.patch, 
> YARN-4734.13.patch, YARN-4734.14.patch, YARN-4734.2.patch, YARN-4734.3.patch, 
> YARN-4734.4.patch, YARN-4734.5.patch, YARN-4734.6.patch, YARN-4734.7.patch, 
> YARN-4734.8.patch, YARN-4734.9-NOT_READY.patch
>
>
> YARN-2928 branch is planned to merge back to trunk shortly, it depends on 
> changes of YARN-3368. This JIRA is to track the merging task.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-10-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583383#comment-15583383
 ] 

Wangda Tan commented on YARN-2009:
--

Thanks for updating the patch, [~sunilg], and thanks for the thorough review 
suggestions from [~eepayne]!

Regarding AM resource usage during intra-queue preemption: as of now, we don't 
consider preempting any AM. In other words, these resources are "frozen".

To simplify the problem, instead of creating another pool for AM resources, I 
would prefer to deduct them in all calculations.

There are a few calculations we should look at:
- totalPreemptedResourceAllowed should deduct the total AM usage of the queue. 
(The queue's ResourceUsage.getAMUsed may not be correct since we increment 
am-used before allocating the container; it's better to sum the AM resources of 
all running apps.)
- preemptableFromApp should deduct AM usage from each app.

I think this solution should work; let me know if I missed anything.
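
A rough sketch of the deduction described above (the queue and app accessor 
names are assumptions for illustration and may not match the actual patch):

{code:title=Sketch: deduct per-app AM resources}
// Sum AM resources over the queue's running apps instead of relying on
// the queue's am-used metric, then shrink the preemption budget by that.
Resource amUsed = Resources.createResource(0, 0);
for (FiCaSchedulerApp app : leafQueue.getApplications()) {
  Resources.addTo(amUsed, app.getAMResource(partition));
}
totalPreemptedResourceAllowed =
    Resources.subtract(totalPreemptedResourceAllowed, amUsed);
{code}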

> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5703) ReservationAgents are not correctly configured

2016-10-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5703:
--
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> ReservationAgents are not correctly configured
> --
>
> Key: YARN-5703
> URL: https://issues.apache.org/jira/browse/YARN-5703
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Po
>Assignee: Sean Po
>
> In AbstractReservationSystem, the method that instantiates a ReservationAgent 
> does not properly initialize it with the appropriate configuration because it 
> expects the ReservationAgent to implement Configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5715) introduce entity prefix for return and sort order

2016-10-17 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583379#comment-15583379
 ] 

Sangjin Lee commented on YARN-5715:
---

Thanks for the patch [~rohithsharma]!

One thing we need to make sure we're on the same page about is whether the 
user-provided id prefix should already be inverted so that it is stored as is, 
or whether the user provides a more natural value and the storage layer inverts 
it. I think it's fine either way, although I do think the latter is slightly 
more user-friendly. What do others think?

Either way, we should be *crystal clear* about this in the javadoc so that 
users do not forget what needs to be done. If we go with the former (the user 
inverts the prefix), users would need to use {{LongConverter.invertLong()}}. It 
may also mean we need to move the {{LongConverter}} class to where 
{{TimelineEntity}} is so that users can use it.

Can we add this to the javadoc of {{setIdPrefix()}}, covering first that 
setting it with a consistent value is a requirement, and second whichever 
approach we're going with? We should remove the comments on the private 
{{idPrefix}} variable and state them as javadoc on {{setIdPrefix()}}.
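
For the former option, a usage sketch of what a writer would do ({{setIdPrefix()}} 
is the method this patch adds; the container id and created-time inputs are 
assumed for illustration):

{code:title=Sketch: client-side inverted id prefix}
// Invert a monotonically increasing value so newer entities sort first
// in the row key.
TimelineEntity entity = new TimelineEntity();
entity.setType("YARN_CONTAINER");
entity.setId(containerId.toString());
entity.setIdPrefix(LongConverter.invertLong(createdTime));
{code}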

(EntityRowKeyPrefix.java)
- l.48: I’m not sure if this second form of the constructor is needed

(GenericEntityReader.java)
- If we’re going to separate the reader work to YARN-5585, is this change 
needed?

(TimelineReaderContext.java)
- if we’re going to separate the reader work to YARN-5585, is this change 
needed?
- I see it uses {{Long}} instead of {{long}}. We should use {{long}}.


> introduce entity prefix for return and sort order
> -
>
> Key: YARN-5715
> URL: https://issues.apache.org/jira/browse/YARN-5715
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Rohith Sharma K S
>Priority: Critical
> Attachments: YARN-5715-YARN-5355.01.patch, 
> YARN-5715-YARN-5355.02.patch, YARN-5715-YARN-5355.03.patch
>
>
> While looking into YARN-5585, we have come across the need to provide a sort 
> order different than the current entity id order. The current entity id order 
> returns entities strictly in the lexicographical order, and as such it 
> returns the earliest entities first. This may not be the most natural return 
> order. A more natural return/sort order would be from the most recent 
> entities.
> To solve this, we would like to add what we call the "entity prefix" in the 
> row key for the entity table. It is a number (long) that can be easily 
> provided by the client on write. In the row key, it would be added before the 
> entity id itself.
> The entity prefix would be considered mandatory. On all writes (including 
> updates) the correct entity prefix should be set by the client so that the 
> correct row key is used. The entity prefix needs to be unique only within the 
> scope of the application and the entity type.
> For queries that return a list of entities, the prefix values will be 
> returned along with the entity id's. Queries that specify the prefix and the 
> id should be returned quickly using the row key. If the query omits the 
> prefix but specifies the id (query by id), the query may be less efficient.
> This JIRA should add the entity prefix to the entity API and add its handling 
> to the schema and the write path. The read path will be addressed in 
> YARN-5585.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5433) Audit dependencies for Category-X

2016-10-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5433:
--
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> Audit dependencies for Category-X
> -
>
> Key: YARN-5433
> URL: https://issues.apache.org/jira/browse/YARN-5433
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sean Busbey
>Priority: Blocker
>
> Recently Phoenix found some Category-X dependencies in their build 
> (PHOENIX-3084, PHOENIX-3091), which also revealed some problems in HBase 
> (HBASE-16260).
> Since the Timeline Server work brought in both of these as dependencies, we 
> should make sure we don't have any cat-x dependencies either. From what I've 
> seen in those projects, our choice of HBase version shouldn't be impacted but 
> our Phoenix one is.
> Grepping our current dependency list for the timeline server component shows 
> some LGPL:
> {code}
> ...
> [INFO]net.sourceforge.findbugs:annotations:jar:1.3.2:compile
> ...
> {code}
> I haven't checked the rest of the dependencies that have changed since 
> HADOOP-12893 went in, so ATM I've filed this against YARN since that's where 
> this one example came in.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5719) Enforce a C standard for native container-executor

2016-10-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5719:
--
Target Version/s: 3.0.0-alpha2  (was: 3.0.0-alpha1)

> Enforce a C standard for native container-executor
> --
>
> Key: YARN-5719
> URL: https://issues.apache.org/jira/browse/YARN-5719
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: nodemanager
>Reporter: Chris Douglas
> Attachments: YARN-5719.000.patch
>
>
> The {{container-executor}} build should declare the C standard it uses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5559) Analyse 2.8.0/3.0.0 jdiff reports and fix any issues

2016-10-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated YARN-5559:
--
Target Version/s: 2.8.0, 3.0.0-alpha2  (was: 2.8.0, 3.0.0-alpha1)

> Analyse 2.8.0/3.0.0 jdiff reports and fix any issues
> 
>
> Key: YARN-5559
> URL: https://issues.apache.org/jira/browse/YARN-5559
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-5559.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2009) Priority support for preemption in ProportionalCapacityPreemptionPolicy

2016-10-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-2009:
--
Attachment: YARN-2009.0008.patch

Updating a new patch with the Jenkins fix and the AM-used scenario.

I added the check mentioned by [~eepayne] in 
{{calculateToBePreemptedResourcePerApp}}, but also added back these saved 
resources as preemptable resources for the next higher-priority apps to balance 
out. Added various tests to cover the same.



> Priority support for preemption in ProportionalCapacityPreemptionPolicy
> ---
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5741) [YARN-3368] Update UI2 documentation for new UI2 path

2016-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583105#comment-15583105
 ] 

Hadoop QA commented on YARN-5741:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
7s {color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s 
{color} | {color:green} YARN-3368 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 8m 20s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e5b18f1 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833782/YARN-5741-YARN-3368.02.patch
 |
| JIRA Issue | YARN-5741 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 0e118b1dfd5d 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / b133ccf |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/13412/artifact/patchprocess/whitespace-eol.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13412/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Update UI2 documentation for new UI2 path
> -
>
> Key: YARN-5741
> URL: https://issues.apache.org/jira/browse/YARN-5741
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-5741-YARN-3368.01.patch, 
> YARN-5741-YARN-3368.02.patch
>
>
> This is a followup of YARN-5698.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5741) [YARN-3368] Update UI2 documentation for new UI2 path

2016-10-17 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5741:
-
Attachment: YARN-5741-YARN-3368.02.patch

Thanks [~kaisasak] for working on this; I made some minor edits to your patch.

Now we don't need configs.env any more, and we need to turn on a few options for 
CORS when running daemons locally. Attached ver.02 patch.

> [YARN-3368] Update UI2 documentation for new UI2 path
> -
>
> Key: YARN-5741
> URL: https://issues.apache.org/jira/browse/YARN-5741
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
> Attachments: YARN-5741-YARN-3368.01.patch, 
> YARN-5741-YARN-3368.02.patch
>
>
> This is a followup of YARN-5698.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5677) RM should transition to standby when connection is lost for an extended period

2016-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15583049#comment-15583049
 ] 

Hadoop QA commented on YARN-5677:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
15s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
10s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 39m 6s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed 
with JDK v1.8.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 42s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed 
with JDK v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833765/YARN-5677.branch-2.001.patch
 |
| JIRA Issue | YARN-5677 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6eb46e7fe9be 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 7993fb5 |
| 

[jira] [Commented] (YARN-5736) YARN container executor config does not handle white space

2016-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582978#comment-15582978
 ] 

Hadoop QA commented on YARN-5736:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 56s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 10s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833772/YARN-5736.001.patch |
| JIRA Issue | YARN-5736 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 8d747c311f43 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ed9fcbe |
| Default Java | 1.8.0_101 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13411/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13411/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> YARN container executor config does not handle white space
> --
>
> Key: YARN-5736
> URL: https://issues.apache.org/jira/browse/YARN-5736
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Trivial
> Attachments: YARN-5736.001.patch, YARN_5736.000.patch
>
>
> The container executor configuration reader does not handle white space or 
> malformed key-value pairs in the config file correctly or gracefully.
> As an example, take the following key-value line, which is part of the 
> configuration (the << is used as a marker to show the extra trailing space):
> yarn.nodemanager.linux-container-executor.group=yarn <<
> It is a valid line, but when you run the check over the file:
> [root@test]#./container-executor --checksetup
> Can't get group information for yarn - Success.
> [root@test]#
> it appears to fail to find the yarn group, but it actually looks up the 
> "yarn " group, which fails. There is no trimming anywhere while 

[jira] [Commented] (YARN-5745) [YARN-3368] Disable war packaging under default profile for new YARN UI

2016-10-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582933#comment-15582933
 ] 

Sunil G commented on YARN-5745:
---

Tested the {{mvn package}} command with and without *-Pyarn-ui*. The war file 
is generated only when *-Pyarn-ui* is used.
Looks good to me; will wait for Jenkins.

> [YARN-3368] Disable war packaging under default profile for new YARN UI
> ---
>
> Key: YARN-5745
> URL: https://issues.apache.org/jira/browse/YARN-5745
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5745-YARN-3368.001.patch
>
>
> Previously we enabled war packaging for the new UI project by default, which 
> causes all JS code, including test code, to be added to the war package 
> directly. We should disable this behavior unless the profile is enabled.






[jira] [Commented] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor

2016-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582912#comment-15582912
 ] 

Hadoop QA commented on YARN-5388:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 36s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
53s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 3s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 22s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 42s 
{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} branch-2 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 1s {color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.8.0_101 with JDK v1.8.0_101 
generated 5 new + 35 unchanged - 0 fixed = 40 total (was 35) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 21s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 2m 21s {color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.7.0_111 with JDK v1.7.0_111 
generated 5 new + 43 unchanged - 0 fixed = 48 total (was 43) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 52s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed with 
JDK v1.8.0_101. {color} |
| {color:green}+1{color} | 

[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582903#comment-15582903
 ] 

Varun Saxena commented on YARN-5561:


A few nits:
# "Return a set of application-attempts entities" should be "Return a set of 
application-attempt entities" in the javadoc.
# "Return a single application-attempt entity of the given Id" => "Return a 
single application-attempt entity for the given attempt Id" in the javadoc.
# "Return a set of containers entities belongs to given application attempt" 
=> "Return a set of container entities belonging to the given application 
attempt" in the javadoc.
# The javadoc above the getContainer method says "Return a single 
application-attempt entity of the given Id", which is incorrect.

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also necessary to be able to list all the entities in an 
> application.
> These URLs are very much required for the web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  
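As an illustration, here is a minimal Java sketch of how a client might call 
the first of these endpoints once it is available; the timeline reader address 
(localhost:8188) and plain-HTTP access are assumptions:

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class TimelineAppsClient {
  public static void main(String[] args) throws Exception {
    // Assumed timeline reader address; adjust to your cluster.
    URL url = new URL("http://localhost:8188/ws/v2/timeline/apps");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // raw JSON list of app entities
      }
    }
  }
}
{code}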






[jira] [Updated] (YARN-5745) [YARN-3368] Disable war packaging under default profile for new YARN UI

2016-10-17 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5745:
-
Attachment: YARN-5745-YARN-3368.001.patch

Attached the patch. [~sunilg], please review. 

> [YARN-3368] Disable war packaging under default profile for new YARN UI
> ---
>
> Key: YARN-5745
> URL: https://issues.apache.org/jira/browse/YARN-5745
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-5745-YARN-3368.001.patch
>
>
> Previously we enabled war packaging for the new UI project by default, which 
> causes all JS code, including test code, to be added to the war package 
> directly. We should disable this behavior unless the profile is enabled.






[jira] [Created] (YARN-5745) [YARN-3368] Disable war packaging under default profile for new YARN UI

2016-10-17 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5745:


 Summary: [YARN-3368] Disable war packaging under default profile 
for new YARN UI
 Key: YARN-5745
 URL: https://issues.apache.org/jira/browse/YARN-5745
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Wangda Tan
Assignee: Wangda Tan


Previously we enabled war packaging for the new UI project by default, which 
causes all JS code, including test code, to be added to the war package 
directly. We should disable this behavior unless the profile is enabled.






[jira] [Updated] (YARN-5736) YARN container executor config does not handle white space

2016-10-17 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-5736:
-
Attachment: YARN-5736.001.patch

Fixed Hadoop QA comments

> YARN container executor config does not handle white space
> --
>
> Key: YARN-5736
> URL: https://issues.apache.org/jira/browse/YARN-5736
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Trivial
> Attachments: YARN-5736.001.patch, YARN_5736.000.patch
>
>
> The container executor configuration reader does not handle white space or 
> malformed key-value pairs in the config file correctly or gracefully.
> As an example, take the following key-value line, which is part of the 
> configuration (the << is used as a marker to show the extra trailing space):
> yarn.nodemanager.linux-container-executor.group=yarn <<
> It is a valid line, but when you run the check over the file:
> [root@test]#./container-executor --checksetup
> Can't get group information for yarn - Success.
> [root@test]#
> it appears to fail to find the yarn group, but it actually looks up the 
> "yarn " group, which fails. There is no trimming anywhere while processing 
> the lines; a space before or after the = sign would also cause a failure.
> A minor nit is that the failure is still logged as a Success.
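The fix belongs in the native container-executor's C parser, but the trimming 
behavior the report asks for can be sketched in Java; the helper below is 
purely illustrative, not the actual parser:

{code:java}
/** Illustrative only: parse one key=value config line with trimming. */
static String[] parseConfigLine(String line) {
  int eq = line.indexOf('=');
  if (eq <= 0) {
    return null; // malformed: no '=' or empty key; skip gracefully
  }
  String key = line.substring(0, eq).trim();    // drop spaces around the key
  String value = line.substring(eq + 1).trim(); // drop the trailing space
  if (key.isEmpty()) {
    return null; // key was only whitespace
  }
  // "yarn.nodemanager.linux-container-executor.group=yarn " now yields
  // ["yarn.nodemanager.linux-container-executor.group", "yarn"]
  return new String[] { key, value };
}
{code}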






[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-17 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582866#comment-15582866
 ] 

Sangjin Lee commented on YARN-5561:
---

[~rohithsharma], could you kindly update v.03 of the patch to address several 
checkstyle issues (the ones other than the number of arguments violations)? 
Then we can go ahead with this. Thanks!

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also necessary to be able to list all the entities in an 
> application.
> These URLs are very much required for the web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  






[jira] [Updated] (YARN-5744) Update ATSv2 documentation

2016-10-17 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5744?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5744:
---
Description: 
This JIRA can be used to update the ATSv2 documentation.
Current pending tasks are:
1. Document the new REST endpoints for app-attempt(s) and container(s).
2. A few of the query params are repeated across endpoints; they can be 
consolidated in one place.

> Update ATSv2 documentation
> --
>
> Key: YARN-5744
> URL: https://issues.apache.org/jira/browse/YARN-5744
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>
> This JIRA can be used to update the ATSv2 documentation.
> Current pending tasks are:
> 1. Document the new REST endpoints for app-attempt(s) and container(s).
> 2. A few of the query params are repeated across endpoints; they can be 
> consolidated in one place.






[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582827#comment-15582827
 ] 

Varun Saxena commented on YARN-5561:


That should be fine. We can keep a JIRA open till the next drop on trunk and 
update the documentation via it, just as we did before the first drop on 
trunk. There can be multiple contributors to the JIRA.

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also necessary to be able to list all the entities in an 
> application.
> These URLs are very much required for the web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  






[jira] [Created] (YARN-5744) Update ATSv2 documentation

2016-10-17 Thread Varun Saxena (JIRA)
Varun Saxena created YARN-5744:
--

 Summary: Update ATSv2 documentation
 Key: YARN-5744
 URL: https://issues.apache.org/jira/browse/YARN-5744
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Varun Saxena









[jira] [Commented] (YARN-5743) [Atsv2] Publish queue name and RMAppMetrics to ATS

2016-10-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582772#comment-15582772
 ] 

Varun Saxena commented on YARN-5743:


By the way, we should probably publish the log aggregation status and 
diagnostics too. This will help if the user cannot find the application logs 
in the designated location due to some failure during aggregation.

> [Atsv2] Publish queue name and RMAppMetrics to ATS
> --
>
> Key: YARN-5743
> URL: https://issues.apache.org/jira/browse/YARN-5743
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5743.patch
>
>
> The app queue name is not published to ATSv2. 
> Also, RMAppMetrics publishes only CPU and memory. There are many more things 
> to publish from app metrics, such as:
> resourcePreempted;
> numNonAMContainersPreempted;
> numAMContainersPreempted.
> In addition, RMAppMetrics needs to be published as app metrics rather than 
> info. 






[jira] [Updated] (YARN-5677) RM should transition to standby when connection is lost for an extended period

2016-10-17 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5677:
---
Attachment: YARN-5677.branch-2.001.patch

Here's a branch-2 patch that adds explicit casts to get around the issue.

> RM should transition to standby when connection is lost for an extended period
> --
>
> Key: YARN-5677
> URL: https://issues.apache.org/jira/browse/YARN-5677
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5677.001.patch, YARN-5677.002.patch, 
> YARN-5677.003.patch, YARN-5677.004.patch, YARN-5677.005.patch, 
> YARN-5677.branch-2.001.patch
>
>
> In trunk, there is no maximum number of retries that I see.  It appears the 
> connection will be retried forever, with the active never figuring out it's 
> no longer active.  In my testing, the active-active state lasted almost 2 
> hours with no sign of stopping before I killed it.  The solution appears to 
> be to cap the number of retries or amount of time spent retrying.
> This issue is significant because of the asynchronous nature of job 
> submission.  If the active doesn't know it's not active, it will buffer up 
> job submissions until it finally realizes it has become the standby. Then it 
> will fail all the job submissions in bulk. In high-volume workflows, that 
> behavior can create huge mass job failures.
> This issue is also important because the node managers will not fail over to 
> the new active until the old active realizes it's the standby.  Workloads 
> submitted after the old active loses contact with ZK will therefore fail to 
> be executed regardless of which RM the clients contact.
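A hypothetical sketch of the capped-retry idea described above; the helper and 
hook names are assumptions, not the actual patch:

{code:java}
import java.io.IOException;

final class StandbyOnDisconnectSketch {
  // Cap the time spent retrying the lost connection, then force a
  // transition to standby instead of retrying forever.
  void ensureConnectedOrStandby() throws InterruptedException {
    long deadline = System.currentTimeMillis() + 60_000L; // configurable cap
    while (System.currentTimeMillis() < deadline) {
      try {
        reconnect();          // assumed helper wrapping the reconnect attempt
        return;               // connected again; stay active
      } catch (IOException e) {
        Thread.sleep(1_000L); // back off before the next attempt
      }
    }
    transitionToStandby();    // assumed hook; stops buffering submissions
  }

  void reconnect() throws IOException { /* assumed */ }

  void transitionToStandby() { /* assumed */ }
}
{code}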






[jira] [Commented] (YARN-5743) [Atsv2] Publish queue name and RMAppMetrics to ATS

2016-10-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582704#comment-15582704
 ] 

Varun Saxena commented on YARN-5743:


Thanks [~rohithsharma] for the patch.
Overall the patch looks good.

# Suffixing each metric with _METRIC is not required: since we are storing 
them as metrics, it is self-evident. It is just a few extra, unnecessary bytes 
being stored.
# Cosmetic comment - maybe we can make the formatting of the statements inside 
getTimelinelineAppMetrics consistent.
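To illustrate the first point, a hedged sketch of publishing one of the 
preemption counts as a single-value metric; the metric id and the way the 
value is obtained are assumptions:

{code:java}
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineMetric;

final class PreemptionMetricsSketch {
  // Publish a preemption count as a metric (no _METRIC suffix in the id),
  // rather than as an info field, so it can be aggregated.
  static void addPreemptionMetric(TimelineEntity entity,
      long numAMContainersPreempted) {
    TimelineMetric metric =
        new TimelineMetric(TimelineMetric.Type.SINGLE_VALUE);
    metric.setId("NUM_AM_CONTAINERS_PREEMPTED"); // assumed metric id
    metric.addValue(System.currentTimeMillis(), numAMContainersPreempted);
    entity.addMetric(metric);
  }
}
{code}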



> [Atsv2] Publish queue name and RMAppMetrics to ATS
> --
>
> Key: YARN-5743
> URL: https://issues.apache.org/jira/browse/YARN-5743
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5743.patch
>
>
> The app queue name is not published to ATSv2. 
> Also, RMAppMetrics publishes only CPU and memory. There are many more things 
> to publish from app metrics, such as:
> resourcePreempted;
> numNonAMContainersPreempted;
> numAMContainersPreempted.
> In addition, RMAppMetrics needs to be published as app metrics rather than 
> info. 






[jira] [Updated] (YARN-5388) MAPREDUCE-6719 requires changes to DockerContainerExecutor

2016-10-17 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-5388:
---
Attachment: YARN-5388.branch-2.003.patch
YARN-5388.003.patch

Thanks for the review, [~sidharta-s].  I removed the docs from trunk and 
slapped a big deprecation warning on the docs in branch-2.  Since there aren't 
yet docs on using Docker with the LCE, it seemed cruel to strip the DCE docs 
out entirely.  (See YARN-5258.)

> MAPREDUCE-6719 requires changes to DockerContainerExecutor
> --
>
> Key: YARN-5388
> URL: https://issues.apache.org/jira/browse/YARN-5388
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Fix For: 2.9.0
>
> Attachments: YARN-5388.001.patch, YARN-5388.002.patch, 
> YARN-5388.003.patch, YARN-5388.branch-2.001.patch, 
> YARN-5388.branch-2.002.patch, YARN-5388.branch-2.003.patch
>
>
> Because the {{DockerContainerExecuter}} overrides the {{writeLaunchEnv()}} 
> method, it must also have the wildcard processing logic from 
> YARN-4958/YARN-5373 added to it.  Without it, the use of -libjars will fail 
> unless wildcarding is disabled.
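For context, the wildcard processing in question expands a classpath entry 
that ends in /* into the jars in that directory; an illustrative sketch, not 
the YARN-4958/YARN-5373 code itself:

{code:java}
import java.io.File;
import java.util.ArrayList;
import java.util.List;

final class ClasspathWildcardSketch {
  // Expand a classpath entry ending in "/*" into the jar files it matches,
  // the kind of logic writeLaunchEnv() must apply; otherwise pass it through.
  static List<String> expandWildcard(String entry) {
    List<String> out = new ArrayList<>();
    if (entry.endsWith("/*")) {
      File dir = new File(entry.substring(0, entry.length() - 2));
      File[] jars = dir.listFiles((d, name) -> name.endsWith(".jar"));
      if (jars != null) {
        for (File jar : jars) {
          out.add(jar.getPath());
        }
      }
    } else {
      out.add(entry);
    }
    return out;
  }
}
{code}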






[jira] [Commented] (YARN-5679) TestAHSWebServices is failing

2016-10-17 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582653#comment-15582653
 ] 

Miklos Szegedi commented on YARN-5679:
--

+1 (non-binding). Thank you, [~ajisakaa]!

> TestAHSWebServices is failing
> -
>
> Key: YARN-5679
> URL: https://issues.apache.org/jira/browse/YARN-5679
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
> Attachments: YARN-5679.01.patch, YARN-5679.02.patch
>
>
> TestAHSWebServices.testContainerLogsForFinishedApps is failing.
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/176/testReport/
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices.createContainerLogInLocalDir(TestAHSWebServices.java:675)
>   at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices.testContainerLogsForFinishedApps(TestAHSWebServices.java:581)
> {noformat}
> {noformat}
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices.testContainerLogsForFinishedApps(TestAHSWebServices.java:519)
> {noformat}






[jira] [Commented] (YARN-5721) NPE at AMRMClientImpl.getMatchingRequests

2016-10-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582577#comment-15582577
 ] 

ASF GitHub Bot commented on YARN-5721:
--

Github user zzvara commented on the issue:

https://github.com/apache/hadoop/pull/139
  
Looks good and fixes the problem.


> NPE at AMRMClientImpl.getMatchingRequests
> -
>
> Key: YARN-5721
> URL: https://issues.apache.org/jira/browse/YARN-5721
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
> Environment: Tested on Windows 10, in Dockerized Linux containers & 
> Ubuntu 16.04 with Java 7, Java 8.
>Reporter: Zoltán Zvara
>Priority: Blocker
>
> The following NPE was thrown using a Spark 2.1.0-SNAPSHOT (as the client) 
> after changing the Hadoop dependency to the latest available at the time the 
> error was generated.
> {{2016-10-10 11:33:53,392 ERROR yarn.ApplicationMaster: Uncaught exception: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.getMatchingRequests(AMRMClientImpl.java:668)
> at 
> org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.getMatchingRequests(AMRMClientImpl.java:651)
>   at 
> org.apache.spark.deploy.yarn.YarnAllocator.getPendingAtLocation(YarnAllocator.scala:210)
>   at 
> org.apache.spark.deploy.yarn.YarnAllocator.getPendingAllocate(YarnAllocator.scala:203)
>   at 
> org.apache.spark.deploy.yarn.YarnAllocator.updateResourceRequests(YarnAllocator.scala:318)
>   at 
> org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:278)
>   at 
> org.apache.spark.deploy.yarn.ApplicationMaster.registerAM(ApplicationMaster.scala:350)
>   at 
> org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:418)
>   at 
> org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:250)}}
> We've also pulled the latest code (1 hour ago) from the repository and ran a 
> test for {{getMatchingRequests}}; the same NPE was encountered.
> {{getMatchingRequests}} should never throw an NPE, even if it is called 
> right after the client has been started.
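The actual fix is in the pull request above; purely as an illustration of the 
defensive pattern being asked for (the table and its types here are 
hypothetical, not the real AMRMClientImpl internals):

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class MatchingRequestsSketch {
  private final Map<Integer, List<String>> remoteRequestsTable =
      new HashMap<>();

  // Treat a missing per-priority entry as "no matching requests" instead of
  // dereferencing null right after the client starts.
  List<String> getMatchingRequests(int priority) {
    List<String> requests = remoteRequestsTable.get(priority);
    return requests == null ? Collections.emptyList() : requests;
  }
}
{code}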






[jira] [Commented] (YARN-5743) [Atsv2] Publish queue name and RMAppMetrics to ATS

2016-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582453#comment-15582453
 ] 

Hadoop QA commented on YARN-5743:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 45m 23s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 13s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833730/0001-YARN-5743.patch |
| JIRA Issue | YARN-5743 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 596e3d8fe53a 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ed9fcbe |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-YARN-Build/13407/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13407/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server |
| Console output | 

[jira] [Updated] (YARN-5743) [Atsv2] Publish queue name and RMAppMetrics to ATS

2016-10-17 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5743:

Attachment: 0001-YARN-5743.patch

Updated the patch to publish the queue name and RMAppMetrics.

> [Atsv2] Publish queue name and RMAppMetrics to ATS
> --
>
> Key: YARN-5743
> URL: https://issues.apache.org/jira/browse/YARN-5743
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5743.patch
>
>
> The app queue name is not published to ATSv2. 
> Also, RMAppMetrics publishes only CPU and memory. There are many more things 
> to publish from app metrics, such as:
> resourcePreempted;
> numNonAMContainersPreempted;
> numAMContainersPreempted.
> In addition, RMAppMetrics needs to be published as app metrics rather than 
> info. 






[jira] [Commented] (YARN-5743) [Atsv2] Publish queue name and RMAppMetrics to ATS

2016-10-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582172#comment-15582172
 ] 

Varun Saxena commented on YARN-5743:


Thanks [~rohithsharma] for filing the JIRA. All of this makes sense. Capturing 
this information as metrics will help it be aggregated up to the flow-run and 
flow level as well.
Also, we can make sure that the information we currently serve from the RM/NM 
REST endpoints can be published, and hence served from ATS too (if the 
information makes sense as historical data).

> [Atsv2] Publish queue name and RMAppMetrics to ATS
> --
>
> Key: YARN-5743
> URL: https://issues.apache.org/jira/browse/YARN-5743
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>
> The app queue name is not published to ATSv2. 
> Also, RMAppMetrics publishes only CPU and memory. There are many more things 
> to publish from app metrics, such as:
> resourcePreempted;
> numNonAMContainersPreempted;
> numAMContainersPreempted.
> In addition, RMAppMetrics needs to be published as app metrics rather than 
> info. 






[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582088#comment-15582088
 ] 

Rohith Sharma K S commented on YARN-5561:
-

I would prefer to take it up as a separate JIRA, as a consolidated 
documentation update. I also suspect, and am pretty sure, that a couple more 
changes will come which require modifying the documentation. 

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also necessary to be able to list all the entities in an 
> application.
> These URLs are very much required for the web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  






[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-10-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582078#comment-15582078
 ] 

Rohith Sharma K S commented on YARN-5611:
-


bq. The appIdToTimeoutTypeMapping may be not needed.
I added this for a couple of reasons.
# Caching: we need not create an RMAppToMonitor object on every update, on 
getRemainingTime, or on any other call.
# Since YARN may support more timeout types for an application, it helps to 
track which timeouts have been registered for an application. Say app1 might 
want only lifetime, while app2 might want lifetime & queue_time. In such 
cases, it is easier to look up this mapping and get the registered timeouts 
for an application. 

bq. Also, when we update the timeout, the new timeout should be current 
timestamp + newTimeout value. Later, we will also send the remaining lifetime 
to user if user queries, this way, it's easier to reason - what user sets as 
the timeout value is what user will get when he queries.
Good point, let me change it. Note: as of now, we are not storing timeout 
values in the state store apart from the submissionContext, and the submission 
context contains only the timeout, which is not absolute. But since we 
currently support only lifetime, we can recover on RM restart. So in the 
future, if there is a use case for supporting different timeouts, then for RM 
HA cases we need to recover either the monitoringStartTime or the endTime.
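A small sketch of the semantics agreed above, with hypothetical names: the 
stored expiry becomes absolute, and a query returns the remaining time 
counting down from what the user set:

{code:java}
final class LifetimeSketch {
  private long absoluteExpiryMs; // absolute value; recoverable after restart

  void updateLifetime(long newTimeoutSecs) {
    // "current timestamp + newTimeout": what the user sets is what a later
    // query will report counting down from.
    absoluteExpiryMs = System.currentTimeMillis() + newTimeoutSecs * 1000L;
  }

  long getRemainingLifetimeSecs() {
    long remainingMs = absoluteExpiryMs - System.currentTimeMillis();
    return Math.max(0, remainingMs / 1000L);
  }
}
{code}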

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 






[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582036#comment-15582036
 ] 

Varun Saxena commented on YARN-5561:


[~rohithsharma], should we update the documentation for this? 
We can also do it in a separate JIRA, though, because some of the query params 
seem to be repeated in the documentation. Probably we can consolidate those 
query params in a single place in that JIRA and add these endpoints as well. 
Or you can add the endpoints here. As you wish.

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also necessary to be able to list all the entities in an 
> application.
> These URLs are very much required for the web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  






[jira] [Created] (YARN-5743) [Atsv2] Publish queue name and RMAppMetrics to ATS

2016-10-17 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-5743:
---

 Summary: [Atsv2] Publish queue name and RMAppMetrics to ATS
 Key: YARN-5743
 URL: https://issues.apache.org/jira/browse/YARN-5743
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Rohith Sharma K S
Assignee: Rohith Sharma K S


The app queue name is not published to ATSv2. 
Also, RMAppMetrics publishes only CPU and memory. There are many more things 
to publish from app metrics, such as:
resourcePreempted;
numNonAMContainersPreempted;
numAMContainersPreempted.

In addition, RMAppMetrics needs to be published as app metrics rather than 
info. 







[jira] [Commented] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-10-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582058#comment-15582058
 ] 

Hadoop QA commented on YARN-5145:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 1m 1s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b17 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12833721/YARN-5145-YARN-3368.06.patch
 |
| JIRA Issue | YARN-5145 |
| Optional Tests |  asflicense  |
| uname | Linux a87496b09e01 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-3368 / 164a3c2 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13406/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, 
> YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, 
> YARN-5145-YARN-3368.03.patch, YARN-5145-YARN-3368.04.patch, 
> YARN-5145-YARN-3368.05.patch, YARN-5145-YARN-3368.06.patch, 
> newUIInOldRMWebServer.png
>
>
> Existing YARN UI configuration is under Hadoop package's directory: 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to 
> $HADOOP_CONF_DIR like other configurations.






[jira] [Updated] (YARN-5145) [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR

2016-10-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-5145:
--
Attachment: YARN-5145-YARN-3368.06.patch

Attaching a new patch to handle the timeline address configuration from 
{{configs.env}}.

> [YARN-3368] Move new YARN UI configuration to HADOOP_CONF_DIR
> -
>
> Key: YARN-5145
> URL: https://issues.apache.org/jira/browse/YARN-5145
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Sunil G
> Attachments: 0001-YARN-5145-Run-NewUI-WithOldPort-POC.patch, 
> YARN-5145-YARN-3368.01.patch, YARN-5145-YARN-3368.02.patch, 
> YARN-5145-YARN-3368.03.patch, YARN-5145-YARN-3368.04.patch, 
> YARN-5145-YARN-3368.05.patch, YARN-5145-YARN-3368.06.patch, 
> newUIInOldRMWebServer.png
>
>
> Existing YARN UI configuration is under Hadoop package's directory: 
> $HADOOP_PREFIX/share/hadoop/yarn/webapps/, we should move it to 
> $HADOOP_CONF_DIR like other configurations.






[jira] [Commented] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2016-10-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582025#comment-15582025
 ] 

Rohith Sharma K S commented on YARN-5742:
-

Thanks a lot. 

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>







[jira] [Commented] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2016-10-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582021#comment-15582021
 ] 

Varun Saxena commented on YARN-5742:


OK, take it up.

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>







[jira] [Updated] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2016-10-17 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5742:
---
Assignee: Rohith Sharma K S  (was: Varun Saxena)

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Rohith Sharma K S
>







[jira] [Commented] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2016-10-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582016#comment-15582016
 ] 

Rohith Sharma K S commented on YARN-5742:
-

Thanks Varun for creating the JIRA. I have made a little progress on the 
analysis and started coding a bit.
Would you mind if I work on this? 

> Serve aggregated logs of historical apps from timeline service
> --
>
> Key: YARN-5742
> URL: https://issues.apache.org/jira/browse/YARN-5742
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>







[jira] [Created] (YARN-5742) Serve aggregated logs of historical apps from timeline service

2016-10-17 Thread Varun Saxena (JIRA)
Varun Saxena created YARN-5742:
--

 Summary: Serve aggregated logs of historical apps from timeline 
service
 Key: YARN-5742
 URL: https://issues.apache.org/jira/browse/YARN-5742
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Varun Saxena
Assignee: Varun Saxena









[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15581949#comment-15581949
 ] 

Varun Saxena commented on YARN-5561:


Yeah, what I meant is that a separate endpoint may be required for things like 
that. Not sure what we would name it, though. Or we can just put everything 
under ws/v2/timeline. Anyway, maybe a separate JIRA should be filed for that 
discussion. 

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also necessary to be able to list all the entities in an 
> application.
> These URLs are very much required for the web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app id\}/entities}} should display list of 
> entities that can be queried.  






[jira] [Commented] (YARN-3053) [Security] Review and implement security in ATS v.2

2016-10-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15581825#comment-15581825
 ] 

Varun Saxena commented on YARN-3053:


This will be taken care of by TimelineClient in the same manner as is done for 
ATSv1 currently. The plan is to reuse the available functionality.
We will use Kerberos auth + delegation tokens (DT) for secure access.

Have a look at the DelegationTokenAuthenticatedURL and 
TimelineAuthenticationFilter classes.
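As a rough, non-authoritative sketch of that flow using the existing 
DelegationTokenAuthenticatedURL class (the reader URL and the "yarn" renewer 
are placeholder assumptions; the exact ATSv2 wiring is what this JIRA has to 
settle):

{code:java}
// Sketch only: obtain a delegation token via Kerberos/SPNEGO, then reuse it.
// The reader URL and the "yarn" renewer are placeholder assumptions.
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL;

public class SecureTimelineAccessSketch {
  public static void main(String[] args) throws Exception {
    URL readerUrl = new URL("http://timeline-reader:8188/ws/v2/timeline/apps");
    // Holds the delegation token once obtained; the first call authenticates
    // via SPNEGO (Kerberos) under the covers.
    DelegationTokenAuthenticatedURL.Token token =
        new DelegationTokenAuthenticatedURL.Token();
    DelegationTokenAuthenticatedURL authUrl =
        new DelegationTokenAuthenticatedURL();
    // Fetch a delegation token so later calls need no Kerberos credentials.
    authUrl.getDelegationToken(readerUrl, token, "yarn");
    // Subsequent requests authenticate with the delegation token alone.
    HttpURLConnection conn = authUrl.openConnection(readerUrl, token);
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}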

bq. But if the user has custom web authentication which always requires 
providing a username and password
Sorry, I didn't get you. Do you mean anything other than Kerberos 
authentication?

> [Security] Review and implement security in ATS v.2
> ---
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355
> Attachments: ATSv2Authentication(draft).pdf
>
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3053) [Security] Review and implement security in ATS v.2

2016-10-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15581449#comment-15581449
 ] 

Rohith Sharma K S commented on YARN-3053:
-

I have a basic question on the ATSv2 security model. ATSv2 claims that all 
communication is based on REST endpoints. How does it handle custom web 
authentication when invoked from CLI commands? For example, in YARN, 
ApplicationClientProtocol is an RPC-based API, and ApplicationCLI uses it to 
fetch application reports and other information; that communication is secured 
by doing kinit.

Now, if the same ApplicationCLI wants to get an application report from ATSv2, 
it is expected to invoke a REST call to ATSv2. But if the user has custom web 
authentication which always requires providing a username and password, how 
does ATSv2 guarantee security for this?

> [Security] Review and implement security in ATS v.2
> ---
>
> Key: YARN-3053
> URL: https://issues.apache.org/jira/browse/YARN-3053
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>  Labels: YARN-5355
> Attachments: ATSv2Authentication(draft).pdf
>
>
> Per design in YARN-2928, we want to evaluate and review the system for 
> security, and ensure proper security in the system.
> This includes proper authentication, token management, access control, and 
> any other relevant security aspects.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5561) [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and entities via REST

2016-10-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15581429#comment-15581429
 ] 

Rohith Sharma K S commented on YARN-5561:
-

Log serving could be launched in a separate REST namespace within the 
TimelineReader. It might be called something like YarnTimelineUtilService.
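Purely as a sketch of that idea (the class name and path are hypothetical, 
taken from the naming suggestion above), such a namespace could be a separate 
JAX-RS resource inside the TimelineReader:

{code:java}
// Hypothetical sketch: a separate log-serving REST namespace inside the
// TimelineReader. The class name and @Path are illustrative, not an actual API.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/ws/v2/timelineutil")
public class YarnTimelineUtilService {

  // e.g. GET /ws/v2/timelineutil/apps/{appid}/logs would return the
  // aggregated logs of a (possibly finished) application.
  @GET
  @Path("/apps/{appid}/logs")
  @Produces(MediaType.TEXT_PLAIN)
  public String getAggregatedLogs(@PathParam("appid") String appId) {
    // Real code would locate the aggregated log files for appId on HDFS
    // and stream them back; this stub just echoes the id.
    return "aggregated logs for " + appId;
  }
}
{code}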

> [Atsv2] : Support for ability to retrieve apps/app-attempt/containers and 
> entities via REST
> ---
>
> Key: YARN-5561
> URL: https://issues.apache.org/jira/browse/YARN-5561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-5561.YARN-5355.patch, YARN-5561.02.patch, 
> YARN-5561.03.patch, YARN-5561.patch, YARN-5561.v0.patch
>
>
> The ATSv2 model lacks retrieval of {{list-of-all-apps}}, 
> {{list-of-all-app-attempts}} and {{list-of-all-containers-per-attempt}} via 
> REST APIs. It is also necessary to know about all the entities in an 
> application.
> These URLs are very much required for the web UI.
> The new REST URLs would be:
> # GET {{/ws/v2/timeline/apps}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/appattempts}}.
> # GET 
> {{/ws/v2/timeline/apps/\{app-id\}/appattempts/\{attempt-id\}/containers}}
> # GET {{/ws/v2/timeline/apps/\{app-id\}/entities}}, which should display the 
> list of entities that can be queried.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5705) [YARN-3368] Add support for Timeline V2 to new web UI

2016-10-17 Thread Akhil PB (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akhil PB updated YARN-5705:
---
Attachment: YARN-5705.006.patch

> [YARN-3368] Add support for Timeline V2 to new web UI
> -
>
> Key: YARN-5705
> URL: https://issues.apache.org/jira/browse/YARN-5705
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Sunil G
>Assignee: Akhil PB
> Attachments: YARN-5705.001.patch, YARN-5705.002.patch, 
> YARN-5705.003.patch, YARN-5705.004.patch, YARN-5705.005.patch, 
> YARN-5705.006.patch
>
>
> Integrate timeline v2 to YARN-3368. This is a clone JIRA for YARN-4097



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org