[jira] [Commented] (YARN-5354) TestDistributedShell.checkTimelineV2() may fail for concurrent tests

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372194#comment-15372194
 ] 

Hadoop QA commented on YARN-5354:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
 The patch generated 0 new + 78 unchanged - 1 fixed = 78 total (was 79) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 53s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 2s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817334/YARN-5354.03.patch |
| JIRA Issue | YARN-5354 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 34fc7c068c7c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f292624 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12283/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12283/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestDistributedShell.checkTimelineV2() may fail for concurrent tests
> 
>
> Key: YARN-5354
> URL: https://issues.apache.org/jira/browse/YARN-5354
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: 

[jira] [Updated] (YARN-5354) TestDistributedShell.checkTimelineV2() may fail for concurrent tests

2016-07-11 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5354:
--
Attachment: YARN-5354.03.patch

Posted patch v.3.

Removed code that deletes the directory.

> TestDistributedShell.checkTimelineV2() may fail for concurrent tests
> 
>
> Key: YARN-5354
> URL: https://issues.apache.org/jira/browse/YARN-5354
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: YARN-5354.01.patch, YARN-5354.02.patch, 
> YARN-5354.03.patch
>
>
> {{TestDistributedShell.checkTimelineV2()}} uses the default (hard-coded) 
> storage root directory. This is brittle against concurrent tests. We should 
> use a unique storage directory for the unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5354) TestDistributedShell.checkTimelineV2() may fail for concurrent tests

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372161#comment-15372161
 ] 

Hadoop QA commented on YARN-5354:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
 The patch generated 0 new + 78 unchanged - 1 fixed = 78 total (was 79) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 40s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 36s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817329/YARN-5354.02.patch |
| JIRA Issue | YARN-5354 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d36eaa882fe4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f292624 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12282/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12282/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestDistributedShell.checkTimelineV2() may fail for concurrent tests
> 
>
> Key: YARN-5354
> URL: https://issues.apache.org/jira/browse/YARN-5354
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: 

[jira] [Updated] (YARN-5354) TestDistributedShell.checkTimelineV2() may fail for concurrent tests

2016-07-11 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5354:
--
Attachment: YARN-5354.02.patch

That's a good suggestion. I've updated the patch to use {{TemporaryFolder}} 
instead.

> TestDistributedShell.checkTimelineV2() may fail for concurrent tests
> 
>
> Key: YARN-5354
> URL: https://issues.apache.org/jira/browse/YARN-5354
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: YARN-5354.01.patch, YARN-5354.02.patch
>
>
> {{TestDistributedShell.checkTimelineV2()}} uses the default (hard-coded) 
> storage root directory. This is brittle against concurrent tests. We should 
> use a unique storage directory for the unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-07-11 Thread Ying Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ying Zhang updated YARN-5287:
-
Comment: was deleted

(was: I have a simple patch here. I've tested it with my 3-node cluster. It 
works as expected.
What the patch does is explicitly setting the required permission for each 
newly created local directory.
Would someone please review the fix and see if there is anything missing? Thank 
you:-))

> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Priority: Minor
> Attachments: YARN-5287.001.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories(i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
> has been configured with a restrictive umask, e.g.: umask 077. Job failed due 
> to the following reason:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372119#comment-15372119
 ] 

Hadoop QA commented on YARN-5287:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 54s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 52s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817322/YARN-5287.001.patch |
| JIRA Issue | YARN-5287 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux b06dc3eab25c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f292624 |
| Default Java | 1.8.0_91 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12281/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12281/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Priority: Minor
> Attachments: YARN-5287.001.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories(i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
> has been configured with a restrictive umask, e.g.: umask 077. Job failed due 
> to the following reason:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Updated] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-07-11 Thread Ying Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ying Zhang updated YARN-5287:
-
Attachment: YARN-5287.001.patch

> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Priority: Minor
> Attachments: YARN-5287.001.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories(i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
> has been configured with a restrictive umask, e.g.: umask 077. Job failed due 
> to the following reason:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-07-11 Thread Ying Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372083#comment-15372083
 ] 

Ying Zhang commented on YARN-5287:
--

I have a simple patch here. I've tested it on my 3-node cluster, and it works as 
expected.
What the patch does is explicitly set the required permission on each newly 
created local directory.
Would someone please review the fix and see if anything is missing? Thank 
you :-)
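
For readers following the fix: the patch appears to touch the native 
container-executor (the QA run compiles cc), but the mkdir-then-chmod idea it 
applies can be sketched in Java. The class and path below are purely 
illustrative, not from the patch:

{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermissions;

public class ExplicitDirPermissions {
  // Create the directory, then set mode 750 explicitly so a restrictive
  // process umask (e.g. 077) cannot leave it at 700.
  static void createWithRequiredMode(String dir) throws Exception {
    Path p = Paths.get(dir);
    Files.createDirectories(p);
    Files.setPosixFilePermissions(p,
        PosixFilePermissions.fromString("rwxr-x---"));  // 750
  }

  public static void main(String[] args) throws Exception {
    createWithRequiredMode("/tmp/usercache-example/appcache");  // illustrative
  }
}
{code}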

> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Priority: Minor
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories(i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
> has been configured with a restrictive umask, e.g.: umask 077. Job failed due 
> to the following reason:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-07-11 Thread Ying Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372080#comment-15372080
 ] 

Ying Zhang commented on YARN-5287:
--

Steps to reproduce:
1. Configure the cluster with umask 077 on all nodes
(for example, by modifying /etc/profile and /etc/bashrc).
2. Enable LinuxContainerExecutor through Ambari or the config file. In 
non-secure mode, you also need to set 
"yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users" to false.
3. Restart all affected services.
4. Run a simple MR job.

> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Priority: Minor
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories(i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
> has been configured with a restrictive umask, e.g.: umask 077. Job failed due 
> to the following reason:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-07-11 Thread Ying Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ying Zhang updated YARN-5287:
-
Description: 
LinuxContainerExecutor fails to set the proper permissions on the local 
directories(i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
has been configured with a restrictive umask, e.g.: umask 077. Job failed due 
to the following reason:
Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
permission 700 but needs permission 750


  was:
LinuxContainerExecutor fails to set the proper permissions on the local 
directories(i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
has been configured with a restrictive umask, e.g.: umask 077. Job failed due 
to the following reason:
Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
permission 700 but needs permission 750

Way to reproduce:
1. configure the cluster with umask 077
(for example, modify the /etc/profile and /etc/bashrc)
2. enable LinuxContainerExecutor through Ambari or config file
3. restart all affected
4. run a simple MR job



> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Priority: Minor
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories(i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
> has been configured with a restrictive umask, e.g.: umask 077. Job failed due 
> to the following reason:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5265) Make HBase configuration for the timeline service configurable

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372061#comment-15372061
 ] 

Hadoop QA commented on YARN-5265:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 6m 55s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
21s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 21s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 0s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
9s {color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
59s {color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s 
{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
55s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 12s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 46s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 7s {color} 
| {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun |
|   | 

[jira] [Commented] (YARN-5354) TestDistributedShell.checkTimelineV2() may fail for concurrent tests

2016-07-11 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372027#comment-15372027
 ] 

Joep Rottinghuis commented on YARN-5354:


This should work, but why not use
{code}
  @Rule
  public final TemporaryFolder folder = new TemporaryFolder();
{code}
Or name it timelineV2StorageDir.
That seems to be the pattern used in my unit tests, and it has the advantage 
that you don't have to write cleanup code in your tearDown.
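
As a complement, a minimal sketch of how such a rule could feed a unique 
directory into the timeline v.2 storage config. The property key and the 
{{conf}} field are assumptions for illustration, not taken from the patch:

{code}
  @Rule
  public final TemporaryFolder timelineV2StorageDir = new TemporaryFolder();

  @Before
  public void setupTimelineStorage() throws Exception {
    // Point the file-system timeline writer at a per-test directory; JUnit
    // deletes the folder automatically, so no tearDown cleanup is needed.
    // "yarn.timeline-service.fs-writer.root-dir" is an assumed key name.
    conf.set("yarn.timeline-service.fs-writer.root-dir",
        timelineV2StorageDir.newFolder().getAbsolutePath());
  }
{code}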

> TestDistributedShell.checkTimelineV2() may fail for concurrent tests
> 
>
> Key: YARN-5354
> URL: https://issues.apache.org/jira/browse/YARN-5354
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: YARN-5354.01.patch
>
>
> {{TestDistributedShell.checkTimelineV2()}} uses the default (hard-coded) 
> storage root directory. This is brittle against concurrent tests. We should 
> use a unique storage directory for the unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5200) Improve yarn logs to get Container List

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15372022#comment-15372022
 ] 

Hadoop QA commented on YARN-5200:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 37s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 28s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 14 
new + 91 unchanged - 13 fixed = 105 total (was 104) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 7s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 39s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 26s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 4s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
|  |  Format string should use %n rather than n in 
org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAContainerLogsForLogType(ContainerLogsRequest,
 boolean)  At LogCLIHelpers.java:rather than n in 
org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAContainerLogsForLogType(ContainerLogsRequest,
 boolean)  At LogCLIHelpers.java:[line 160] |
|  |  Format string should use %n rather than n in 
org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAContainerLogsForLogTypeWithoutNodeId(ContainerLogsRequest)
  At LogCLIHelpers.java:rather than n in 
org.apache.hadoop.yarn.logaggregation.LogCLIHelpers.dumpAContainerLogsForLogTypeWithoutNodeId(ContainerLogsRequest)
  At 

[jira] [Updated] (YARN-5265) Make HBase configuration for the timeline service configurable

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5265:
---
Attachment: YARN-5265-YARN-5355.06.patch

Patch on new branch YARN-5355 with findbugs and checkstyle issues addressed.

> Make HBase configuration for the timeline service configurable
> --
>
> Key: YARN-5265
> URL: https://issues.apache.org/jira/browse/YARN-5265
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
>  Labels: YARN-5355
> Attachments: ATS v2 cluster deployment v1.png, 
> YARN-5265-YARN-2928.01.patch, YARN-5265-YARN-2928.02.patch, 
> YARN-5265-YARN-2928.03.patch, YARN-5265-YARN-2928.04.patch, 
> YARN-5265-YARN-2928.05.patch, YARN-5265-YARN-5355.06.patch
>
>
> Currently we create "default" HBase configurations; this works as long as the 
> user places the appropriate configuration on the classpath, and it works fine 
> for a standalone Hadoop cluster.
> However, if a user wants to monitor an HBase cluster and has a separate ATS 
> HBase cluster, it can become tricky to create the right classpath for the 
> nodemanagers while still letting tasks keep their separate configs.
> It would be much easier to add a yarn configuration that lets cluster admins 
> configure which HBase cluster to connect to for writing ATS metrics.
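
A minimal sketch of the indirection this describes, assuming a yarn-site.xml 
property that names an explicit hbase-site.xml (the key below is an assumption, 
not necessarily what the patch introduces):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class TimelineHBaseConf {
  static Configuration create(Configuration yarnConf) {
    Configuration hbaseConf = HBaseConfiguration.create(yarnConf);
    // An explicitly configured HBase config file wins over whatever
    // hbase-site.xml happens to be on the nodemanager classpath.
    String file = yarnConf.get("yarn.timeline-service.hbase.configuration.file");
    if (file != null) {
      hbaseConf.addResource(new Path(file));
    }
    return hbaseConf;
  }
}
{code}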



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5200) Improve yarn logs to get Container List

2016-07-11 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5200:

Attachment: YARN-5200.9.patch

> Improve yarn logs to get Container List
> ---
>
> Key: YARN-5200
> URL: https://issues.apache.org/jira/browse/YARN-5200
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5200.1.patch, YARN-5200.2.patch, YARN-5200.3.patch, 
> YARN-5200.4.patch, YARN-5200.5.patch, YARN-5200.6.patch, YARN-5200.7.patch, 
> YARN-5200.8.patch, YARN-5200.9.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5200) Improve yarn logs to get Container List

2016-07-11 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371973#comment-15371973
 ] 

Xuan Gong commented on YARN-5200:
-

Thanks for the review.
bq. The following incorrectly shows information about all containers instead of 
only one
{code}
$HADOOP_YARN_HOME/bin/yarn logs -applicationId application_1468006306667_0004 
-show_container_log_info container_1468006306667_0004_01_01
{code}

It looks like this is not the correct way to use this option. The correct usage 
would be:
{code}
To get all container log meta for the specific application:
yarn logs -applicationId application_1468006306667_0004 
-show_container_log_info 

To get the specific container log meta for the application:
yarn logs -applicationId application_1468006306667_0004 -containerId 
container_1468006306667_0004_01_01 -show_container_log_info

To get the container log meta for the containers which ran on a specific NM:
yarn logs -applicationId application_1468006306667_0004 -nodeAddress ${nodeId} 
-show_container_log_info
{code}

bq. The above command shouldn't force us to pass both applicationID and 
containerID (continuation of YARN-5227)

The commandline:
{code}
yarn logs  -containerId container_1468006306667_0004_01_01 
-show_container_log_info
{code}
works, and it outputs the log meta for this container.

Attached a new patch which addresses all the other comments.

> Improve yarn logs to get Container List
> ---
>
> Key: YARN-5200
> URL: https://issues.apache.org/jira/browse/YARN-5200
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5200.1.patch, YARN-5200.2.patch, YARN-5200.3.patch, 
> YARN-5200.4.patch, YARN-5200.5.patch, YARN-5200.6.patch, YARN-5200.7.patch, 
> YARN-5200.8.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5179) Issue of CPU usage of containers

2016-07-11 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371957#comment-15371957
 ] 

Inigo Goiri commented on YARN-5179:
---

[~asuresh], given that YARN-5117 acknowledges that there is a problem with the 
way we calculate the CPU usage, I agree with [~Zhongkai] that we should revisit 
the way that milliVCores is computed in {{ContainersMonitorImpl}}. [~Zhongkai], 
could you upload a patch with your proposal to see if it makes sense?
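
To make the concern concrete, a hedged worked example of the formula quoted in 
the description below, under the assumption that cpuUsageTotalCoresPercentage 
is the containers' CPU usage as a percentage of all physical cores:

{code}
public class MilliVcoresExample {
  public static void main(String[] args) {
    // Assume 12 physical cores, 6 vcores advertised to YARN, 100% CPU for YARN.
    float cpuUsageTotalCoresPercentage = 100f * 2 / 12;  // 2 of 12 cores busy
    int maxVCoresAllottedForContainers = 6;
    int nodeCpuPercentageForYARN = 100;
    int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000
        * maxVCoresAllottedForContainers / nodeCpuPercentageForYARN);
    // Prints 1000, i.e. 1 vcore, while 2 physical cores are busy: the value
    // tracks vcore share rather than physical-core usage once vcores != cores.
    System.out.println(milliVcoresUsed);
  }
}
{code}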

> Issue of CPU usage of containers
> 
>
> Key: YARN-5179
> URL: https://issues.apache.org/jira/browse/YARN-5179
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.0
> Environment: Both on Windows and Linux
>Reporter: Zhongkai Mi
>
> // Multiply by 1000 to avoid losing data when converting to int
> int milliVcoresUsed = (int) (cpuUsageTotalCoresPercentage * 1000
>     * maxVCoresAllottedForContainers / nodeCpuPercentageForYARN);
> This formula will not yield the right vcore-based CPU usage if vcores != 
> physical cores.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5356) ResourceUtilization should also include resource availability

2016-07-11 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371947#comment-15371947
 ] 

Inigo Goiri edited comment on YARN-5356 at 7/12/16 12:15 AM:
-

In general, we have 3 values:
* Actual resources of the full machine. This currently comes from 
{{NodeManagerHardwareUtils}} if I remember correctly. For example, it can be 12 
cores.
* Resource available for the Node Manager. This is currently defined in 
yarn-site.xml with key {{yarn.nodemanager.resource.cpu-vcores}} or with the 
{{updateNodeResource()}}. For example, 6 cores.
* Actual utilization of the machine. This is extracted in the 
{{NodeResourceMonitor}} with the {{ResourceCalculatorPlugin}}. And it can be 
400%, which would imply 4 out of the 12 cores used.

[~nroberts], I understand that your problem is that with the current approach 
you know that you have 6 cores available to the NM and 4 of them are used. 
However, the machine is not that utilized (~30%). Correct? In that case, we 
would only need to report the actual size of the machine at registration time 
as it would never change. Not sure that {{ResourceUtilization}} would be the 
right place for that as it would be reported in every heartbeat continuously.



was (Author: elgoiri):
In general, we have 3 values:
* Actual resources of the full machine. This currently comes from 
{{NodeManagerHardwareUtils}} if I remember correctly. For example, it can be 12 
cores for example
* Resource available for the Node Manager. This is currently defined in 
yarn-site.xml with key {{yarn.nodemanager.resource.cpu-vcores}} or with the 
{{updateNodeResource()}}. For example, 6 cores.
* Actual utilization of the machine. This is extracted in the 
{{NodeResourceMonitor}} with the {{ResourceCalculatorPlugin}}. And it can be 
400%, which would imply 4 out of the 12 cores used.

[~nroberts], I understand that your problem is that with the current approach 
you know that you have 6 cores available to the NM and 4 of them are used. 
However, the machine is not that utilized (~30%). Correct? In that case, we 
would only need to report the actual size of the machine at registration time 
as it would never change. Not sure that {{ResourceUtilization}} would be the 
right place for that as it would be reported in every heartbeat continuously.


> ResourceUtilization should also include resource availability
> -
>
> Key: YARN-5356
> URL: https://issues.apache.org/jira/browse/YARN-5356
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Nathan Roberts
>
> Currently ResourceUtilization contains absolute quantities of resource used 
> (e.g. 4096MB memory used). It would be good if it also included how much of 
> that resource is actually available on the node so that the RM can use this 
> data to schedule more effectively (overcommit, etc)
> Currently the only available information is the Resource the node registered 
> with (or later updated using updateNodeResource). However, these aren't 
> really sufficient to get a good view of how utilized a resource is. For 
> example, if a node reports 400% CPU utilization, does that mean it's 
> completely full, or barely utilized? Today there is no reliable way to figure 
> this out.
> [~elgoiri] - Lots of good work is happening in YARN-2965 so curious if you 
> have thoughts/opinions on this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5356) ResourceUtilization should also include resource availability

2016-07-11 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371947#comment-15371947
 ] 

Inigo Goiri commented on YARN-5356:
---

In general, we have 3 values:
* Actual resources of the full machine. This currently comes from 
{{NodeManagerHardwareUtils}} if I remember correctly. For example, it can be 12 
cores.
* Resource available for the Node Manager. This is currently defined in 
yarn-site.xml with key {{yarn.nodemanager.resource.cpu-vcores}} or with the 
{{updateNodeResource()}}. For example, 6 cores.
* Actual utilization of the machine. This is extracted in the 
{{NodeResourceMonitor}} with the {{ResourceCalculatorPlugin}}. And it can be 
400%, which would imply 4 out of the 12 cores used.

[~nroberts], I understand that your problem is that with the current approach 
you know that you have 6 cores available to the NM and 4 of them are used. 
However, the machine is not that utilized (~30%). Correct? In that case, we 
would only need to report the actual size of the machine at registration time 
as it would never change. Not sure that {{ResourceUtilization}} would be the 
right place for that as it would be reported in every heartbeat continuously.
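
For illustration, a hypothetical shape of a report that pairs utilization with 
the machine's physical capacity; this is a sketch of the idea, not the actual 
{{ResourceUtilization}} API:

{code}
public class NodeUtilizationReport {
  private final int physicalCores;      // e.g. 12, the full machine
  private final float cpuUsagePercent;  // e.g. 400f == four cores' worth busy

  public NodeUtilizationReport(int physicalCores, float cpuUsagePercent) {
    this.physicalCores = physicalCores;
    this.cpuUsagePercent = cpuUsagePercent;
  }

  // With capacity attached, 400% becomes interpretable: 400 / 100 / 12 is
  // roughly 0.33, i.e. about a third of the machine (the ~30% above).
  public float machineCpuFraction() {
    return (cpuUsagePercent / 100f) / physicalCores;
  }
}
{code}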


> ResourceUtilization should also include resource availability
> -
>
> Key: YARN-5356
> URL: https://issues.apache.org/jira/browse/YARN-5356
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Nathan Roberts
>
> Currently ResourceUtilization contains absolute quantities of resource used 
> (e.g. 4096MB memory used). It would be good if it also included how much of 
> that resource is actually available on the node so that the RM can use this 
> data to schedule more effectively (overcommit, etc)
> Currently the only available information is the Resource the node registered 
> with (or later updated using updateNodeResource). However, these aren't 
> really sufficient to get a good view of how utilized a resource is. For 
> example, if a node reports 400% CPU utilization, does that mean it's 
> completely full, or barely utilized? Today there is no reliable way to figure 
> this out.
> [~elgoiri] - Lots of good work is happening in YARN-2965 so curious if you 
> have thoughts/opinions on this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5280) Allow YARN containers to run with Java Security Manager

2016-07-11 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371815#comment-15371815
 ] 

Larry McCay commented on YARN-5280:
---

bq. In order to prevent users from granting themselves excess permissions this 
would likely need to take the form of server side configurations.

To clarify, the idea isn't for applications to grant themselves permissions but 
to declare the permissions the application requires. This allows for 
deployment-time failure as opposed to runtime failure when a privileged action 
is attempted and fails. Of course, there is nothing saying that there couldn't 
be server-side configuration to allow for a minimum set of permissions, with 
some room for certain permissions that can be granted on demand. In general, 
the expectation would be a deploy-time compare of the permissions required for 
deployment against those granted by the container policy in the server config.

The jar signing subtasks certainly seem appropriate. I would still like to hear 
the driving use case(s) and how many folks actually need it.
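
A hedged sketch of the deploy-time compare described above: validate the 
application's declared permissions against the server-side container policy 
before launch (class and method names are hypothetical):

{code}
import java.security.Permission;
import java.security.PermissionCollection;
import java.security.Policy;
import java.security.ProtectionDomain;
import java.util.Collections;

public class DeployTimePermissionCheck {
  // Fail deployment early if any declared permission is not granted by the
  // container policy, instead of failing later at runtime.
  static void validate(PermissionCollection declared,
      ProtectionDomain containerDomain) {
    Policy policy = Policy.getPolicy();
    for (Permission p : Collections.list(declared.elements())) {
      if (!policy.implies(containerDomain, p)) {
        throw new SecurityException("Deployment rejected: " + p + " not granted");
      }
    }
  }
}
{code}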

> Allow YARN containers to run with Java Security Manager
> ---
>
> Key: YARN-5280
> URL: https://issues.apache.org/jira/browse/YARN-5280
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Affects Versions: 2.6.4
>Reporter: Greg Phillips
>Priority: Minor
> Attachments: YARN-5280.patch, YARNContainerSandbox.pdf
>
>
> YARN applications have the ability to perform privileged actions which have 
> the potential to add instability into the cluster. The Java Security Manager 
> can be used to prevent users from running privileged actions while still 
> allowing their core data processing use cases. 
> Introduce a YARN flag which will allow a Hadoop administrator to enable the 
> Java Security Manager for user code, while still providing complete 
> permissions to core Hadoop libraries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5354) TestDistributedShell.checkTimelineV2() may fail for concurrent tests

2016-07-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371770#comment-15371770
 ] 

Sangjin Lee commented on YARN-5354:
---

I would greatly appreciate your review. Thanks!

> TestDistributedShell.checkTimelineV2() may fail for concurrent tests
> 
>
> Key: YARN-5354
> URL: https://issues.apache.org/jira/browse/YARN-5354
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: YARN-5354.01.patch
>
>
> {{TestDistributedShell.checkTimelineV2()}} uses the default (hard-coded) 
> storage root directory. This is brittle against concurrent tests. We should 
> use a unique storage directory for the unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5354) TestDistributedShell.checkTimelineV2() may fail for concurrent tests

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371763#comment-15371763
 ] 

Hadoop QA commented on YARN-5354:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
 The patch generated 0 new + 78 unchanged - 1 fixed = 78 total (was 79) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 26s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 29s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817263/YARN-5354.01.patch |
| JIRA Issue | YARN-5354 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux af42d504b088 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0fd3980 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12278/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12278/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestDistributedShell.checkTimelineV2() may fail for concurrent tests
> 
>
> Key: YARN-5354
> URL: https://issues.apache.org/jira/browse/YARN-5354
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: 

[jira] [Commented] (YARN-5354) TestDistributedShell.checkTimelineV2() may fail for concurrent tests

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371756#comment-15371756
 ] 

Hadoop QA commented on YARN-5354:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell:
 The patch generated 0 new + 78 unchanged - 1 fixed = 78 total (was 79) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 31s 
{color} | {color:green} hadoop-yarn-applications-distributedshell in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 21s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817263/YARN-5354.01.patch |
| JIRA Issue | YARN-5354 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f5ae9b3de6c4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0fd3980 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12277/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12277/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestDistributedShell.checkTimelineV2() may fail for concurrent tests
> 
>
> Key: YARN-5354
> URL: https://issues.apache.org/jira/browse/YARN-5354
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: 

[jira] [Commented] (YARN-5270) Solve miscellaneous issues caused by YARN-4844

2016-07-11 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371728#comment-15371728
 ] 

Jian He commented on YARN-5270:
---

lgtm, +1

> Solve miscellaneous issues caused by YARN-4844
> --
>
> Key: YARN-5270
> URL: https://issues.apache.org/jira/browse/YARN-5270
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Blocker
> Attachments: YARN-5270-branch-2.001.patch, 
> YARN-5270-branch-2.002.patch, YARN-5270-branch-2.003.patch, 
> YARN-5270-branch-2.004.patch, YARN-5270-branch-2.8.001.patch, 
> YARN-5270-branch-2.8.002.patch, YARN-5270-branch-2.8.003.patch, 
> YARN-5270-branch-2.8.004.patch, YARN-5270.003.patch, YARN-5270.004.patch
>
>
> Such as javac warnings reported by YARN-5077 and type converting issues in 
> Resources class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5359) FileSystemTimelineReader/Writer uses unix-specific default

2016-07-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371702#comment-15371702
 ] 

Sangjin Lee commented on YARN-5359:
---

I'll get to this after YARN-5354 and MAPREDUCE-6731 are committed.

> FileSystemTimelineReader/Writer uses unix-specific default
> --
>
> Key: YARN-5359
> URL: https://issues.apache.org/jira/browse/YARN-5359
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>
> {{FileSystemTimelineReaderImpl}} and {{FileSystemTimelineWriterImpl}} use a 
> unix-specific default. It won't work on Windows.
> Also, {{TestFileSystemTimelineReaderImpl}} uses this default directly, which 
> is also brittle against concurrent tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5359) FileSystemTimelineReader/Writer uses unix-specific default

2016-07-11 Thread Sangjin Lee (JIRA)
Sangjin Lee created YARN-5359:
-

 Summary: FileSystemTimelineReader/Writer uses unix-specific default
 Key: YARN-5359
 URL: https://issues.apache.org/jira/browse/YARN-5359
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.0.0-alpha1
Reporter: Sangjin Lee
Assignee: Sangjin Lee


{{FileSystemTimelineReaderImpl}} and {{FileSystemTimelineWriterImpl}} use a 
unix-specific default. It won't work on Windows.

Also, {{TestFileSystemTimelineReaderImpl}} uses this default directly, which is 
also brittle against concurrent tests.
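
A hedged sketch of one possible cross-platform fix, deriving the default from 
the JVM temp dir instead of a literal {{/tmp}} path (the constant name and code 
shape are illustrative, not the committed change):

{code}
// Hedged sketch: resolve the default storage root via java.io.tmpdir so it
// works on Windows as well as Unix.
import java.io.File;

public static final String DEFAULT_TIMELINE_SERVICE_STORAGE_DIR_ROOT =
    new File(System.getProperty("java.io.tmpdir"), "timeline-service-data")
        .getAbsolutePath();
{code}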




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5354) TestDistributedShell.checkTimelineV2() may fail for concurrent tests

2016-07-11 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5354:
--
Attachment: YARN-5354.01.patch

Posted patch v.1.

Switched to using a test-specific storage directory. I realize that the 
{{FileSystemTimelineWriterImpl}} default value should be handled separately. 
I'll file a separate JIRA.
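
For reference, a minimal sketch of that approach (not the actual patch: the 
field and method names are illustrative, {{conf}} stands in for the test's 
YarnConfiguration, and the config constant on {{FileSystemTimelineWriterImpl}} 
is assumed):

{code}
// Hedged sketch: give each test run its own timeline storage root so
// concurrent test executions don't collide on a shared directory.
import org.junit.Before;
import org.junit.Rule;
import org.junit.rules.TemporaryFolder;

@Rule
public TemporaryFolder tmpFolder = new TemporaryFolder();  // per-run dir
private String timelineV2StorageDir;

@Before
public void setupTimelineStorage() throws Exception {
  timelineV2StorageDir = tmpFolder.newFolder().getAbsolutePath();
  // point the FS timeline writer at the test-specific directory
  conf.set(FileSystemTimelineWriterImpl.TIMELINE_SERVICE_STORAGE_DIR_ROOT,
      timelineV2StorageDir);
}
{code}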

> TestDistributedShell.checkTimelineV2() may fail for concurrent tests
> 
>
> Key: YARN-5354
> URL: https://issues.apache.org/jira/browse/YARN-5354
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: YARN-5354.01.patch
>
>
> {{TestDistributedShell.checkTimelineV2()}} uses the default (hard-coded) 
> storage root directory. This is brittle against concurrent tests. We should 
> use a unique storage directory for the unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5354) TestDistributedShell.checkTimelineV2() may fail for concurrent tests

2016-07-11 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5354:
--
Description: {{TestDistributedShell.checkTimelineV2()}} uses the default 
(hard-coded) storage root directory. This is brittle against concurrent tests. 
We should use a unique storage directory for the unit tests.  (was: 
{{TestDistributedShell.checkTimelineV2()}} uses the default (hard-coded) 
storage root directory. This is brittle against concurrent tests. We should use 
a unique storage directory for the unit tests.

We should also fix the default storage location for 
{{FileSystemTimelineWriterImpl}} to be cross-platform as part of this. The 
current value ( {{/tmp/timeline-service-data}} ) won't work on Windows.)

> TestDistributedShell.checkTimelineV2() may fail for concurrent tests
> 
>
> Key: YARN-5354
> URL: https://issues.apache.org/jira/browse/YARN-5354
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 3.0.0-alpha1
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>
> {{TestDistributedShell.checkTimelineV2()}} uses the default (hard-coded) 
> storage root directory. This is brittle against concurrent tests. We should 
> use a unique storage directory for the unit tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5353) ResourceManager can leak delegation tokens when they are shared across apps

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371680#comment-15371680
 ] 

Hadoop QA commented on YARN-5353:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 42s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokenAuthentication
 |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817239/YARN-5353.001.patch |
| JIRA Issue | YARN-5353 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8c770bee11a9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0fd3980 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12276/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12276/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12276/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12276/console |
| Powered by | Apache 

[jira] [Commented] (YARN-5080) Cannot obtain logs using YARN CLI -am for either KILLED or RUNNING AM

2016-07-11 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371632#comment-15371632
 ] 

Junping Du commented on YARN-5080:
--

Addendum patch looks good to me. +1. Will commit it tomorrow if no further 
comments.

> Cannot obtain logs using YARN CLI -am for either KILLED or RUNNING AM
> -
>
> Key: YARN-5080
> URL: https://issues.apache.org/jira/browse/YARN-5080
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Affects Versions: 2.8.0
>Reporter: Sumana Sathish
>Assignee: Xuan Gong
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: YARN-5080.1.patch, YARN-5080.2.patch, YARN-5080.3.patch, 
> YARN-5080.addendum.patch
>
>
> When the application is running, if we try to obtain AM logs using 
> {code}
> yarn logs -applicationId  -am 1
> {code}
> It throws the following error
> {code}
> Unable to get AM container informations for the application:
> Illegal character in scheme name at index 0: 0.0.0.0://
> {code} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5358) Consider adding a config setting to accept cluster name

2016-07-11 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C reassigned YARN-5358:


Assignee: Vrushali C

> Consider adding a config setting to accept cluster name
> ---
>
> Key: YARN-5358
> URL: https://issues.apache.org/jira/browse/YARN-5358
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>
> As part of the discussion in the context of Timeline Service and Federation 
> integration (YARN-5357), we may want to allow for a configurable cluster name 
> (in addition to the physical cluster name, which is part of the cluster setting).
> Filing a jira to think about this. 
> The logical (aka federated) cluster names should perhaps be restricted to a 
> whitelist so that they aren't totally up to the user. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5358) Consider adding a config setting to accept cluster name

2016-07-11 Thread Vrushali C (JIRA)
Vrushali C created YARN-5358:


 Summary: Consider adding a config setting to accept cluster name
 Key: YARN-5358
 URL: https://issues.apache.org/jira/browse/YARN-5358
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Vrushali C



As part of the discussion in the context of Timeline Service and Federation 
integration (YARN-5357), we may want to allow for a configurable cluster name (in 
addition to the physical cluster name, which is part of the cluster setting).

Filing a jira to think about this. 

The logical (aka federated) cluster names should perhaps be restricted to a 
whitelist so that they aren't totally up to the user. 





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5357) Timeline service v2 integration with Federation

2016-07-11 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5357:
-
Description: 
Jira to note the discussion points from an initial chat about integrating 
Timeline Service v2 with Federation (YARN-2915).

cc [~subru] [~curino] 

For Federation:
- all entities that belong to the same flow run should have the same cluster 
name
- app id in the same flow run strongly ordered in time
- need a logical cluster name and physical cluster name
- a possibility to implement the Application TimelineCollector as an 
interceptor in the AMRMProxyService.

For Timeline Service:
- need to store physical cluster id and logical cluster id so that we don't 
lose information at any level (flow/app/entity etc)
- add a new app-id-to-cluster mapping table
- need a different entity table/some table to store node level metrics for 
physical cluster stats. Once we get to node-level rollup, we probably have to 
store something in a dc, cluster, rack, node hierarchy. In that case a physical 
cluster makes sense, but we'd still need some way to tie physical and logical 
together in order to make automatic error detection etc that we're envisioning 
feasible within a federated setup.


For the Cluster Naming convention:
- three situations for cluster name:
> app submitted to router should take federated (aka logical) cluster name
> app submitted directly to RM should take physical cluster name
> Info about the physical cluster  in entities?
- suggestion to set the cluster name as yarn tag at the router level (in the 
app submission context) 

Other points to note:
- for federation to work smoothly in environments that use HDFS some additional 
considerations are needed, and possibly some solution like what is being used 
at Twitter with the nFly approach.


Email thread context:

{code}

-- Forwarded message --
From: Joep Rottinghuis 
Date: Fri, Jul 8, 2016 at 1:22 PM
Subject: Re: Federation -Timeline Service meeting notes
To: Subramaniam Venkatraman Krishnan 
Cc: Sangjin Lee, Vrushali Channapattan , Carlo Curino


Thanks for the notes.

I think that for federation to work smoothly in environments that use HDFS some 
additional considerations are needed, and possibly some solution like what 
we're using at Twitter with our nFly approach.

bq. - need a different entity table/some table to store node level metrics for 
physical cluster stats
Once we get to node-level rollup, we probably have to store something in a dc, 
cluster, rack, node hierarchy. In that case a physical cluster makes sense, but 
we'd still need some way to tie physical and logical together in order to make 
automatic error detection etc that we're envisioning feasible within a 
federated setup.

Cheers,

Joep

On Fri, Jul 8, 2016 at 1:00 PM, Subramaniam Venkatraman Krishnan  wrote:

Thanks Vrushali for crisply capturing the essentials from our rambling 
discussion :).

Sangjin, I just want to add one comment to yours: we want to retain the 
physical cluster name (possibly as a new entity type) so that we don't lose 
information, and we can do cluster-level rollups even if they are not 
efficient.

 

Additionally, based on the walkthrough of Federation design:

- There was general agreement with the proposed approach.
- There is a possibility to implement the Application TimelineCollector as an 
interceptor in the AMRMProxyService.
- Joep raised the concern that it would be better if the RMs obtain the epoch 
from FederationStateStore. This is not currently in the roadmap of our MVP but 
we definitely plan to address this in future.

 

Regards,

Subru

 

From: Sangjin Lee
Sent: Thursday, July 07, 2016 6:22 PM
To: Vrushali Channapattan 
Cc: Joep Rottinghuis; Carlo Curino; Subramaniam Venkatraman Krishnan 
Subject: Re: Federation -Timeline Service meeting notes

 

Thanks for the summary Vrushali!

 

Just so that we're on the same page regarding the terminology, I understand 
we're using the terms "logical cluster" and "federated cluster" interchangeably.

 

Also, between using the federated cluster name and the home cluster name as 
a solution, I think we were leaning towards the federated cluster name 
(although not concluded).

 

On Thu, Jul 7, 2016 at 4:33 PM, Vrushali Channapattan wrote:

 

For Federation:

- all entities that belong to the same flow run should have the same 
cluster name

- app id in the same flow run strongly ordered in time

- need a logical cluster name and physical cluster name

For Timeline Service:

- need to store physical cluster id and logical cluster id so that we 
don't lose information at any level (flow/app/entity etc)

- add a new app-id-to-cluster mapping table

- need a 

[jira] [Created] (YARN-5357) Timeline service v2 integration with Federation

2016-07-11 Thread Vrushali C (JIRA)
Vrushali C created YARN-5357:


 Summary: Timeline service v2 integration with Federation 
 Key: YARN-5357
 URL: https://issues.apache.org/jira/browse/YARN-5357
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: timelineserver
Reporter: Vrushali C



Jira to note the discussion points from an initial chat about integrating 
Timeline Service v2 with Federation (YARN-2915).

cc [~subru] [~curino] 

For Federation:
- all entities that belong to the same flow run should have the same cluster 
name
- app id in the same flow run strongly ordered in time
- need a logical cluster name and physical cluster name
- a possibility to implement the Application TimelineCollector as an 
interceptor in the AMRMProxyService.

For Timeline Service:
- need to store physical cluster id and logical cluster id so that we don't 
lose information at any level (flow/app/entity etc)
- add a new app-id-to-cluster mapping table
- need a different entity table/some table to store node level metrics for 
physical cluster stats. Once we get to node-level rollup, we probably have to 
store something in a dc, cluster, rack, node hierarchy. In that case a physical 
cluster makes sense, but we'd still need some way to tie physical and logical 
together in order to make automatic error detection etc that we're envisioning 
feasible within a federated setup.


For the Cluster Naming convention:
- three situations for cluster name:
> app submitted to router should take federated (aka logical) cluster name
> app submitted directly to RM should take physical cluster name
> Info about the physical cluster  in entities?
- suggestion to set the cluster name as yarn tag at the router level (in the 
app submission context) 

Other points to note:
- for federation to work smoothly in environments that use HDFS some additional 
considerations are needed, and possibly some solution like what is being used 
at Twitter with the nFly approach.


Email thread context:

{code}

-- Forwarded message --
From: Joep Rottinghuis 
Date: Fri, Jul 8, 2016 at 1:22 PM
Subject: Re: Federation -Timeline Service meeting notes
To: Subramaniam Venkatraman Krishnan 
Cc: Sangjin Lee , Vrushali Channapattan 
, Carlo Curino , Carlo Curino 
, "subru...@gmail.com" 


Thanks for the notes.

I think that for federation to work smoothly in environments that use HDFS some 
additional considerations are needed, and possibly some solution like what 
we're using at Twitter with our nFly approach.

bq. - need a different entity table/some table to store node level metrics for 
physical cluster stats
Once we get to node-level rollup, we probably have to store something in a dc, 
cluster, rack, node hierarchy. In that case a physical cluster makes sense, but 
we'd still need some way to tie physical and logical together in order to make 
automatic error detection etc that we're envisioning feasible within a 
federated setup.

Cheers,

Joep

On Fri, Jul 8, 2016 at 1:00 PM, Subramaniam Venkatraman Krishnan 
 wrote:

Thanks Vrushali for crisply capturing the essentials from our rambling 
discussion :).

Sangjin, I just want to add one comment to yours: we want to retain the 
physical cluster name (possibly as a new entity type) so that we don't lose 
information, and we can do cluster-level rollups even if they are not 
efficient.

 

Additionally, based on the walkthrough of Federation design:

- There was general agreement with the proposed approach.
- There is a possibility to implement the Application TimelineCollector as an 
interceptor in the AMRMProxyService.
- Joep raised the concern that it would be better if the RMs obtain the epoch 
from FederationStateStore. This is not currently in the roadmap of our MVP but 
we definitely plan to address this in future.

 

Regards,

Subru

 

From: Sangjin Lee [mailto:sj...@twitter.com]
Sent: Thursday, July 07, 2016 6:22 PM
To: Vrushali Channapattan 
Cc: Joep Rottinghuis ; Carlo Curino 
; Carlo Curino ; Subramaniam 
Venkatraman Krishnan ; subru...@gmail.com
Subject: Re: Federation -Timeline Service meeting notes

 

Thanks for the summary Vrushali!

 

Just so that we're on the same page regarding the terminology, I understand 
we're using the terms "logical cluster" and "federated cluster" interchangeably.

 

Also, between using the federated cluster name and the home cluster name as 
a solution, I think we were leaning towards the federated cluster name 
(although not concluded).

 

On Thu, Jul 

[jira] [Updated] (YARN-5353) ResourceManager can leak delegation tokens when they are shared across apps

2016-07-11 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-5353:
-
Attachment: YARN-5353.001.patch

Seems to me that we need to make sure that the appTokens map always has the 
application removed when the application is marked as finished.  It's our one 
chance to clean up the app entry, and currently the code can conditionally 
decide to leave the app's entry in the map.

Attaching a patch that always removes the appTokens entry corresponding to an 
app when the app finished event is received.  Any tokens that are shared with 
other apps will continue to exist in the allTokens map, so I think we'll still 
be good as far as token-sharing goes.
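
A rough sketch of that cleanup shape (the method and field names here are 
assumptions for illustration, not the actual patch):

{code}
// Hedged sketch: on the app-finished event, always drop the app's entry
// from appTokens; tokens shared with other live apps stay in allTokens.
private void removeApplicationFromRenewal(ApplicationId appId) {
  Set<DelegationTokenToRenew> tokens = appTokens.remove(appId);
  if (tokens == null) {
    return;  // nothing registered for this app
  }
  for (DelegationTokenToRenew dtr : tokens) {
    dtr.referringAppIds.remove(appId);   // illustrative field name
    if (dtr.referringAppIds.isEmpty()) {
      cancelToken(dtr);                  // last user: stop renewal
      allTokens.remove(dtr.token);       // safe to forget globally
    }
  }
}
{code}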

> ResourceManager can leak delegation tokens when they are shared across apps
> ---
>
> Key: YARN-5353
> URL: https://issues.apache.org/jira/browse/YARN-5353
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.0, 2.6.1
>Reporter: Jason Lowe
>Priority: Critical
> Attachments: YARN-5353.001.patch
>
>
> Recently saw a ResourceManager go into heavy GC.  Heap dump showed that there 
> were millions of delegation tokens on the heap.  It looks like most of them 
> belonged to the appTokens map in DelegationTokenRenewer.  When an app 
> completes and tokens are removed for it, I noticed that the appTokens entry 
> for the app is not cleaned up if tokens were shared with other active apps.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5280) Allow YARN containers to run with Java Security Manager

2016-07-11 Thread Greg Phillips (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371530#comment-15371530
 ] 

Greg Phillips commented on YARN-5280:
-

Hello [~lmccay] - Thanks for the link to the EE specification for application 
permission requests.  Given the range of frameworks that use YARN, there is 
definitely utility in creating framework-level rulesets.  To prevent users from 
granting themselves excess permissions, this would likely need to take the form 
of server-side configuration.  Thus far this effort has entailed 
providing all permissions to trusted code such as core hadoop libraries and 
surrounding projects (Pig, Hive, Oozie, etc.) while limiting privileges to the 
user contributed code that performs the processing.  I would be interested to 
see if we could adopt a similar model for Slider; full privileges for the core 
libraries while locking down the user code.  Initially I would like to prove 
this feature against MapReduce and the frameworks that leverage it.  
Additionally the solution must be extensible enough so other YARN frameworks 
can be handled differently by the NodeManager: either by disabling the security 
manager, or by providing a different set of permissions.

In secure installations of Hadoop, the creation and management of keystores is 
already a necessity.  I have written some prototype utilities which streamline 
the process of signing Hadoop libraries.  For Pig and Hive, the dynamically 
created jars will need to be broken out.  I have a test build of Pig which, 
instead of creating an UberJar, adds the necessary libs to tmpjars.  This allows 
the libraries to maintain their signatures, and ultimately decreases the 
overhead of running Pig jobs since the broken-out libraries will now be able to 
exist in the filecache.  If this seems like an appropriate path, I will create 
the subtasks for Hive and Pig.
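
As a rough illustration of that trusted/untrusted split (a sketch only; the 
keystore, alias, and grants are assumptions, not the proposed patch), a Java 
policy file could look like:

{code}
// Hedged sketch of the permission split; all names are illustrative.
keystore "hadoop-trusted.keystore";

// Signed, trusted code (core Hadoop, Pig, Hive, Oozie, ...) is unrestricted.
grant signedBy "hadoop" {
  permission java.security.AllPermission;
};

// Unsigned user code gets only a minimal set of permissions.
grant {
  permission java.util.PropertyPermission "*", "read";
  permission java.io.FilePermission "${java.io.tmpdir}${/}-", "read,write";
};
{code}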


> Allow YARN containers to run with Java Security Manager
> ---
>
> Key: YARN-5280
> URL: https://issues.apache.org/jira/browse/YARN-5280
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager, yarn
>Affects Versions: 2.6.4
>Reporter: Greg Phillips
>Priority: Minor
> Attachments: YARN-5280.patch, YARNContainerSandbox.pdf
>
>
> YARN applications have the ability to perform privileged actions which have 
> the potential to add instability into the cluster. The Java Security Manager 
> can be used to prevent users from running privileged actions while still 
> allowing their core data processing use cases. 
> Introduce a YARN flag which will allow a Hadoop administrator to enable the 
> Java Security Manager for user code, while still providing complete 
> permissions to core Hadoop libraries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5283) Refactor container assignment into AbstractYarnScheduler#assignContainers

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371482#comment-15371482
 ] 

Hadoop QA commented on YARN-5283:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 5m 24s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
34s {color} | {color:green} root: The patch generated 0 new + 533 unchanged - 5 
fixed = 533 total (was 538) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 33m 35s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s 
{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 29s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12817216/YARN-5283.002.patch |
| JIRA Issue | YARN-5283 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 14d191d80ce2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0fd3980 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12275/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12275/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-tools/hadoop-sls U: . |
| Console output | 

[jira] [Commented] (YARN-5355) YARN Timeline Service v.2: alpha 2

2016-07-11 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371425#comment-15371425
 ] 

Sangjin Lee commented on YARN-5355:
---

I have just created and pushed branches {{YARN-5355}} and 
{{YARN-5355-branch-2}}. They are now open for this feature development.

> YARN Timeline Service v.2: alpha 2
> --
>
> Key: YARN-5355
> URL: https://issues.apache.org/jira/browse/YARN-5355
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: Timeline Service v2_ Ideas for Next Steps.pdf
>
>
> This is an umbrella JIRA for the alpha 2 milestone for YARN Timeline Service 
> v.2.
> This is developed on feature branches: {{YARN-5355}} for the trunk-based 
> development and {{YARN-5355-branch-2}} to maintain backports to branch-2. Any 
> subtask work on this JIRA will be committed to those 2 branches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5355) YARN Timeline Service v.2: alpha 2

2016-07-11 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5355:
--
Description: 
This is an umbrella JIRA for the alpha 2 milestone for YARN Timeline Service 
v.2.

This is developed on feature branches: {{YARN-5355}} for the trunk-based 
development and {{YARN-5355-branch-2}} to maintain backports to branch-2. Any 
subtask work on this JIRA will be committed to those 2 branches.

  was:This is an umbrella JIRA for the alpha 2 milestone for YARN Timeline 
Service v.2.


> YARN Timeline Service v.2: alpha 2
> --
>
> Key: YARN-5355
> URL: https://issues.apache.org/jira/browse/YARN-5355
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: Timeline Service v2_ Ideas for Next Steps.pdf
>
>
> This is an umbrella JIRA for the alpha 2 milestone for YARN Timeline Service 
> v.2.
> This is developed on feature branches: {{YARN-5355}} for the trunk-based 
> development and {{YARN-5355-branch-2}} to maintain backports to branch-2. Any 
> subtask work on this JIRA will be committed to those 2 branches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5355) YARN Timeline Service v.2: alpha 2

2016-07-11 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated YARN-5355:
--
Attachment: Timeline Service v2_ Ideas for Next Steps.pdf

> YARN Timeline Service v.2: alpha 2
> --
>
> Key: YARN-5355
> URL: https://issues.apache.org/jira/browse/YARN-5355
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: timelineserver
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: Timeline Service v2_ Ideas for Next Steps.pdf
>
>
> This is an umbrella JIRA for the alpha 2 milestone for YARN Timeline Service 
> v.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5283) Refactor container assignment into AbstractYarnScheduler#assignContainers

2016-07-11 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-5283:
-
Attachment: YARN-5283.002.patch

Accidentally included another patch in the same tree.

> Refactor container assignment into AbstractYarnScheduler#assignContainers
> -
>
> Key: YARN-5283
> URL: https://issues.apache.org/jira/browse/YARN-5283
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager, 
> scheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5283.001.patch, YARN-5283.002.patch
>
>
> CapacityScheduler#allocateContainersToNode() and 
> FairScheduler#attemptScheduling() have some common code that can be 
> refactored into a common abstract method like 
> AbstractYarnScheduler#assignContainers().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5265) Make HBase configuration for the timeline service configurable

2016-07-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371331#comment-15371331
 ] 

Hadoop QA commented on YARN-5265:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 31m 56s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
33s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 36s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 2s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
7s {color} | {color:green} YARN-2928 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 5s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s 
{color} | {color:green} YARN-2928 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 39s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 3 
new + 208 unchanged - 0 fixed = 211 total (was 208) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 41s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 18s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 47s 
{color} | {color:green} hadoop-yarn-server-timelineservice in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 28s {color} 
| {color:red} hadoop-yarn-server-timelineservice-hbase-tests in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 8s 
{color} | {color:green} hadoop-yarn-site in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | 

[jira] [Commented] (YARN-5283) Refactor container assignment into AbstractYarnScheduler#assignContainers

2016-07-11 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371328#comment-15371328
 ] 

Ray Chiang commented on YARN-5283:
--

[~subru], sanity checks like {{isReadyToAssignContainers}} are bad for 
subclassing in the sense that you can't force the method to return early 
without creating a subclass-only method that only 
{{AbstractYarnScheduler::assignContainers}} calls.  That's why I moved the 
{{isReadyToAssignContainers}} check outside {{assignContainers}} for all the 
subclasses.  That being said, if an 
{{assignContainers}}/{{assignContainersInternal}} approach is preferred, I'm 
okay with that.

There weren't any nice refactorings I could see within {{assignContainers}} at 
first glance, but I'll take a deeper look when I can.
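
For illustration, the {{assignContainers}}/{{assignContainersInternal}} shape 
being discussed would look roughly like this (a sketch only; the signatures 
and generics are simplified assumptions, not the patch):

{code}
// Hedged sketch of the template-method variant; names follow the comment
// above but the signatures are illustrative.
public abstract class AbstractYarnScheduler {
  // single public entry point: run the common guard, then delegate
  public final void assignContainers(SchedulerNode node) {
    if (!isReadyToAssignContainers(node)) {
      return;  // common sanity check, no longer repeated per subclass
    }
    assignContainersInternal(node);
  }

  protected abstract boolean isReadyToAssignContainers(SchedulerNode node);

  protected abstract void assignContainersInternal(SchedulerNode node);
}
{code}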

> Refactor container assignment into AbstractYarnScheduler#assignContainers
> -
>
> Key: YARN-5283
> URL: https://issues.apache.org/jira/browse/YARN-5283
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, fairscheduler, resourcemanager, 
> scheduler
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: YARN-5283.001.patch
>
>
> CapacityScheduler#allocateContainersToNode() and 
> FairScheduler#attemptScheduling() have some common code that can be 
> refactored into a common abstract method like 
> AbstractYarnScheduler#assignContainers().



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4765) Split TestHBaseTimelineStorage into multiple test classes

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4765:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Split TestHBaseTimelineStorage into multiple test classes
> -
>
> Key: YARN-4765
> URL: https://issues.apache.org/jira/browse/YARN-4765
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4455) Support fetching metrics by time range

2016-07-11 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371311#comment-15371311
 ] 

Varun Saxena edited comment on YARN-4455 at 7/11/16 6:09 PM:
-

[~jrottinghuis]
This is to fetch metric values as a time series within a specific time range. 
For instance, if an application runs from 11:00 am to 11:45 am and a user wants 
to see a specific metric as a time series from 11:20 am to 11:40 am only (maybe 
to debug some issue). This was the use case I had in mind.
We can use Scan#setColumnFamilyTimeRange which I think is available from HBase 
1.2 onwards. I have linked the relevant HBase JIRA here as well.


was (Author: varun_saxena):
[~jrottinghuis]
This is to fetch metrics in a time series within a specific time range. For 
instance, if an application runs from 11:00 am to 11:45 am and a user wants to 
see a specific metric as a time series from 11:20 am to 11:40 am only (maybe to 
debug some issue). This was the use case I had in mind.
We can use Scan#setColumnFamilyTimeRange which I think is available from HBase 
1.2 onwards. I have linked the relevant HBase JIRA here as well.

> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2016-07-11 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371311#comment-15371311
 ] 

Varun Saxena commented on YARN-4455:


[~jrottinghuis]
This is to fetch metrics in a time series within a specific time range. For 
instance, if an application runs from 11:00 am to 11:45 am and a user wants to 
see a specific metric as a time series from 11:20 am to 11:40 am only (maybe to 
debug some issue). This was the use case I had in mind.
We can use Scan#setColumnFamilyTimeRange which I think is available from HBase 
1.2 onwards. I have linked the relevant HBase JIRA here as well.
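
For context, a hedged sketch of that call (available on {{Scan}} in HBase 1.2 
and later; the column family and bounds below are illustrative, not the real 
schema constants):

{code}
// Hedged sketch: limit reads of the metrics column family to a time window.
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

long start = 1468236000000L;  // illustrative lower bound (epoch millis)
long end = 1468237200000L;    // illustrative upper bound (epoch millis)
Scan scan = new Scan();
scan.setColumnFamilyTimeRange(Bytes.toBytes("m"), start, end);
{code}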

> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5265) Make HBase configuration for the timeline service configurable

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5265:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Make HBase configuration for the timeline service configurable
> --
>
> Key: YARN-5265
> URL: https://issues.apache.org/jira/browse/YARN-5265
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Joep Rottinghuis
>  Labels: YARN-5355
> Attachments: ATS v2 cluster deployment v1.png, 
> YARN-5265-YARN-2928.01.patch, YARN-5265-YARN-2928.02.patch, 
> YARN-5265-YARN-2928.03.patch, YARN-5265-YARN-2928.04.patch, 
> YARN-5265-YARN-2928.05.patch
>
>
> Currently we create "default" HBase configurations; this works as long as the 
> user places the appropriate configuration on the classpath.
> This works fine for a standalone Hadoop cluster.
> However, if a user wants to monitor an HBase cluster and has a separate ATS 
> HBase cluster, then it can become tricky to create the right classpath for 
> the nodemanagers and still have tasks keep their separate configs.
> It will be much easier to add a yarn configuration to let cluster admins 
> configure which HBase to connect to when writing ATS metrics.
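
A hedged sketch of what such a knob could look like on the writer side (the 
property name and code shape are assumptions, not the committed patch; 
{{yarnConf}} stands in for the daemon's YarnConfiguration):

{code}
// Hedged sketch: overlay an admin-provided HBase client config for ATS
// writes; the property name and surrounding code are illustrative.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;

Configuration hbaseConf = HBaseConfiguration.create();
String hbaseConfFile =
    yarnConf.get("yarn.timeline-service.hbase.configuration.file");
if (hbaseConfFile != null) {
  hbaseConf.addResource(new Path(hbaseConfFile));  // separate ATS cluster
}
{code}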



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5094) some YARN container events have timestamp of -1 in REST output

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5094:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> some YARN container events have timestamp of -1 in REST output
> --
>
> Key: YARN-5094
> URL: https://issues.apache.org/jira/browse/YARN-5094
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Li Lu
>  Labels: YARN-5355
> Attachments: YARN-5094-YARN-2928.001.patch
>
>
> Some events in the YARN container entities have a timestamp of -1. The 
> RM-generated container events have proper timestamps. It appears that it's 
> the NM-generated events that have -1: YARN_CONTAINER_CREATED, 
> YARN_CONTAINER_FINISHED, YARN_NM_CONTAINER_LOCALIZATION_FINISHED, 
> YARN_NM_CONTAINER_LOCALIZATION_STARTED.
> On the YARN container page:
> {noformat}
> {
>   id: "YARN_CONTAINER_CREATED",
>   timestamp: -1,
>   info: { }
> },
> {
>   id: "YARN_CONTAINER_FINISHED",
>   timestamp: -1,
>   info: {
>     YARN_CONTAINER_EXIT_STATUS: 0,
>     YARN_CONTAINER_STATE: "RUNNING",
>     YARN_CONTAINER_DIAGNOSTICS_INFO: ""
>   }
> },
> {
>   id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
>   timestamp: -1,
>   info: { }
> },
> {
>   id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
>   timestamp: -1,
>   info: { }
> }
> {noformat}
> I think the data itself is OK; are the values just not being populated in the 
> REST output?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5156) YARN_CONTAINER_FINISHED of YARN_CONTAINERs will always have running state

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5156:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> YARN_CONTAINER_FINISHED of YARN_CONTAINERs will always have running state
> -
>
> Key: YARN-5156
> URL: https://issues.apache.org/jira/browse/YARN-5156
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Vrushali C
>  Labels: YARN-5355
> Attachments: YARN-5156-YARN-2928.01.patch
>
>
> On container finish, we're reporting "YARN_CONTAINER_STATE: "RUNNING"". Did 
> we design this deliberately, or is it a bug? 
> {code}
> {
>   metrics: [ ],
>   events: [
>     {
>       id: "YARN_CONTAINER_FINISHED",
>       timestamp: 1464213765890,
>       info: {
>         YARN_CONTAINER_EXIT_STATUS: 0,
>         YARN_CONTAINER_STATE: "RUNNING",
>         YARN_CONTAINER_DIAGNOSTICS_INFO: ""
>       }
>     },
>     {
>       id: "YARN_NM_CONTAINER_LOCALIZATION_FINISHED",
>       timestamp: 1464213761133,
>       info: { }
>     },
>     {
>       id: "YARN_CONTAINER_CREATED",
>       timestamp: 1464213761132,
>       info: { }
>     },
>     {
>       id: "YARN_NM_CONTAINER_LOCALIZATION_STARTED",
>       timestamp: 1464213761132,
>       info: { }
>     }
>   ],
>   id: "container_e15_1464213707405_0001_01_18",
>   type: "YARN_CONTAINER",
>   createdtime: 1464213761132,
>   info: {
>     YARN_CONTAINER_ALLOCATED_PRIORITY: "20",
>     YARN_CONTAINER_ALLOCATED_VCORE: 1,
>     YARN_CONTAINER_ALLOCATED_HOST_HTTP_ADDRESS: "10.22.16.164:0",
>     UID: "yarn_cluster!application_1464213707405_0001!YARN_CONTAINER!container_e15_1464213707405_0001_01_18",
>     YARN_CONTAINER_ALLOCATED_HOST: "10.22.16.164",
>     YARN_CONTAINER_ALLOCATED_MEMORY: 1024,
>     SYSTEM_INFO_PARENT_ENTITY: {
>       type: "YARN_APPLICATION_ATTEMPT",
>       id: "appattempt_1464213707405_0001_01"
>     },
>     YARN_CONTAINER_ALLOCATED_PORT: 64694
>   },
>   configs: { },
>   isrelatedto: { },
>   relatesto: { }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5260) Review / Recommendations for hbase writer code

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5260:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Review / Recommendations for hbase writer code
> --
>
> Key: YARN-5260
> URL: https://issues.apache.org/jira/browse/YARN-5260
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> [~ted_yu] is graciously reviewing the hbase writer related code and has some 
> recommendations (more to come as the review progresses). I will keep track of 
> those in this jira and perhaps spin off other jira(s) depending on the scope 
> of changes. 
> For FlowRunCoprocessor.java:
>  
> -  private HRegion region;
> Try to declare it as Region, the interface. This way, you are able to call 
> methods that are stable across future releases.
> -  private long getCellTimestamp(long timestamp, List<Tag> tags) {
> tags is not used; remove the parameter.
> For FlowScanner:
> - private final InternalScanner flowRunScanner;
> Currently InternalScanner is Private. If you must use it, try surfacing your 
> case to hbase so that it can be marked:
> @InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)
> @InterfaceStability.Evolving
> w.r.t. regionScanner :
> {code} 
> if (internalScanner instanceof RegionScanner) {
>   this.regionScanner = (RegionScanner) internalScanner;
> }
> {code}
> I see IllegalStateException being thrown in some methods when regionScanner 
> is null. Better to bail out early in the ctor.
> {code}
>   public static AggregationOperation getAggregationOperationFromTagsList(
>   List<Tag> tags) {
> for (AggregationOperation aggOp : AggregationOperation.values()) {
>   for (Tag tag : tags) {
> if (tag.getType() == aggOp.getTagType()) {
>   return aggOp;
> {code}
> The above nested loop can be improved (a lot):
> values() returns an array. If you pre-generate a Set 
> (https://docs.oracle.com/javase/7/docs/api/java/util/EnumSet.html) containing 
> all the values, the outer loop can be omitted.
> You iterate through tags and see if tag.getType() is in the Set.
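> A hedged sketch of that suggestion, using a pre-built map keyed by tag type 
> rather than the EnumSet mentioned above (signatures are assumed from the 
> snippets in this review, not verified against the actual code):
> {code}
> // Build the lookup once instead of looping over values() on every call.
> private static final Map<Byte, AggregationOperation> OPS_BY_TAG_TYPE =
>     new HashMap<>();
> static {
>   for (AggregationOperation aggOp : AggregationOperation.values()) {
>     OPS_BY_TAG_TYPE.put(aggOp.getTagType(), aggOp);
>   }
> }
>
> public static AggregationOperation getAggregationOperationFromTagsList(
>     List<Tag> tags) {
>   // Single pass over the tags; the nested loop is gone.
>   for (Tag tag : tags) {
>     AggregationOperation aggOp = OPS_BY_TAG_TYPE.get(tag.getType());
>     if (aggOp != null) {
>       return aggOp;
>     }
>   }
>   return null;
> }
> {code}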



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5229) Refactor #isApplicationEntity and #getApplicationEvent from HBaseTimelineWriterImpl

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5229:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Refactor #isApplicationEntity and #getApplicationEvent from 
> HBaseTimelineWriterImpl
> ---
>
> Key: YARN-5229
> URL: https://issues.apache.org/jira/browse/YARN-5229
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Joep Rottinghuis
>Assignee: Vrushali C
>Priority: Minor
>  Labels: YARN-5355
> Attachments: YARN-5229-YARN-2928.01.patch, 
> YARN-5229-YARN-2928.02.patch, YARN-5229-YARN-2928.03.patch, 
> YARN-5229-YARN-2928.04.patch
>
>
> As [~gtCarrera9] commented in YARN-5170:
> bq. In HBaseTimelineWriterImpl isApplicationEntity and getApplicationEvent 
> seem to be awkward. Looks more like something related to TimelineEntity or 
> ApplicationEntity
> In YARN-5170 we just made the methods private, and in this separate jira we 
> can refactor these methods into TimelineEntity or ApplicationEntity.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5071) address HBase compatibility issues with trunk

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5071?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5071:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> address HBase compatibility issues with trunk
> -
>
> Key: YARN-5071
> URL: https://issues.apache.org/jira/browse/YARN-5071
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
>  Labels: YARN-5355
>
> The trunk is now adding or planning to add more and more 
> backward-incompatible changes. Some examples include
> - remove v.1 metrics classes (HADOOP-12504)
> - update jersey version (HADOOP-9613)
> - target java 8 by default (HADOOP-11858)
> This poses big challenges for the timeline service v.2 as we have a 
> dependency on hbase which depends on an older version of hadoop.
> We need to find a way to solve/contain/manage these risks before it is too 
> late.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2016-07-11 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371287#comment-15371287
 ] 

Joep Rottinghuis commented on YARN-4455:


Not quite clear what is meant by this jira. 
[~varun_saxena] perhaps you can elaborate a bit more on what you had in mind for 
this (and whether it isn't already covered by the current filter capabilities).

> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4985) Refactor the coprocessor code & other definition classes into independent packages

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4985:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Refactor the coprocessor code & other definition classes into independent 
> packages
> --
>
> Key: YARN-4985
> URL: https://issues.apache.org/jira/browse/YARN-4985
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> As part of the coprocessor deployment, we have realized that it will be much 
> cleaner to have the coprocessor code sit in a package which does not depend 
> on hadoop-yarn-server classes. It only needs hbase and other util classes.
> These util classes and tag definition related classes can be refactored into 
> their own independent "definition" package, so that making changes to the 
> coprocessor code, upgrading hbase, deploying hbase on a cluster with a 
> different hadoop version, etc., all become operationally much easier and less 
> error prone than juggling different library jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5011) Support metric filters for flow runs

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-5011:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Support metric filters for flow runs
> 
>
> Key: YARN-5011
> URL: https://issues.apache.org/jira/browse/YARN-5011
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
> Attachments: YARN-5011-YARN-2928.01.patch
>
>
> Support metric filters for flow runs



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4786) Enhance hbase coprocessor aggregation operations:GLOBAL_MIN, LATEST_MIN etc and FINAL attributes

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4786:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Enhance hbase coprocessor aggregation operations:GLOBAL_MIN, LATEST_MIN etc 
> and FINAL attributes
> 
>
> Key: YARN-4786
> URL: https://issues.apache.org/jira/browse/YARN-4786
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> As part of YARN-4062, Joep and I had been discussing min/max 
> operations and the final attributes. 
> YARN-4062 has GLOBAL_MIN, GLOBAL_MAX and SUM operations. It presently 
> indicates SUM_FINAL for a cell that contains a metric that is the final value 
> for the metric.
> We should enhance this such that the set of aggregation dimensions (SUM, MIN, 
> MAX, etc.) is really set at a per-column level and shouldn't be passed from 
> the client, but be instrumented by the ColumnHelper infrastructure instead. 
> We should probably use a different tag value for that.
> Both aggregation dimension and this "FINAL_VALUE" or whatever abbreviation we 
> use are needed to determine the right thing to do for compaction. Only one 
> value needs to have this final value bit / tag set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4821) Have a separate NM timeline publishing-interval

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4821:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Have a separate NM timeline publishing-interval
> ---
>
> Key: YARN-4821
> URL: https://issues.apache.org/jira/browse/YARN-4821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Naganarasimha G R
>  Labels: YARN-5355
> Attachments: YARN-4821-YARN-2928.v1.001.patch
>
>
> Currently the interval with which NM publishes container CPU and memory 
> metrics is tied to {{yarn.nodemanager.resource-monitor.interval-ms}} whose 
> default is 3 seconds. This is too aggressive.
> There should be a separate configuration that controls how often 
> {{NMTimelinePublisher}} publishes container metrics.
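> For example, a sketch of what reading such a setting could look like (the 
> property name and default below are hypothetical, not an existing key):
> {code}
> // Decouple the publishing interval from the resource-monitor interval.
> long publishIntervalMs = conf.getLong(
>     "yarn.nodemanager.timeline.metrics-publish-interval-ms", 10000L);
> {code}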



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4736) Issues with HBaseTimelineWriterImpl in single node hadoop & hbase cluster

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4736:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Issues with HBaseTimelineWriterImpl in single node hadoop & hbase cluster 
> --
>
> Key: YARN-4736
> URL: https://issues.apache.org/jira/browse/YARN-4736
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Naganarasimha G R
>Assignee: Vrushali C
>Priority: Critical
>  Labels: YARN-5355
> Attachments: NM_Hang_hbase1.0.3.tar.gz, hbaseException.log, 
> threaddump.log
>
>
> Faced some issues while running ATSv2 in a single node Hadoop cluster, with 
> Hbase launched with embedded zookeeper on the same node.
> # Due to some NPE issues I could see the NM trying to shut down, but the 
> NM daemon process did not complete shutdown due to locks.
> # Got some exceptions related to Hbase after the application finished 
> execution successfully. 
> Will attach logs and the trace for the same.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4504) Retrospect on defaults for created time while querying

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4504:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Retrospect on defaults for created time while querying
> --
>
> Key: YARN-4504
> URL: https://issues.apache.org/jira/browse/YARN-4504
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4675) Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4675:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Reorganize TimeClientImpl into TimeClientV1Impl and TimeClientV2Impl
> 
>
> Key: YARN-4675
> URL: https://issues.apache.org/jira/browse/YARN-4675
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355
> Attachments: YARN-4675-YARN-2928.v1.001.patch
>
>
> We need to reorganize TimeClientImpl into TimeClientV1Impl, 
> TimeClientV2Impl and, if required, a base class, so that it's clear which part 
> of the code belongs to which version and the code is thus more maintainable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4561) Compaction coprocessor enhancements: On/Off, whitelisting, blacklisting

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4561:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Compaction coprocessor enhancements: On/Off, whitelisting, blacklisting
> ---
>
> Key: YARN-4561
> URL: https://issues.apache.org/jira/browse/YARN-4561
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> YARN-4062 deals with the basic flush and compaction related coprocessor 
> functionality. We also need to ensure we can turn compaction on/off as a 
> whole (in case of production issues) as well as provide a way to allow for 
> blacklisting and whitelisting of compaction processing for certain records.
> For instance, we may want to compact only those records which belong to 
> applications in that datacenter. This way we do not interfere with hbase 
> replication causing coprocessors to process the same record in more than one 
> dc at the same time.
> Also, we might want to not compact/process certain records, perhaps those 
> whose rowkey matches certain criteria.
> Filing this jira to track these enhancements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4489) Limit flow runs returned while querying flows

2016-07-11 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371295#comment-15371295
 ] 

Joep Rottinghuis commented on YARN-4489:


Don't we already have this in alpha-1?

> Limit flow runs returned while querying flows
> -
>
> Key: YARN-4489
> URL: https://issues.apache.org/jira/browse/YARN-4489
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4455) Support fetching metrics by time range

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4455:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4343) Need to support Application History Server on ATSV2

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4343:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Need to support Application History Server on ATSV2
> ---
>
> Key: YARN-4343
> URL: https://issues.apache.org/jira/browse/YARN-4343
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>Assignee: Naganarasimha G R
>  Labels: YARN-5355
>
> AHS is used by the CLI and Webproxy (REST); if the application related 
> information is not found in the RM, then it tries to fetch it from AHS and 
> shows it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4368) Support Multiple versions of the timeline service at the same time

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4368:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Support Multiple versions of the timeline service at the same time
> --
>
> Key: YARN-4368
> URL: https://issues.apache.org/jira/browse/YARN-4368
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Naganarasimha G R
>  Labels: YARN-5355
>
> During a rolling upgrade it will be helpful to also have the older version of 
> the timeline server running so that existing apps can submit to the older 
> version of ATS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4489) Limit flow runs returned while querying flows

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4489:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Limit flow runs returned while querying flows
> -
>
> Key: YARN-4489
> URL: https://issues.apache.org/jira/browse/YARN-4489
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4239) Flow page for Web UI

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4239:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Flow page for Web UI
> 
>
> Key: YARN-4239
> URL: https://issues.apache.org/jira/browse/YARN-4239
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
> Attachments: App-selected-sunburst.png, 
> Flowrun-page-metrics-link-to-sunburst.png, Sunburst-unselected.png, 
> Task-selected-sunburst.png, YARN-4239.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4261) fix the order of timelinereader in yarn/yarn.cmd

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4261:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> fix the order of timelinereader in yarn/yarn.cmd
> 
>
> Key: YARN-4261
> URL: https://issues.apache.org/jira/browse/YARN-4261
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Varun Saxena
>Priority: Trivial
>  Labels: YARN-5355
>
> The order of the timelinereader command is not correct in yarn/yarn.cmd.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5356) ResourceUtilization should also include resource availability

2016-07-11 Thread Nathan Roberts (JIRA)
Nathan Roberts created YARN-5356:


 Summary: ResourceUtilization should also include resource 
availability
 Key: YARN-5356
 URL: https://issues.apache.org/jira/browse/YARN-5356
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager, resourcemanager
Affects Versions: 3.0.0-alpha1
Reporter: Nathan Roberts


Currently ResourceUtilization contains absolute quantities of resource used 
(e.g. 4096MB memory used). It would be good if it also included how much of 
that resource is actually available on the node so that the RM can use this 
data to schedule more effectively (overcommit, etc.).

Currently the only available information is the Resource the node registered 
with (or later updated using updateNodeResource). However, this isn't really 
sufficient to get a good view of how utilized a resource is. For example, if a 
node reports 400% CPU utilization, does that mean it's completely full, or 
barely utilized? Today there is no reliable way to figure this out.
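As a sketch of the kind of normalization this would enable (names here are 
illustrative, not an existing API):
{code}
// 400% CPU on a 48-vcore node is ~8% utilized; on a 4-vcore node it is full.
float cpuFraction = reportedCpuPercent / (availableVcores * 100.0f);
{code}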

[~elgoiri] - Lots of good work is happening in YARN-2965, so I'm curious whether 
you have thoughts/opinions on this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4173) Ensure the final values for metrics/events are emitted/stored at APP completion time

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4173:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Ensure the final values for metrics/events are emitted/stored at APP 
> completion time
> 
>
> Key: YARN-4173
> URL: https://issues.apache.org/jira/browse/YARN-4173
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> When an application is finishing, the final values of metrics/events need to 
> be written to the backend as final values from all the AM/RM/NM processes for 
> that app.
> For the flow run table (YARN-3901), we need to know which values are the 
> final ones for metrics so that they can be tagged accordingly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4220) [Storage implementation] Support getEntities with only Application id but no flow and flow run ID

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4220:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> [Storage implementation] Support getEntities with only Application id but no 
> flow and flow run ID
> -
>
> Key: YARN-4220
> URL: https://issues.apache.org/jira/browse/YARN-4220
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>Priority: Minor
>  Labels: YARN-5355
>
> Currently we're enforcing flow and flowrun id to be non-null values on 
> {{getEntities}}. We can actually query the appToFlow table to figure out an 
> application's flow id and flowrun id if they're missing. This will simplify 
> normal queries. 
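> As a rough sketch of the proposed fallback (the lookup helper and its types 
> are hypothetical, not the actual storage API):
> {code}
> // Recover the missing flow context from the appToFlow table, then proceed
> // with the normal entity query.
> if (flowName == null || flowRunId == null) {
>   FlowContext ctx = lookupAppToFlow(clusterId, appId); // hypothetical helper
>   flowName = ctx.getFlowName();
>   flowRunId = ctx.getFlowRunId();
> }
> {code}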



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4116) refactor ColumnHelper read* methods

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4116:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> refactor ColumnHelper read* methods
> ---
>
> Key: YARN-4116
> URL: https://issues.apache.org/jira/browse/YARN-4116
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>  Labels: YARN-5355
>
> Currently we have several ColumnHelper.read* methods that are slightly 
> different in terms of their initial conditions and behave differently 
> accordingly. We may want to refactor them so that code reuse is strong 
> and the API stays reasonable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4097) Create POC timeline web UI with new YARN web UI framework

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4097:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Create POC timeline web UI with new YARN web UI framework
> -
>
> Key: YARN-4097
> URL: https://issues.apache.org/jira/browse/YARN-4097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: YARN-5355
> Attachments: Screen Shot 2016-02-24 at 15.57.38.png, Screen Shot 
> 2016-02-24 at 15.57.53.png, Screen Shot 2016-02-24 at 15.58.08.png, Screen 
> Shot 2016-02-24 at 15.58.26.png, YARN-4097-bugfix.patch
>
>
> As planned, we need to try out the new YARN web UI framework and implement the 
> timeline v2 web UI on top of it. This JIRA proposes to build the basic active 
> flow and application lists from the timeline data. We can add more content 
> once we get used to this framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4061) [Fault tolerance] Fault tolerant writer for timeline v2

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4061:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> [Fault tolerance] Fault tolerant writer for timeline v2
> ---
>
> Key: YARN-4061
> URL: https://issues.apache.org/jira/browse/YARN-4061
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: YARN-5355
> Attachments: FaulttolerantwriterforTimelinev2.pdf
>
>
> We need to build a timeline writer that is resilient to backend storage 
> downtime and timeline collector failures. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3981) support timeline clients not associated with an application

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3981:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> support timeline clients not associated with an application
> ---
>
> Key: YARN-3981
> URL: https://issues.apache.org/jira/browse/YARN-3981
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Zhijie Shen
>  Labels: YARN-5355
>
> In the current v.2 design, all timeline writes must belong in a 
> flow/application context (cluster + user + flow + flow run + application).
> But there are use cases that require writing data outside the context of an 
> application. One such example is a higher level client (e.g. tez client or 
> hive/oozie/cascading client) writing flow-level data that spans multiple 
> applications. We need to find a way to support them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4069) For long running apps (> 2 days), populate flow activity table

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-4069:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> For long running apps (> 2 days), populate flow activity table
> --
>
> Key: YARN-4069
> URL: https://issues.apache.org/jira/browse/YARN-4069
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
>  Labels: YARN-5355
>
> YARN-4063 / YARN-3901 contain the work being done to populate the flow 
> activity and flow run tables.
> The flow activity table is updated each time a yarn application is created 
> and finishes. So if an application runs for more than 3 days, day 1 has an 
> entry for the flow's start time and day 3 has an entry for the flow's end 
> time, but day 2 has no entry for that flow. 
> Filing this jira to ensure that for long running apps, the flow activity table 
> does get a snapshot time entered for each day that an application is running 
> in that flow.
> It may be the case that for ALL apps (long running or not) the same update 
> may be done in the flow activity table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3914) Entity created time should be part of the row key of entity table

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3914:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Entity created time should be part of the row key of entity table
> -
>
> Key: YARN-3914
> URL: https://issues.apache.org/jira/browse/YARN-3914
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
>  Labels: YARN-5355
>
> Entity created time should be part of the row key of the entity table, between 
> entity type and entity id. The reason to have it is to index the entities. 
> Though we cannot index the entities on all kinds of information, indexing 
> them by created time is very necessary. Without it, every query 
> for the latest entities that belong to an application and a type will scan 
> through all the entities that belong to them. For example, consider listing 
> the 100 most recently started containers in a YARN app.
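> A schematic sketch of the proposed key layout (string form for readability; 
> real keys would be byte-encoded, and a common trick for "latest first" 
> ordering is to store an inverted timestamp, as assumed here):
> {code}
> // createdTime sits between entity type and entity id, so entities of one
> // type are laid out in time order within an application.
> String rowKey = user + "!" + cluster + "!" + flow + "!" + flowRunId + "!"
>     + appId + "!" + entityType + "!" + (Long.MAX_VALUE - createdTime)
>     + "!" + entityId;
> {code}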



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3907) create the flow-version table

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3907:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> create the flow-version table
> -
>
> Key: YARN-3907
> URL: https://issues.apache.org/jira/browse/YARN-3907
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>  Labels: YARN-5355
>
> Per discussions on YARN-3815, create the flow-version table that maps flow 
> versions with various data about the versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3880) Writing more RM side app-level metrics

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3880:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Writing more RM side app-level metrics
> --
>
> Key: YARN-3880
> URL: https://issues.apache.org/jira/browse/YARN-3880
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Zhijie Shen
>Assignee: Naganarasimha G R
>  Labels: YARN-5355
>
> In YARN-3044, we implemented an analog of the metrics publisher for ATS v1. 
> While it helps to write app/attempt/container life cycle events, it really 
> doesn't write many of the app-level system metrics that the RM now has. To 
> list the metrics that I found missing:
> * runningContainers
> * memorySeconds
> * vcoreSeconds
> * preemptedResourceMB
> * preemptedResourceVCores
> * numNonAMContainerPreempted
> * numAMContainerPreempted
> Please feel free to add more to the list if you find something that's not 
> covered.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3881) Writing RM cluster-level metrics

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3881:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Writing RM cluster-level metrics
> 
>
> Key: YARN-3881
> URL: https://issues.apache.org/jira/browse/YARN-3881
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
>  Labels: YARN-5355
> Attachments: metrics.json
>
>
> RM has a bunch of metrics that we may want to write into the timeline backend. 
> I attached the metrics.json that I've crawled via 
> {{http://localhost:8088/jmx?qry=Hadoop:*}}. IMHO, we need to pay attention to 
> three groups of metrics:
> 1. QueueMetrics
> 2. JvmMetrics
> 3. ClusterMetrics
> The problem is that unlike other metrics, which belong to a single 
> application, these belong to the RM or are cluster-wide. Therefore, the 
> current write path is not going to work for these metrics because they don't 
> have the associated user/flow/app context info. We need to rethink how to 
> model cross-app metrics and the api to handle them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3895) Support ACLs in TimelineReader

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3895?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3895:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Support ACLs in TimelineReader
> --
>
> Key: YARN-3895
> URL: https://issues.apache.org/jira/browse/YARN-3895
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3872) TimelineReader Web UI Implementation

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3872:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> TimelineReader Web UI Implementation
> 
>
> Key: YARN-3872
> URL: https://issues.apache.org/jira/browse/YARN-3872
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3865) Backward compatibility of reader with ATSv1

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3865:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Backward compatibility of reader with ATSv1
> ---
>
> Key: YARN-3865
> URL: https://issues.apache.org/jira/browse/YARN-3865
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3874) Optimize and synchronize FS Reader and Writer Implementations

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3874:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Optimize and synchronize FS Reader and Writer Implementations
> -
>
> Key: YARN-3874
> URL: https://issues.apache.org/jira/browse/YARN-3874
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: YARN-2928
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355
> Attachments: YARN-3874-YARN-2928.01.patch, 
> YARN-3874-YARN-2928.02.patch, YARN-3874-YARN-2928.03.patch
>
>
> Combine FS Reader and Writer Implementations and make them consistent with 
> each other.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3879) [Storage implementation] Create HDFS backing storage implementation for ATS reads

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3879:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> [Storage implementation] Create HDFS backing storage implementation for ATS 
> reads
> -
>
> Key: YARN-3879
> URL: https://issues.apache.org/jira/browse/YARN-3879
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: YARN-5355
>
> Reader version of YARN-3841



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3841) [Storage implementation] Adding retry semantics to HDFS backing storage

2016-07-11 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371276#comment-15371276
 ] 

Joep Rottinghuis commented on YARN-3841:


[~ozawa] we have closed the work on the alpha-1 branch (which [~sjlee0] merged 
to trunk over the weekend).
We'll continue our work on YARN-5355. I've moved this jira over.
Patches for this jira will be named YARN-3841-YARN-5355.02.patch

> [Storage implementation] Adding retry semantics to HDFS backing storage
> ---
>
> Key: YARN-3841
> URL: https://issues.apache.org/jira/browse/YARN-3841
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: YARN-5355
> Attachments: YARN-3841.001.patch
>
>
> HDFS backing storage is useful for the following scenarios.
> 1. For Hadoop clusters which don't run HBase.
> 2. For fallback from HBase when the HBase cluster is temporarily unavailable. 
> Quoting the ATS design document of YARN-2928:
> {quote}
> In the case the HBase storage is not available, the plugin should buffer the 
> writes temporarily (e.g. HDFS), and flush them once the storage comes back 
> online. Reading and writing to hdfs as the backup storage could potentially 
> use the HDFS writer plugin unless the complexity of generalizing the HDFS 
> writer plugin for this purpose exceeds the benefits of reusing it here.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3841) [Storage implementation] Adding retry semantics to HDFS backing storage

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3841:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> [Storage implementation] Adding retry semantics to HDFS backing storage
> ---
>
> Key: YARN-3841
> URL: https://issues.apache.org/jira/browse/YARN-3841
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>  Labels: YARN-5355
> Attachments: YARN-3841.001.patch
>
>
> HDFS backing storage is useful for the following scenarios.
> 1. For Hadoop clusters which don't run HBase.
> 2. For fallback from HBase when the HBase cluster is temporarily unavailable. 
> Quoting the ATS design document of YARN-2928:
> {quote}
> In the case the HBase storage is not available, the plugin should buffer the 
> writes temporarily (e.g. HDFS), and flush them once the storage comes back 
> online. Reading and writing to hdfs as the backup storage could potentially 
> use the HDFS writer plugin unless the complexity of generalizing the HDFS 
> writer plugin for this purpose exceeds the benefits of reusing it here.
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3822) Scalability validation of RM writing app/attempt/container lifecycle events

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3822:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Scalability validation of RM writing app/attempt/container lifecycle events
> ---
>
> Key: YARN-3822
> URL: https://issues.apache.org/jira/browse/YARN-3822
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, timelineserver
>Reporter: Zhijie Shen
>Assignee: Naganarasimha G R
>  Labels: YARN-5355
>
> We need to test how scalable the RM metrics publisher is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3818) [Aggregation] Queue-level Aggregation on Application States table

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3818:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> [Aggregation] Queue-level Aggregation on Application States table
> -
>
> Key: YARN-3818
> URL: https://issues.apache.org/jira/browse/YARN-3818
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Junping Du
>Assignee: Junping Du
>  Labels: YARN-5355
>
> Queue-level aggregation represents summary info for a specific queue; it 
> should include accumulated values and statistical means over the 
> applications that belong to the queue (logically or physically).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3817) [Aggregation] Flow and User level aggregation on Application States table

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3817:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> [Aggregation] Flow and User level aggregation on Application States table
> -
>
> Key: YARN-3817
> URL: https://issues.apache.org/jira/browse/YARN-3817
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Junping Du
>Assignee: Li Lu
>  Labels: YARN-5355
> Attachments: Detail Design for Flow and User Level Aggregation.pdf, 
> YARN-3817-poc-v1-rebase.patch, YARN-3817-poc-v1.patch
>
>
> We need time-based flow/user level aggregation to present flow/user related 
> states to end users.
> Flow level represents summary info for a specific flow. User level aggregation 
> represents summary info for a specific user; it should include summary info of 
> accumulated values and statistical means (at two levels: application and 
> flow), like: number of flows, applications, resource consumption, resource 
> means per app or flow, etc. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3741) consider nulling member maps/sets of TimelineEntity

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3741?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3741:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> consider nulling member maps/sets of TimelineEntity
> ---
>
> Key: YARN-3741
> URL: https://issues.apache.org/jira/browse/YARN-3741
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>  Labels: YARN-5355
>
> Currently there are multiple collection members of TimelineEntity that are 
> always instantiated, regardless of whether they are used or not: info, 
> configs, metrics, events, isRelatedToEntities, and relatesToEntities.
> Since TimelineEntities will be created very often and in lots of cases many 
> of these members will be empty, creating these empty collections is wasteful 
> in terms of garbage collector pressure.
> It would be good to start out with null members, and instantiate these 
> collections only if they are actually used. Of course, we need to make that 
> contract very clear and refactor all client code to handle that scenario.
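> A minimal sketch of the lazy-instantiation idea for one such member (field 
> and method names here are illustrative):
> {code}
> // Start with a null member; allocate only on first write.
> private Map<String, Object> info;
>
> public void addInfo(String key, Object value) {
>   if (info == null) {
>     info = new HashMap<>();
>   }
>   info.put(key, value);
> }
>
> public Map<String, Object> getInfo() {
>   // Never hand callers a null; return an immutable empty view instead.
>   return info == null ? Collections.emptyMap() : info;
> }
> {code}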



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3815) [Aggregation] Application/Flow/User/Queue Level Aggregations

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3815:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> [Aggregation] Application/Flow/User/Queue Level Aggregations
> 
>
> Key: YARN-3815
> URL: https://issues.apache.org/jira/browse/YARN-3815
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Critical
>  Labels: YARN-5355
> Attachments: Timeline Service Nextgen Flow, User, Queue Level 
> Aggregations (v1).pdf, aggregation-design-discussion.pdf, 
> hbase-schema-proposal-for-aggregation.pdf
>
>
> Per previous discussions in some design documents for YARN-2928, the basic 
> scenario is that the query for stats can happen on:
> - Application level, expected return: an application with aggregated stats
> - Flow level, expected return: aggregated stats for a flow_run, flow_version 
> and flow 
> - User level, expected return: aggregated stats for applications submitted by 
> the user
> - Queue level, expected return: aggregated stats for applications within the 
> queue
> Application states are the basic building block for all other level 
> aggregations. We can provide Flow/User/Queue level aggregated statistics 
> based on application states (a dedicated table for application states is 
> needed, which is missing from previous design documents like the HBase/Phoenix 
> schema design). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3650) Consider concurrency situations for TimelineWriter

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3650:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Consider concurrency situations for TimelineWriter
> --
>
> Key: YARN-3650
> URL: https://issues.apache.org/jira/browse/YARN-3650
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>  Labels: YARN-5355
>
> [~jrottinghuis] brought up an interesting point in YARN-3411. Filing this 
> jira to track and discuss the following:
> For TimelineWriter and its implementations, is there an expectation set around
> concurrency? Is any synchronization expected or needed to ensure visibility 
> when calls happen from different threads?
> What about entities: are they expected to be immutable once passed to the 
> write method?
> Similarly, for the constructor, are we assuming that the configuration object 
> will not be modified while the TimelineWriter is being constructed?
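
One possible answer to these questions, sketched under the assumption that 
writes should simply be serialized; the Writer interface below is a stand-in, 
not the real TimelineWriter API:

{code:java}
// Sketch: serialize writes through one lock so implementations need no
// internal synchronization, and treat entities as immutable once passed
// in. The Writer interface is a stand-in for TimelineWriter.
public class SynchronizedWriter {

  interface Writer {
    void write(Object entities) throws Exception;
  }

  private final Writer delegate;
  private final Object lock = new Object();

  public SynchronizedWriter(Writer delegate) {
    this.delegate = delegate;
  }

  // Entering and exiting the synchronized block establishes
  // happens-before edges, giving callers on any thread a consistent view.
  public void write(Object entities) throws Exception {
    synchronized (lock) {
      delegate.write(entities);
    }
  }
}
{code}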



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3649) Allow configurable prefix for hbase table names (like prod, exp, test etc)

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3649:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Allow configurable prefix for hbase table names (like prod, exp, test etc)
> --
>
> Key: YARN-3649
> URL: https://issues.apache.org/jira/browse/YARN-3649
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
> Attachments: YARN-3649-YARN-2928.01.patch
>
>
> As per [~jrottinghuis]'s suggestion in YARN-3411, it would be a good idea to 
> have a configurable prefix for hbase table names.
> This way we can easily run a staging, a test, a production, or any other 
> setup in the same HBase instance without having to override every single 
> table name in the config.
> One could simply override the default prefix and be off and running.
> For the prefix, potential candidates are "tst", "prod", "exp", etc. One can 
> then still override an individual table name if needed, but managing a whole 
> setup will be easier.
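
A minimal sketch of how such a prefix could be resolved from configuration; 
the property name and default below are assumptions for illustration, and the 
patch may use a different key:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch: prepend a configurable prefix to every table name. The
// property name "yarn.timeline-service.hbase-schema.prefix" and the
// "prod." default are assumptions, not necessarily what the patch uses.
public class PrefixedTableName {

  public static String resolve(Configuration conf, String baseName) {
    String prefix =
        conf.get("yarn.timeline-service.hbase-schema.prefix", "prod.");
    return prefix + baseName;
  }
}
{code}

For example, resolve(conf, "timelineservice.entity") would yield 
"prod.timelineservice.entity" unless the prefix is overridden, so switching a 
whole setup between production and test becomes a one-property change.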



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3649) Allow configurable prefix for hbase table names (like prod, exp, test etc)

2016-07-11 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15371245#comment-15371245
 ] 

Joep Rottinghuis commented on YARN-3649:


Does this need to be rebased?
Should we update the documentation as well?

> Allow configurable prefix for hbase table names (like prod, exp, test etc)
> --
>
> Key: YARN-3649
> URL: https://issues.apache.org/jira/browse/YARN-3649
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
> Attachments: YARN-3649-YARN-2928.01.patch
>
>
> As per [~jrottinghuis]'s suggestion in YARN-3411, it would be a good idea to 
> have a configurable prefix for hbase table names.
> This way we can easily run a staging, a test, a production, or any other 
> setup in the same HBase instance without having to override every single 
> table name in the config.
> One could simply override the default prefix and be off and running.
> For the prefix, potential candidates are "tst", "prod", "exp", etc. One can 
> then still override an individual table name if needed, but managing a whole 
> setup will be easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3622) Enable application client to communicate with new timeline service

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3622:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Enable application client to communicate with new timeline service
> --
>
> Key: YARN-3622
> URL: https://issues.apache.org/jira/browse/YARN-3622
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
>  Labels: YARN-5355
>
> A YARN application has a client and an AM. We have a story to make 
> TimelineClient work inside the AM for v2, but not for the client. 
> TimelineClient inside the application client needs to be taken care of too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3616) determine how to generate YARN container events

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3616:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> determine how to generate YARN container events
> ---
>
> Key: YARN-3616
> URL: https://issues.apache.org/jira/browse/YARN-3616
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-2928
>Reporter: Sangjin Lee
>Assignee: Naganarasimha G R
>  Labels: YARN-5355
>
> The initial design called for the node manager to write YARN container events 
> to take advantage of distributed writes; the RM acting as the sole writer of 
> all YARN container events would have significant scalability problems.
> Still, there are some types of events that are not captured by the NM. The 
> current implementation has both: the RM writing container events and the NM 
> writing container events.
> We need to sort this out and decide how we can write all needed container 
> events in a scalable manner.
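
While the design is being sorted out, one way to keep both paths without 
committing to either is a single switch; the configuration key below is 
invented for this sketch and is not a real YARN property:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical switch between NM-side and RM-side container event
// publishing. The property name is invented for this sketch; it is
// not an actual YARN configuration key.
public class ContainerEventSource {

  enum Source { NM, RM }

  static Source fromConf(Configuration conf) {
    boolean nmPublishes = conf.getBoolean(
        "yarn.timeline-service.nm-publishes-container-events", true);
    return nmPublishes ? Source.NM : Source.RM;
  }
}
{code}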



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3588) Timeline entity uniqueness

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3588:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Timeline entity uniqueness
> --
>
> Key: YARN-3588
> URL: https://issues.apache.org/jira/browse/YARN-3588
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
>  Labels: YARN-5355
>
> In YARN-3051, we have some discussion about how to uniquely identify an 
> entity. Sangjin and some other folks propose to uniquely identify an entity 
> by (entity type, entity id) only in the scope of a single app. This differs 
> from entity uniqueness in ATSv1, where (entity type, entity id) can globally 
> identify an entity. This is going to affect the way a single entity is 
> fetched, and it raises a compatibility issue. Let's continue our discussion 
> here to unblock YARN-3051.
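
To make the difference concrete, a sketch of what app-scoped uniqueness 
implies for a fully qualified entity key: the (entity type, entity id) pair 
alone no longer suffices for a global lookup, so the application id must 
travel with it. Field names are illustrative, not the ATSv2 row-key layout:

{code:java}
import java.util.Objects;

// Illustrative fully qualified key under app-scoped uniqueness: the
// (entityType, entityId) pair is unique only within one application,
// so global lookups must also carry the app id. Names are a sketch,
// not the actual ATSv2 row-key layout.
public final class EntityKey {
  final String appId;
  final String entityType;
  final String entityId;

  EntityKey(String appId, String entityType, String entityId) {
    this.appId = appId;
    this.entityType = entityType;
    this.entityId = entityId;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof EntityKey)) {
      return false;
    }
    EntityKey k = (EntityKey) o;
    return appId.equals(k.appId)
        && entityType.equals(k.entityType)
        && entityId.equals(k.entityId);
  }

  @Override
  public int hashCode() {
    return Objects.hash(appId, entityType, entityId);
  }
}
{code}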



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3545) Investigate the concurrency issue with the map of timeline collector

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3545:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> Investigate the concurrency issue with the map of timeline collector
> 
>
> Key: YARN-3545
> URL: https://issues.apache.org/jira/browse/YARN-3545
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Zhijie Shen
>Assignee: Li Lu
>  Labels: YARN-5355
> Attachments: YARN-3545-YARN-2928.000.patch
>
>
> See the discussion in YARN-3390 for details. Let's continue the discussion 
> here.
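
The linked discussion is not reproduced here, but the usual fix for races on 
a shared collector map is an atomic put-if-absent on a concurrent map; a 
minimal sketch under that assumption, with illustrative names:

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of a race-free collector registry: putIfAbsent makes the
// check-then-insert atomic, so two threads registering a collector
// for the same app id cannot both win. Names are illustrative.
public class CollectorRegistry<K, C> {

  private final ConcurrentMap<K, C> collectors = new ConcurrentHashMap<>();

  // Returns whichever collector won the race for this key.
  public C register(K appId, C candidate) {
    C existing = collectors.putIfAbsent(appId, candidate);
    return existing != null ? existing : candidate;
  }
}
{code}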



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3509) CollectorNodemanagerProtocol's authorization doesn't work

2016-07-11 Thread Joep Rottinghuis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joep Rottinghuis updated YARN-3509:
---
Parent Issue: YARN-5355  (was: YARN-2928)

> CollectorNodemanagerProtocol's authorization doesn't work
> -
>
> Key: YARN-3509
> URL: https://issues.apache.org/jira/browse/YARN-3509
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, security, timelineserver
>Reporter: Zhijie Shen
>Assignee: Zhijie Shen
>  Labels: YARN-5355
> Attachments: YARN-3509.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


