[jira] [Commented] (YARN-2255) YARN Audit logging not added to log4j.properties

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15650057#comment-15650057
 ] 

Hadoop QA commented on YARN-2255:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m  
2s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
8s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
0s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
17s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-2255 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838128/YARN-2255.patch |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux 7e294d29c7cb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ed0beba |
| shellcheck | v0.4.4 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13838/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-yarn-project/hadoop-yarn U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13838/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> YARN Audit logging not added to log4j.properties
> ------------------------------------------------
>
> Key: YARN-2255
> URL: https://issues.apache.org/jira/browse/YARN-2255
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Varun Saxena
>Assignee: Ying Zhang
> Attachments: YARN-2255.patch
>
>
> The log4j.properties file that ships with the Hadoop package does not have 
> YARN audit logging wired to it. This causes audit logs to be written to the 
> normal log files. Audit logs should be written to a separate log file.






[jira] [Updated] (YARN-2255) YARN Audit logging not added to log4j.properties

2016-11-08 Thread Ying Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ying Zhang updated YARN-2255:
-----------------------------
Attachment: YARN-2255.patch

> YARN Audit logging not added to log4j.properties
> ------------------------------------------------
>
> Key: YARN-2255
> URL: https://issues.apache.org/jira/browse/YARN-2255
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Varun Saxena
>Assignee: Ying Zhang
> Attachments: YARN-2255.patch
>
>
> The log4j.properties file that ships with the Hadoop package does not have 
> YARN audit logging wired to it. This causes audit logs to be written to the 
> normal log files. Audit logs should be written to a separate log file.






[jira] [Commented] (YARN-2255) YARN Audit logging not added to log4j.properties

2016-11-08 Thread Ying Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649977#comment-15649977
 ] 

Ying Zhang commented on YARN-2255:
----------------------------------

Uploaded a patch. With this patch, set the following properties to enable 
audit logging for the ResourceManager/NodeManager:
{code}
export YARN_RESOURCEMANAGER_OPTS="$YARN_RESOURCEMANAGER_OPTS -Drm.audit.logger=INFO,RMAUDIT"
export YARN_NODEMANAGER_OPTS="$YARN_NODEMANAGER_OPTS -Dnm.audit.logger=INFO,NMAUDIT"
{code}
The audit logs are written to rm-audit.log and nm-audit.log under the YARN log 
directory by default. Audit logging for both RM and NM is off by default.
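For reference, a minimal sketch of what the RMAUDIT side of log4j.properties 
could look like; this follows the existing Hadoop audit-logger conventions, 
and the exact contents of the attached patch may differ:
{code}
# ResourceManager audit logging: stays off until rm.audit.logger is overridden
rm.audit.logger=INFO,NullAppender
rm.audit.log.file=rm-audit.log
log4j.appender.NullAppender=org.apache.log4j.varia.NullAppender
log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=${rm.audit.logger}
log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger=false
log4j.appender.RMAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.RMAUDIT.File=${hadoop.log.dir}/${rm.audit.log.file}
log4j.appender.RMAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.RMAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.RMAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
{code}
An equivalent NMAUDIT block would point the 
{{org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger}} logger at 
nm-audit.log.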

[~varun_saxena], would you please help to review it? Thanks.

> YARN Audit logging not added to log4j.properties
> ------------------------------------------------
>
> Key: YARN-2255
> URL: https://issues.apache.org/jira/browse/YARN-2255
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Varun Saxena
>Assignee: Ying Zhang
>
> The log4j.properties file that ships with the Hadoop package does not have 
> YARN audit logging wired to it. This causes audit logs to be written to the 
> normal log files. Audit logs should be written to a separate log file.






[jira] [Comment Edited] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups

2016-11-08 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649880#comment-15649880
 ] 

Bibin A Chundatt edited comment on YARN-5849 at 11/9/16 6:15 AM:
-----------------------------------------------------------------

{quote}
1. The patch applies to the scenario, when enable mount is false. The group is 
created in other cases. The current implementation throws an exception if any 
controller does not have the group created and writable. What is your more 
specific concern?
{quote}
IIUC, even with the mount patch in place, it is not necessary to create 
{{yarnHierarchy}} for every {{CGroupController}}, since we can now individually 
configure in {{ResourceHandlerModule}} which resource chains to use. For 
example, in the current implementation, even if YARN is not going to use the 
memory monitor, it will still create the {{yarnHierarchy}} folder for it, 
which is not required.


was (Author: bibinchundatt):
{quote}
1. The patch applies to the scenario, when enable mount is false. The group is 
created in other cases. The current implementation throws an exception if any 
controller does not have the group created and writable. What is your more 
specific concern?
{quote}
IIUC even if the mount patch exists, its not required to create 
{{yarnHierarchy}} for all {{CGroupController}} now we can individually 
configure in {{ResourceHandlerModule}} which resource chain to use. For example 
as per current implementation even if yarn is not going to use memory monitor 
will create {{yarnHierarchy}} folder . rt??

> Automatically create YARN control group for pre-mounted cgroups
> ----------------------------------------------------------------
>
> Key: YARN-5849
> URL: https://issues.apache.org/jira/browse/YARN-5849
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-5849.000.patch, YARN-5849.001.patch
>
>
> YARN can be launched with linux-container-executor.cgroups.mount set to 
> false. It will search for the cgroup mount paths set up by the administrator 
> by parsing the /etc/mtab file. You can also specify 
> resource.percentage-physical-cpu-limit to limit the CPU resources assigned to 
> containers.
> linux-container-executor.cgroups.hierarchy is the root of the settings of all 
> YARN containers. If this hierarchy is specified but not created, YARN will 
> fail at startup:
> Caused by: java.io.FileNotFoundException: 
> /cgroups/cpu/hadoop-yarn/cpu.cfs_period_us (Permission denied)
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler.updateCgroup(CgroupsLCEResourcesHandler.java:263)
> This JIRA is about automatically creating the YARN control group in the case 
> above. It reduces the cost of administration.






[jira] [Commented] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups

2016-11-08 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649880#comment-15649880
 ] 

Bibin A Chundatt commented on YARN-5849:


{quote}
1. The patch applies to the scenario, when enable mount is false. The group is 
created in other cases. The current implementation throws an exception if any 
controller does not have the group created and writable. What is your more 
specific concern?
{quote}
IIUC, even with the mount patch in place, it is not necessary to create 
{{yarnHierarchy}} for every {{CGroupController}}, since we can now individually 
configure in {{ResourceHandlerModule}} which resource chains to use. For 
example, in the current implementation, even if YARN is not going to use the 
memory monitor, it will still create the {{yarnHierarchy}} folder for it, 
which is not required, right?
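To make the concern concrete, a rough sketch of the idea (hypothetical names; 
the real {{ResourceHandlerModule}}/{{CGroupController}} code differs): create 
the {{yarnHierarchy}} directory only for the controllers YARN is actually 
configured to use.
{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.EnumSet;

public class CGroupInitSketch {
  enum Controller { CPU, MEMORY, BLKIO, DEVICES }

  /** Create <mountRoot>/<controller>/<yarnHierarchy> for enabled controllers only. */
  static void initHierarchies(Path mountRoot, String yarnHierarchy,
      EnumSet<Controller> enabled) throws IOException {
    for (Controller c : enabled) {
      // Controllers YARN will not use (e.g. MEMORY when the memory monitor
      // is off) are simply skipped, so no unneeded folders are created.
      Path dir = mountRoot.resolve(c.name().toLowerCase()).resolve(yarnHierarchy);
      Files.createDirectories(dir);  // no-op if the admin pre-created it
    }
  }
}
{code}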

> Automatically create YARN control group for pre-mounted cgroups
> ----------------------------------------------------------------
>
> Key: YARN-5849
> URL: https://issues.apache.org/jira/browse/YARN-5849
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-5849.000.patch, YARN-5849.001.patch
>
>
> YARN can be launched with linux-container-executor.cgroups.mount set to 
> false. It will search for the cgroup mount paths set up by the administrator 
> by parsing the /etc/mtab file. You can also specify 
> resource.percentage-physical-cpu-limit to limit the CPU resources assigned to 
> containers.
> linux-container-executor.cgroups.hierarchy is the root of the settings of all 
> YARN containers. If this hierarchy is specified but not created, YARN will 
> fail at startup:
> Caused by: java.io.FileNotFoundException: 
> /cgroups/cpu/hadoop-yarn/cpu.cfs_period_us (Permission denied)
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler.updateCgroup(CgroupsLCEResourcesHandler.java:263)
> This JIRA is about automatically creating the YARN control group in the case 
> above. It reduces the cost of administration.






[jira] [Commented] (YARN-5139) [Umbrella] Move YARN scheduler towards global scheduler

2016-11-08 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649829#comment-15649829
 ] 

Carlo Curino commented on YARN-5139:


Makes sense... I might be traveling in 3-4 weeks, but we can continue the 
discussion asynchronously.

> [Umbrella] Move YARN scheduler towards global scheduler
> --------------------------------------------------------
>
> Key: YARN-5139
> URL: https://issues.apache.org/jira/browse/YARN-5139
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: Explanantions of Global Scheduling (YARN-5139) 
> Implementation.pdf, YARN-5139-Concurrent-scheduling-performance-report.pdf, 
> YARN-5139-Global-Schedulingd-esign-and-implementation-notes-v2.pdf, 
> YARN-5139-Global-Schedulingd-esign-and-implementation-notes.pdf, 
> YARN-5139.000.patch, wip-1.YARN-5139.patch, wip-2.YARN-5139.patch, 
> wip-3.YARN-5139.patch, wip-4.YARN-5139.patch, wip-5.YARN-5139.patch
>
>
> The existing YARN scheduler is based on node heartbeats. This can lead to 
> sub-optimal decisions because the scheduler can only look at one node at a 
> time when scheduling resources.
> Pseudo code of existing scheduling logic looks like:
> {code}
> for node in allNodes:
>    Go to parentQueue
>       Go to leafQueue
>          for application in leafQueue.applications:
>             for resource-request in application.resource-requests
>                try to schedule on node
> {code}
> Considering future complex resource placement requirements, such as node 
> constraints (give me "a && b || c") or anti-affinity (do not allocate HBase 
> regionservers and Storm workers on the same host), we may need to consider 
> moving the YARN scheduler towards global scheduling.






[jira] [Commented] (YARN-5375) invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures

2016-11-08 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649772#comment-15649772
 ] 

Rohith Sharma K S commented on YARN-5375:
------------------------------------------

Overall the patch looks good to me. I will test against the random test 
failures by applying this patch. Also, I believe MockRM#waitFor sleeps in many 
places; this needs to be revisited to see how the sleep time can be reduced.

> invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures
> -------------------------------------------------------------------------------
>
> Key: YARN-5375
> URL: https://issues.apache.org/jira/browse/YARN-5375
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Reporter: sandflee
>Assignee: sandflee
>  Labels: oct16-medium
> Attachments: YARN-5375.01.patch, YARN-5375.03.patch, 
> YARN-5375.04.patch, YARN-5375.05.patch, YARN-5375.06.patch, 
> YARN-5375.07-drain-statestore.patch, YARN-5375.07-sync-statestore.patch, 
> YARN-5375.08.patch, YARN-5375.09.patch
>
>
> We have seen many test failures where an RMApp/RMAppAttempt reaches some 
> state but some events have not yet been processed in the RM event queue or 
> the scheduler event queue, causing the test to fail. It seems we could 
> implicitly invoke drainEvents (which should also drain scheduler events) in 
> MockRM methods like waitForState.






[jira] [Commented] (YARN-5836) NMToken passwd not checked in ContainerManagerImpl, malicious AM can fake the Token and kill containers of other apps at will

2016-11-08 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649726#comment-15649726
 ] 

Botong Huang commented on YARN-5836:


Good point, thanks Jason for the info. 

> NMToken passwd not checked in ContainerManagerImpl, malicious AM can fake the 
> Token and kill containers of other apps at will
> -----------------------------------------------------------------------------------------------------------------------------
>
> Key: YARN-5836
> URL: https://issues.apache.org/jira/browse/YARN-5836
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
>   Original Estimate: 5h
>  Remaining Estimate: 5h
>
> When the AM calls the NM via stopContainers() in ContainerManagementProtocol, 
> the NMToken (generated by the RM) is passed along via the user ugi. However, 
> ContainerManagerImpl is currently not validating this token correctly, 
> specifically in authorizeGetAndStopContainerRequest() in ContainerManagerImpl. 
> Basically it blindly trusts the content of the NMTokenIdentifier without 
> verifying the password (the RM-generated signature) in the NMToken, so a 
> malicious AM can simply fake the content of the NMTokenIdentifier and pass it 
> to NMs. Moreover, even for the plain-text checks, when the appId doesn't 
> match, all it does is log a warning and continue to kill the container…
> For startContainers the NMToken is not checked correctly in authorizeUser() 
> either; however, the ContainerToken is verified properly by regenerating and 
> comparing the password in verifyAndGetContainerTokenIdentifier(), so a 
> malicious AM cannot launch containers at will.






[jira] [Commented] (YARN-5819) Verify fairshare and minshare preemption

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649685#comment-15649685
 ] 

Hadoop QA commented on YARN-5819:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
55s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
47s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
50s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 30 unchanged - 0 fixed = 36 total (was 30) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 46s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.TestSchedulingWithAllocationRequestId
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5819 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837913/yarn-5819.YARN-4752.2.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8b455096f1a0 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-4752 / 9568f41 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13837/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13837/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13837/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-08 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649663#comment-15649663
 ] 

Sunil G commented on YARN-5611:
-------------------------------

Thanks [~rohithsharma]. I think RMContainer is mostly kept read-only. I have 
not seen any comments or history specifically for RMApp. Could you help 
confirm the same?

> Provide an API to update lifetime of an application.
> ----------------------------------------------------
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.0006.patch, YARN-5611.0007.patch, YARN-5611.0008.patch, 
> YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. Add a client 
> API to update the lifetime of an application.






[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-08 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649568#comment-15649568
 ] 

Rohith Sharma K S commented on YARN-5611:
------------------------------------------

bq. YarnConfiguration.DEFAULT_RM_APPLICATION_LIFETIME_MONITOR_INTERVAL_MS 
variable name can also be updated.
Makes sense to me.

bq. We might also have to handle the timeline server update when the timeout 
is updated.
The timeout is internal to RMApp. I think adding it to ATS is NOT required. 
Can we move this discussion to a separate thread?

bq. Application timeout is set to 0 and then monitoring starts: is that 
possible? Can we add a testcase?
Currently, if the timeout is 0, the update API will fail.


> Provide an API to update lifetime of an application.
> ----------------------------------------------------
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.0006.patch, YARN-5611.0007.patch, YARN-5611.0008.patch, 
> YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. Add a client 
> API to update the lifetime of an application.






[jira] [Updated] (YARN-5783) Verify identification of starved applications

2016-11-08 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5783:
-----------------------------------
Summary: Verify identification of starved applications  (was: Verify 
applications are identified starved)

> Verify identification of starved applications
> ---------------------------------------------
>
> Key: YARN-5783
> URL: https://issues.apache.org/jira/browse/YARN-5783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: oct16-medium
> Attachments: yarn-5783.YARN-4752.1.patch, 
> yarn-5783.YARN-4752.2.patch, yarn-5783.YARN-4752.3.patch, 
> yarn-5783.YARN-4752.4.patch, yarn-5783.YARN-4752.5.patch, 
> yarn-5783.YARN-4752.6.patch, yarn-5783.YARN-4752.7.patch, 
> yarn-5783.YARN-4752.8.patch
>
>
> JIRA to track unit tests to verify the identification of starved 
> applications. An application should be marked starved only when:
> # Cluster allocation is over the configured threshold for preemption.
> # Preemption is enabled for a queue and any of the following:
> ## The queue is under its minshare for longer than minsharePreemptionTimeout
> ## One of the queue’s applications is under its fairshare for longer than 
> fairsharePreemptionTimeout.






[jira] [Commented] (YARN-5783) Verify applications are identified starved

2016-11-08 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649546#comment-15649546
 ] 

Karthik Kambatla commented on YARN-5783:


[~templedf] - thanks for the careful reviews and final +1. Let me go ahead and 
check this in now, so I can trigger the build for YARN-5819 and get it ready 
for your review.

[~leftnoteasy] - filed YARN-5863 to track the addition of starved state. 

Checking this in.. 

> Verify applications are identified starved
> ------------------------------------------
>
> Key: YARN-5783
> URL: https://issues.apache.org/jira/browse/YARN-5783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: oct16-medium
> Attachments: yarn-5783.YARN-4752.1.patch, 
> yarn-5783.YARN-4752.2.patch, yarn-5783.YARN-4752.3.patch, 
> yarn-5783.YARN-4752.4.patch, yarn-5783.YARN-4752.5.patch, 
> yarn-5783.YARN-4752.6.patch, yarn-5783.YARN-4752.7.patch, 
> yarn-5783.YARN-4752.8.patch
>
>
> JIRA to track unit tests to verify the identification of starved 
> applications. An application should be marked starved only when:
> # Cluster allocation is over the configured threshold for preemption.
> # Preemption is enabled for a queue and any of the following:
> ## The queue is under its minshare for longer than minsharePreemptionTimeout
> ## One of the queue’s applications is under its fairshare for longer than 
> fairsharePreemptionTimeout.






[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-08 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649540#comment-15649540
 ] 

Rohith Sharma K S commented on YARN-5611:
------------------------------------------

[~sunilg]
bq. 1. ApplicationClientProtocol#updateApplicationTimeouts. Could this be an 
Evolving API?
The API is already marked Unstable, so I think it should be fine. cc: [~jianhe]

bq. 2. ApplicationClientProtocolPBClientImpl#updateApplicationTimeouts. Does 
the exception-handling block need a return? The RPCUtil method will throw an 
exception, correct?
bq. 3. In ApplicationClientProtocolPBServiceImpl#updateApplicationTimeouts, we 
use catch (YarnException | IOException e).
Frankly, I just copy-pasted from another API implementation in the PBImpl. It 
looks like all the APIs do it the same way. If we really want to clean this 
up, we can take a separate task to refactor the PBClientImpl classes. Thoughts?

bq. 5. Given a writeLock in RMAppImpl#updateApplicationTimeout, why do we need 
another lock in RMAppManager#updateApplicationTimeout? Is this to handle some 
race conditions while the app update event is waiting in the StateStore 
dispatcher queue? I would love to have some more comments in these 
synchronized blocks or write locks to give a brief explanation. It will help 
us later.
Good question!! Jian also had the same doubt. Let me explain briefly. Anything 
holding the writeLock in RMAppImpl blocks the main AsyncDispatcher, which is a 
costly operation. For updateTimeout, we need to wait for the state-store 
transaction to complete, which cannot be done while holding the RMAppImpl 
lock. So the in-memory update happens in RMAppImpl, which triggers an event to 
the StateStore and releases the writeLock; RMAppManager then waits for the 
state-store update under a separate lock.
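A self-contained sketch of that pattern (hypothetical names; the actual 
RMAppImpl/RMAppManager code differs):
{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Short write lock for the in-memory update; the state-store write is
 *  awaited outside that lock so the dispatcher is never blocked on it. */
class TimeoutUpdateSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final ExecutorService stateStore = Executors.newSingleThreadExecutor();
  private volatile long timeout;

  void updateTimeout(long newTimeout) throws InterruptedException {
    CountDownLatch stored = new CountDownLatch(1);
    lock.writeLock().lock();
    try {
      timeout = newTimeout;                   // in-memory update under the lock
      stateStore.execute(stored::countDown);  // trigger the async store event
    } finally {
      lock.writeLock().unlock();              // release before the store completes
    }
    stored.await();  // wait for the state-store ack without holding the app lock
  }
}
{code}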

bq. 4. On a different note, I think COMPLETED_APP_STATES could be defined by 
RMAppImpl itself, exposing a read-only API. This can help clean up local state 
definitions. Could be done in another patch.
I would prefer to do that in a separate JIRA. Maybe you can raise it?

bq. 6. RMApp is generally considered read-only; updateApplicationTimeout will 
violate that. We can place this API in RMAppImpl itself, and on the client 
side we could convert to an RMAppImpl object and use it. ProportionPolicy, the 
new Global Scheduler, etc. use this approach.
I would prefer to add it in RMApp itself rather than type-casting in 
RMAppManager. It is an internal interface used for both reads and writes.

bq. 7. The timeout is to be part of ApplicationReport, correct? Is that a part 
of this patch?
This will be done in YARN-4206.

> Provide an API to update lifetime of an application.
> ----------------------------------------------------
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.0006.patch, YARN-5611.0007.patch, YARN-5611.0008.patch, 
> YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. Add a client 
> API to update the lifetime of an application.






[jira] [Commented] (YARN-5862) TestDiskFailures.testLocalDirsFailures failed

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649539#comment-15649539
 ] 

Hadoop QA commented on YARN-5862:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} 
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
12s{color} | {color:red} 
patch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file 
(hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/target/findbugsXml.xml)
 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 25s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
|   | hadoop.yarn.server.TestContainerManagerSecurity |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5862 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838102/YARN-5862.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 64742009f8e4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 62d8c17 |
| Default Java | 1.8.0_101 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/13836/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/13836/artifact/patchprocess/patch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
 |
| unit | 

[jira] [Created] (YARN-5863) Record that an application is starved for resources when it is

2016-11-08 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-5863:
-----------------------------------

 Summary: Record that an application is starved for resources when 
it is
 Key: YARN-5863
 URL: https://issues.apache.org/jira/browse/YARN-5863
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler, scheduler preemption
Affects Versions: 2.8.0
Reporter: Karthik Kambatla


On YARN-5783, Wangda suggested we add a new scheduler state called STARVED to 
identify applications that are starved for resources. This state can be 
scheduler-agnostic and can be used for preemption etc. 






[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-08 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649497#comment-15649497
 ] 

Bibin A Chundatt commented on YARN-5611:


Thank you [~rohithsharma] for the patch.

A few minor comments:

# The {{YarnConfiguration.DEFAULT_RM_APPLICATION_LIFETIME_MONITOR_INTERVAL_MS}} 
variable name can also be updated.
# Since we might support multiple application timeouts in the future, this 
could move to a separate {{AppTimeoutDignostics}}:
{code}
String diagnostics =
    "Application killed due to exceeding its lifetime period";
{code}
# We might also have to handle the timeline server update when the timeout is 
updated.
{quote}
UpdateApplicationTimeoutsRequest takes an appId and a Map to update timeout 
values, and these timeout values get added to the existing timeouts. If the 
application was not being monitored, it starts being monitored from now on.
{quote}
# If the application timeout is set to 0 and monitoring then starts, is that 
possible? Can we add a testcase?

> Provide an API to update lifetime of an application.
> ----------------------------------------------------
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.0006.patch, YARN-5611.0007.patch, YARN-5611.0008.patch, 
> YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. Add a client 
> API to update the lifetime of an application.






[jira] [Updated] (YARN-5862) TestDiskFailures.testLocalDirsFailures failed

2016-11-08 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5862:
---------------------------
Attachment: YARN-5862.001.patch

> TestDiskFailures.testLocalDirsFailures failed
> ---------------------------------------------
>
> Key: YARN-5862
> URL: https://issues.apache.org/jira/browse/YARN-5862
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5862.001.patch
>
>
> {code}
> java.util.NoSuchElementException: null 
> at 
> java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
>  
> at 
> java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:247)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:179)
>  
> at 
> org.apache.hadoop.yarn.server.TestDiskFailures.testLocalDirsFailures(TestDiskFailures.java:99)
>  
> {code}






[jira] [Created] (YARN-5862) TestDiskFailures.testLocalDirsFailures failed

2016-11-08 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-5862:
---------------------------

 Summary: TestDiskFailures.testLocalDirsFailures failed
 Key: YARN-5862
 URL: https://issues.apache.org/jira/browse/YARN-5862
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Yufei Gu
Assignee: Yufei Gu


{code}
java.util.NoSuchElementException: null 
at 
java.util.concurrent.ConcurrentHashMap$HashIterator.nextEntry(ConcurrentHashMap.java:1354)
 
at 
java.util.concurrent.ConcurrentHashMap$ValueIterator.next(ConcurrentHashMap.java:1384)
 
at 
org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:247)
 
at 
org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:179)
 
at 
org.apache.hadoop.yarn.server.TestDiskFailures.testLocalDirsFailures(TestDiskFailures.java:99)
 
{code}
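The trace points at an iterator being advanced without a {{hasNext()}} guard. 
A minimal reproduction of that failure mode (illustrative only, not the test 
code itself):
{code}
import java.util.concurrent.ConcurrentHashMap;

public class IteratorFailureSketch {
  public static void main(String[] args) {
    ConcurrentHashMap<String, String> reports = new ConcurrentHashMap<>();
    // Calling next() on an exhausted (here: empty) iterator throws
    // java.util.NoSuchElementException, as in verifyDisksHealth above.
    reports.values().iterator().next();
  }
}
{code}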






[jira] [Commented] (YARN-5823) Update NMTokens in case of requests with only opportunistic containers

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649427#comment-15649427
 ] 

Hadoop QA commented on YARN-5823:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
49s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 41 unchanged - 2 fixed = 42 total (was 43) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
23s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 
47s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 31m 48s{color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}141m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.client.api.impl.TestAMRMProxy |
|   | hadoop.yarn.client.api.impl.TestOpportunisticContainerAllocation |
| Timed out junit tests | 
org.apache.hadoop.yarn.client.api.impl.TestDistributedScheduling |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5823 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838064/YARN-5823.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux af333aea3d7b 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-5634) Simplify initialization/use of RouterPolicy via a RouterPolicyFacade

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649423#comment-15649423
 ] 

Hadoop QA commented on YARN-5634:
---------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
30s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
28s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
45s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
3s{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
46s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 3 new + 221 unchanged - 0 fixed = 224 total (was 221) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 35s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
6s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5634 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838089/YARN-5634-YARN-2915.02.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7b14a772c968 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / c3a5672 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13835/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13835/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
|  Test Results | 

[jira] [Commented] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-08 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649411#comment-15649411
 ] 

Mingliang Liu commented on YARN-5833:
-------------------------------------

Thanks for the prompt action. The {{branch-2}} is great again.

> Add validation to ensure default ports are unique in Configuration
> -------------------------------------------------------------------
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.addendum.patch, YARN-5833.003.patch, 
> YARN-5883.001.patch, YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.






[jira] [Commented] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-08 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649396#comment-15649396
 ] 

Subru Krishnan commented on YARN-5833:
--------------------------------------

I just compiled branch-2 locally with the addendum using Java 7 successfully 
and committed it. Thanks [~kkaranasos] for the quick turnaround!

> Add validation to ensure default ports are unique in Configuration
> -------------------------------------------------------------------
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.addendum.patch, YARN-5833.003.patch, 
> YARN-5883.001.patch, YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.






[jira] [Commented] (YARN-5783) Verify applications are identified starved

2016-11-08 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15649382#comment-15649382
 ] 

Karthik Kambatla commented on YARN-5783:


bq. Overall, I think "starved" should be a scheduler-agnostic scheduling state 
of an app.

[~leftnoteasy] - that is an interesting thought. In the current patch for 
FairScheduler preemption, strictly speaking, we are not marking the 
applications as starved; marking them so makes it easier to handle any changes 
to the starvation much more gracefully. Given it is related but not necessary, 
I would like to follow up on that separately. We should also think carefully 
about adding scheduler-only states for applications.

[~templedf] - mind taking another look? 

> Verify applications are identified starved
> ------------------------------------------
>
> Key: YARN-5783
> URL: https://issues.apache.org/jira/browse/YARN-5783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: oct16-medium
> Attachments: yarn-5783.YARN-4752.1.patch, 
> yarn-5783.YARN-4752.2.patch, yarn-5783.YARN-4752.3.patch, 
> yarn-5783.YARN-4752.4.patch, yarn-5783.YARN-4752.5.patch, 
> yarn-5783.YARN-4752.6.patch, yarn-5783.YARN-4752.7.patch, 
> yarn-5783.YARN-4752.8.patch
>
>
> JIRA to track unit tests to verify the identification of starved 
> applications. An application should be marked starved only when:
> # Cluster allocation is over the configured threshold for preemption.
> # Preemption is enabled for a queue and any of the following:
> ## The queue is under its minshare for longer than minsharePreemptionTimeout
> ## One of the queue’s applications is under its fairshare for longer than 
> fairsharePreemptionTimeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-08 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5833:
-
Attachment: YARN-5833.003.addendum.patch

Thanks for catching this, [~liuml07].
We had compiled it with Java 8. 
Attaching an addendum patch that fixes the problem.

> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.addendum.patch, YARN-5833.003.patch, 
> YARN-5883.001.patch, YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-08 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649375#comment-15649375
 ] 

Subru Krishnan commented on YARN-5833:
--

Thanks [~liuml07] for catching this. The issue is that we used 
{{putIfAbsent}}, which {{java.util.Map}} only offers as of Java 8. I did compile 
branch-2 locally before pushing, but unfortunately I only have Java 8 installed; 
my bad.

[~kkaranasos] is providing an addendum with a fix which I'll commit to branch-2 
shortly.
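
For reference, a minimal Java 7 compatible sketch of the replacement (assuming 
the {{filteredValues}} map and names from the failing test; a plain 
check-then-put instead of {{Map.putIfAbsent}}):
{code}
java.util.Map<String, String> filteredValues = new java.util.HashMap<>();
String key = "some.property";   // illustrative values only
String value = "default";

// Java 8 only: filteredValues.putIfAbsent(key, value);
// Java 7 equivalent (check-then-put; fine for single-threaded test code,
// unlike the atomic ConcurrentMap.putIfAbsent variant):
if (!filteredValues.containsKey(key)) {
  filteredValues.put(key, value);
}
{code}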

> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.addendum.patch, YARN-5833.003.patch, 
> YARN-5883.001.patch, YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5783) Verify applications are identified starved

2016-11-08 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649334#comment-15649334
 ] 

Daniel Templeton commented on YARN-5783:


As with almost everything in FS and CS, the concepts exist in both and ideally 
could be abstracted into common code, but the devil is in the details.  Sounds 
like good future work.

+1 for the latest patch.  I'll commit tomorrow morning.

> Verify applications are identified starved
> --
>
> Key: YARN-5783
> URL: https://issues.apache.org/jira/browse/YARN-5783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: oct16-medium
> Attachments: yarn-5783.YARN-4752.1.patch, 
> yarn-5783.YARN-4752.2.patch, yarn-5783.YARN-4752.3.patch, 
> yarn-5783.YARN-4752.4.patch, yarn-5783.YARN-4752.5.patch, 
> yarn-5783.YARN-4752.6.patch, yarn-5783.YARN-4752.7.patch, 
> yarn-5783.YARN-4752.8.patch
>
>
> JIRA to track unit tests to verify the identification of starved 
> applications. An application should be marked starved only when:
> # Cluster allocation is over the configured threshold for preemption.
> # Preemption is enabled for a queue and any of the following:
> ## The queue is under its minshare for longer than minsharePreemptionTimeout
> ## One of the queue’s applications is under its fairshare for longer than 
> fairsharePreemptionTimeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649313#comment-15649313
 ] 

Hadoop QA commented on YARN-4597:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-4597 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-4597 |
| GITHUB PR | https://github.com/apache/hadoop/pull/143 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13834/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>  Labels: oct16-hard
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch, YARN-4597.004.patch, YARN-4597.005.patch, 
> YARN-4597.006.patch, YARN-4597.007.patch, YARN-4597.008.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5634) Simplify initialization/use of RouterPolicy via a RouterPolicyFacade

2016-11-08 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649307#comment-15649307
 ] 

Carlo Curino commented on YARN-5634:


The current patch is rebased after YARN-5391 got in. I also extended the test 
coverage (and spotted and fixed a potential, though unlikely, NPE in 
FederationStateStoreFacade) and added a PriorityBroadcastPolicyManager to 
experiment with simple, different router policies.

> Simplify initialization/use of RouterPolicy via a RouterPolicyFacade 
> -
>
> Key: YARN-5634
> URL: https://issues.apache.org/jira/browse/YARN-5634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>  Labels: oct16-medium
> Attachments: YARN-5634-YARN-2915.01.patch, 
> YARN-5634-YARN-2915.02.patch
>
>
> The current set of policies requires some machinery to (re)initialize based on 
> changes in the SubClusterPolicyConfiguration. This JIRA tracks the effort to 
> hide much of that behind a simple RouterPolicyFacade, making the lifecycle and 
> usage of the policies easier for consumers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-11-08 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649299#comment-15649299
 ] 

Daniel Templeton commented on YARN-5600:


New comments:

* Maybe define a constant for {{new Date(Long.MAX_VALUE)}} to make it a little 
more obvious.
* In {{DeletionService.deleteWithDelay()}}, if the delay is -1, you're doing a 
bunch of work for no reason.
* The javadoc description for the {{scheduleFileDeletionTask()}} methods should 
end with a period.
* Is throwing the {{IllegalArgumentException}} the right thing to do? That 
information is not going to make it back to the end user who set the bogus 
value.  I didn't verify what happens when the dispatcher gets the exception, 
but it may take the NM down.
* Here: {code}  debugDelayDefault = conf.getInt(
  YarnConfiguration.DEBUG_NM_DELETE_DELAY_SEC, 0);{code} it would 
be better to make the 0 a constant.
* In {{ResourceLocalizationService.handleDestroyApplicationResources()}} you 
have {code}debugDelaySec =
    (int)(((applicationCleanupTime.getTime() - now.getTime()) +
        999) / 1000);{code}  I'd rather see you divide by 1000 and add 
1.  It's a little less clever/more obvious (see the sketch after this list).
* The javadoc description for {{ApplicationImpl.delayedDeletionTime}} should 
end with a period.
* {{estimateRetention()}} should maybe be {{calculateRetention()}} or 
{{updateRetention()}} or {{recalculateRetention()}}.  There's no estimation 
happening.
* {{TestDeletionService.testCustomDisableDelete()}} has a spurious "remain" in 
the javadoc.
* In {{TestDeletionService.testCustomDisableDelete()}}, I'm concerned about the 
change from a 20 sec wait to a 1 sec wait.  Are you sure that in all cases, 
even on slow, ancient upstream infrastructure, 1 sec is enough?  Can you 
sensibly expose some metrics from the {{DeletionService}} so you can tell when 
a delete is ignored, rather than waiting around to see if the file is still 
there?
* You have a {{println()}} in {{testCustomRetentionPolicy()}}.
* I'm still worried about flakiness with the sleeps at the end of 
{{testCustomRetentionPolicy()}}.  Can you sensibly expose some metrics from the 
{{DeletionService}} so you can be more assertive about what happened, rather 
than relying on timing across threads?
* You have a {{println()}} in {{testAPIError()}}.
* {{testCustomRetentionPolicy()}} and {{testAPIError()}} are almost identical.  
Maybe add a parameter and reuse the code?
* In {{TestContainerManager}}, {{verifyContainerDir()}} and {{verifyAppDir()}} 
have a lot of common code.  Maybe pull it out into another method?
* In {{waitForApplicationDirDeleted()}} you might be better off with monotonic 
time.
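
A minimal sketch of the two rounding styles from the 
{{handleDestroyApplicationResources()}} item above (variable names taken from 
the quoted snippet, values illustrative); note the two differ when the 
difference is an exact multiple of 1000 ms:
{code}
java.util.Date now = new java.util.Date();
java.util.Date applicationCleanupTime =
    new java.util.Date(now.getTime() + 2000);

long diffMs = applicationCleanupTime.getTime() - now.getTime();

// Ceiling division: 1..1000 ms -> 1 s, 1001..2000 ms -> 2 s.
int ceilSec = (int) ((diffMs + 999) / 1000);   // 2000 ms -> 2

// Divide-then-add-one: easier to read, but rounds exact multiples up
// too (2000 ms -> 3 s), which may or may not matter here.
int plusOneSec = (int) (diffMs / 1000 + 1);    // 2000 ms -> 3
{code}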


> Add a parameter to ContainerLaunchContext to emulate 
> yarn.nodemanager.delete.debug-delay-sec on a per-application basis
> ---
>
> Key: YARN-5600
> URL: https://issues.apache.org/jira/browse/YARN-5600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Miklos Szegedi
>  Labels: oct16-medium
> Attachments: YARN-5600.000.patch, YARN-5600.001.patch, 
> YARN-5600.002.patch, YARN-5600.003.patch, YARN-5600.004.patch
>
>
> To make debugging application launch failures simpler, I'd like to add a 
> parameter to the CLC to allow an application owner to request delayed 
> deletion of the application's launch artifacts.
> This JIRA solves largely the same problem as YARN-5599, but for cases where 
> ATS is not in use, e.g. branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5634) Simplify initialization/use of RouterPolicy via a RouterPolicyFacade

2016-11-08 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-5634:
---
Attachment: YARN-5634-YARN-2915.02.patch

> Simplify initialization/use of RouterPolicy via a RouterPolicyFacade 
> -
>
> Key: YARN-5634
> URL: https://issues.apache.org/jira/browse/YARN-5634
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>  Labels: oct16-medium
> Attachments: YARN-5634-YARN-2915.01.patch, 
> YARN-5634-YARN-2915.02.patch
>
>
> The current set of policies requires some machinery to (re)initialize based on 
> changes in the SubClusterPolicyConfiguration. This JIRA tracks the effort to 
> hide much of that behind a simple RouterPolicyFacade, making the lifecycle and 
> usage of the policies easier for consumers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-08 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649285#comment-15649285
 ] 

Mingliang Liu commented on YARN-5833:
-

{code}
[ERROR] COMPILATION ERROR :
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java:[755,31]
 cannot find symbol
  symbol:   method putIfAbsent(java.lang.String,java.lang.String)
  location: variable filteredValues of type 
java.util.Map<java.lang.String,java.lang.String>
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-common: Compilation failure
[ERROR] 
/Users/mliu/Workspace/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java:[755,31]
 cannot find symbol
[ERROR] symbol:   method putIfAbsent(java.lang.String,java.lang.String)
[ERROR] location: variable filteredValues of type 
java.util.Map<java.lang.String,java.lang.String>
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-common
{code}
This is with Java 7 on {{branch-2}}. Thanks,

> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.patch, YARN-5883.001.patch, 
> YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-08 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-4597:
--
Attachment: YARN-4597.008.patch

Updated patch.

bq. btw. agree with this, in case you would like to change it.
Done.

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>  Labels: oct16-hard
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch, YARN-4597.004.patch, YARN-4597.005.patch, 
> YARN-4597.006.patch, YARN-4597.007.patch, YARN-4597.008.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-08 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649277#comment-15649277
 ] 

Konstantinos Karanasos commented on YARN-5833:
--

[~liuml07], I have checked it on trunk. What error are you getting on branch-2?

> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.patch, YARN-5883.001.patch, 
> YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5851) TestContainerManagerSecurity testContainerManager[1] failed

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649273#comment-15649273
 ] 

Hadoop QA commented on YARN-5851:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 11s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestDiskFailures |
|   | hadoop.yarn.server.TestContainerManagerSecurity |
|   | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5851 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838080/yarn5851.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux be15c81edbe4 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 2a65eb1 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13833/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13833/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13833/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> TestContainerManagerSecurity testContainerManager[1] failed 
> 
>
> Key: YARN-5851
>   

[jira] [Commented] (YARN-5139) [Umbrella] Move YARN scheduler towards global scheduler

2016-11-08 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649253#comment-15649253
 ] 

Wangda Tan commented on YARN-5139:
--

[~curino],

Good point. Yeah, I think we should have better-structured logic to support 
different levels of "fairness". They may come from different places:
1) How/when to sort queues/apps: we can re-sort queues/apps for each 
allocated container, or we can delay the re-sorting.
2) What I mentioned above: the maximum number of pending to-be-committed 
resource allocations.
3) Lower-level fairness such as user-limit, etc.

So basically, instead of putting them into one place (such as a "pluggable 
fairness policy"), we may need a couple of configurable places where we can 
make the scheduler more or less fair.

Since I will be on vacation soon, I think we could pick up the discussion in 
3-4 weeks.

> [Umbrella] Move YARN scheduler towards global scheduler
> ---
>
> Key: YARN-5139
> URL: https://issues.apache.org/jira/browse/YARN-5139
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: Explanantions of Global Scheduling (YARN-5139) 
> Implementation.pdf, YARN-5139-Concurrent-scheduling-performance-report.pdf, 
> YARN-5139-Global-Schedulingd-esign-and-implementation-notes-v2.pdf, 
> YARN-5139-Global-Schedulingd-esign-and-implementation-notes.pdf, 
> YARN-5139.000.patch, wip-1.YARN-5139.patch, wip-2.YARN-5139.patch, 
> wip-3.YARN-5139.patch, wip-4.YARN-5139.patch, wip-5.YARN-5139.patch
>
>
> Existing YARN scheduler is based on node heartbeat. This can lead to 
> sub-optimal decisions because scheduler can only look at one node at the time 
> when scheduling resources.
> Pseudo code of existing scheduling logic looks like:
> {code}
> for node in allNodes:
>   Go to parentQueue
>     Go to leafQueue
>       for application in leafQueue.applications:
>         for resource-request in application.resource-requests:
>           try to schedule on node
> {code}
> Considering future complex resource placement requirements, such as node 
> constraints (give me "a && b || c") or anti-affinity (do not allocate HBase 
> regionservers and Storm workers on the same host), we may need to consider 
> moving YARN scheduler towards global scheduling.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-08 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649249#comment-15649249
 ] 

Sangjin Lee commented on YARN-5739:
---

I'd like to add to Varun's and Vrushali's comments.

{{EntityTypeReader}} extends {{GenericEntityReader}}, but its sole purpose is 
to list the entity types. I understand the rationale for 
{{GenericEntityReader}} (to inherit a number of utility features), but it feels 
a little awkward. At least, can we override the unnecessary public methods 
(e.g. {{getEntity()}} and {{getEntities()}}) to throw an 
{{UnsupportedOperationException}}?
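
A minimal sketch of that suggestion (illustrative class and method shapes only, 
not the actual YARN reader hierarchy; the real signatures take more parameters):
{code}
// A subclass that exists solely to list entity types fails fast on the
// inherited entity accessors instead of inheriting surprising behavior.
class GenericReader {
  Object getEntity() { return null; }
  Object getEntities() { return null; }
}

class TypeListingReader extends GenericReader {
  @Override
  Object getEntity() {
    throw new UnsupportedOperationException(
        "TypeListingReader only lists entity types");
  }

  @Override
  Object getEntities() {
    throw new UnsupportedOperationException(
        "TypeListingReader only lists entity types");
  }
}
{code}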

I also agree that we probably want to use something like {{entitytypes}} for 
the REST endpoint.

A nit on a few javadoc comments: the beginning verb needs to be in the 
third person. For example, in {{TimelineReaderManager.java}} l.183, it should be 
"Gets", not "Get".

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch
>
>
> Right now we only show a part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried out by the web UI. It would be appealing for the UI to 
> provide an "entity browser" for each YARN application. Actually, simply 
> dumping out the available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far away from this goal. Right now I believe the 
> only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5585) [Atsv2] Reader side changes for entity prefix and support for pagination via additional filters

2016-11-08 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649247#comment-15649247
 ] 

Li Lu commented on YARN-5585:
-

Finished my round of review. Other than my previous comments, just one nit, in 
TimelineReaderWebServices: the {{0L}}s show up several times. Shall we overload 
the method to add default values? Or, why are we converting 0s into strings and 
then parsing them back?
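
A minimal sketch of the overloading idea (hypothetical method names, not the 
actual web-service signatures):
{code}
// Hypothetical illustration: instead of passing "0" as a string that is
// parsed back into a long, an overload can supply the default directly.
final class ParamUtils {
  static long parseLongParam(String str, long defaultValue) {
    return (str == null || str.isEmpty())
        ? defaultValue
        : Long.parseLong(str.trim());
  }

  // Call sites can then omit the argument rather than passing "0".
  static long parseLongParam(String str) {
    return parseLongParam(str, 0L);
  }
}
{code}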

> [Atsv2] Reader side changes for entity prefix and support for pagination via 
> additional filters
> ---
>
> Key: YARN-5585
> URL: https://issues.apache.org/jira/browse/YARN-5585
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>Priority: Critical
>  Labels: oct16-hard
> Attachments: 0001-YARN-5585.patch, YARN-5585-YARN-5355.0001.patch, 
> YARN-5585-workaround.patch, YARN-5585.v0.patch
>
>
> The TimelineReader REST APIs provide a lot of filters to retrieve 
> applications. Along with those, it would be good to add a new filter, i.e. 
> fromId, so that entities can be retrieved after the fromId. 
> Current behavior: the default limit is set to 100. If there are 1000 entities, 
> then a REST call gives the first/last 100 entities. How do we retrieve the 
> next set of 100 entities, i.e. 101 to 200 or 900 to 801?
> Example: if applications are stored in the database as app-1, app-2, ..., 
> app-10, *getApps?limit=5* gives app-1 to app-5. But there is no way to 
> retrieve the next 5 apps. 
> So the proposal is to have fromId in the filter, like 
> *getApps?limit=5&fromId=app-5*, which gives the list of apps from app-6 to 
> app-10. 
> Since ATS targets storage of a large number of entities, it is a very common 
> use case to get the next set of entities using fromId rather than querying 
> all the entities. This is very useful for pagination in the web UI.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-08 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649236#comment-15649236
 ] 

Li Lu commented on YARN-5739:
-

Thanks [~vrushalic]! I'll add a KeyOnlyFilter to the filter list. I got confused 
by the name First"KeyOnly"Filter; it actually returns a KV pair...

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch
>
>
> Right now we only show a part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried out by the web UI. It would be appealing for the UI to 
> provide an "entity browser" for each YARN application. Actually, simply 
> dumping out the available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far away from this goal. Right now I believe the 
> only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-08 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649229#comment-15649229
 ] 

Jian He commented on YARN-4597:
---

I see, sounds reasonable. 
bq. I feel a better name for 'shouldLaunchContainer' should have been 
'containerAlreadyLaunched'.
btw. agree with this, in case you would like to change it.

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>  Labels: oct16-hard
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch, YARN-4597.004.patch, YARN-4597.005.patch, 
> YARN-4597.006.patch, YARN-4597.007.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-08 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649223#comment-15649223
 ] 

Mingliang Liu commented on YARN-5833:
-

Is {{branch-2}} failing with Java 7 because of this? Thanks,

> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.patch, YARN-5883.001.patch, 
> YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5851) TestContainerManagerSecurity testContainerManager[1] failed

2016-11-08 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5851:
-
Attachment: yarn5851.001.patch

The MiniKDC has not been set up yet when 
{{UserGroupInformation.setConfiguration(conf)}} is called in the constructor of 
TestContainerManagerSecurity. The fix delays that call until the setup method, 
by which point the MiniKDC has been set up.
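
A minimal sketch of the described ordering (assumed JUnit 4 structure and a 
hypothetical {{startMiniKdc()}} helper; the real test class differs):
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.junit.Before;

public class KerberosOrderingSketch {
  private Configuration conf;

  // Note: no UserGroupInformation.setConfiguration() in the constructor.

  @Before
  public void setUp() throws Exception {
    startMiniKdc();                              // hypothetical helper
    conf = new Configuration();
    UserGroupInformation.setConfiguration(conf); // realm is now resolvable
  }

  private void startMiniKdc() throws Exception {
    // elided: MiniKdc startup
  }
}
{code}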

> TestContainerManagerSecurity testContainerManager[1] failed 
> 
>
> Key: YARN-5851
> URL: https://issues.apache.org/jira/browse/YARN-5851
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: security, unittest
> Attachments: yarn5851.001.patch
>
>
> ---
> Test set: org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> ---
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 21.727 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 0.005 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Can't get Kerberos realm
>   at sun.security.krb5.Config.getDefaultRealm(Config.java:1029)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:88)
>   at 
> org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:63)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:291)
>   at 
> org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:337)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.(TestContainerManagerSecurity.java:151)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5851) TestContainerManagerSecurity testContainerManager[1] failed

2016-11-08 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reassigned YARN-5851:


Assignee: Haibo Chen

> TestContainerManagerSecurity testContainerManager[1] failed 
> 
>
> Key: YARN-5851
> URL: https://issues.apache.org/jira/browse/YARN-5851
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>  Labels: security, unittest
>
> ---
> Test set: org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> ---
> Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 21.727 sec 
> <<< FAILURE! - in org.apache.hadoop.yarn.server.TestContainerManagerSecurity
> testContainerManager[1](org.apache.hadoop.yarn.server.TestContainerManagerSecurity)
>   Time elapsed: 0.005 sec  <<< ERROR!
> java.lang.IllegalArgumentException: Can't get Kerberos realm
>   at sun.security.krb5.Config.getDefaultRealm(Config.java:1029)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:88)
>   at 
> org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:63)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:291)
>   at 
> org.apache.hadoop.security.UserGroupInformation.setConfiguration(UserGroupInformation.java:337)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.(TestContainerManagerSecurity.java:151)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-08 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649168#comment-15649168
 ] 

Arun Suresh commented on YARN-4597:
---

Hmmm, I agree it looks equivalent. But what if the 'isMarkedForKill' flag is 
set on the container object by the scheduler between the time the 
ContainerLaunch thread runs the 'ContainerLaunch::launchContainer()' 
method and 'ContainerLaunch::handleContainerExitCode()'? In that case, the 
CONTAINER_KILLED_ON_REQUEST event will not be sent.


> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>  Labels: oct16-hard
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch, YARN-4597.004.patch, YARN-4597.005.patch, 
> YARN-4597.006.patch, YARN-4597.007.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-08 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649165#comment-15649165
 ] 

Vrushali C edited comment on YARN-5739 at 11/8/16 11:21 PM:


Thanks [~gtCarrera9] for the patch. I wish to add to [~varun_saxena]'s review 
suggestions.

- agree with suggestion to rename the rest endpoint, but I think we should not 
use "-" in the rest endpoint string. So perhaps something like {noformat} 
/apps/{appid}/entitytypes {noformat}
- Yes we need not set max versions at L159 in EntityTypeReader
- WRT caching, I am wondering if there might be a query coming next for the 
details of these entity types?

For the scan/filter, I have a different suggestion:

Looks like we want to return only entity types. Entity types are part of the 
row keys; we don't need the column qualifiers and values in that case. So we 
can consider using the KeyOnlyFilter. This filter returns only the key 
component of each KV (the value is rewritten as empty), so it can be used to 
grab all of the keys without also grabbing the values. For a table scan where 
only the row keys are needed (no families, qualifiers, values, or timestamps), 
add a FilterList with the MUST_PASS_ALL operator to the scanner using 
setFilter; the filter list should include both a FirstKeyOnlyFilter and a 
KeyOnlyFilter. 
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/KeyOnlyFilter.html






was (Author: vrushalic):
Thanks [~gtCarrera9] for the patch. I wish to add to [~varun_saxena]'s review 
suggestions.

- agree with suggestion to rename the rest endpoint, but I think we should not 
use "-" in the rest endpoint string. So perhaps something like {preformat} 
/apps/{appid}/entitytypes {preformat}
- Yes we need not set max versions at L159 in EntityTypeReader
- WRT caching, I am wondering, if there might be a query coming next for the 
details of these entity types?

For the scan/filter, I have a different suggestion:

Looks like we want to return only entity types. Entity types are part of row 
keys, we don't need the column qualifiers and values in that case. So we can 
consider using the KeyOnlyFilter filter.   This is a filter that will only 
return the key component of each KV (the value will be rewritten as empty). 
This filter can be used to grab all of the keys without having to also grab the 
values.  When performing a table scan where only the row keys are needed (no 
families, qualifiers, values or timestamps), to use this, add a FilterList with 
a MUST_PASS_ALL operator to the scanner using setFilter. The filter list should 
include both a FirstKeyOnlyFilter and a KeyOnlyFilter. 
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/KeyOnlyFilter.html





> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch
>
>
> Right now we only show a part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried out by the web UI. It would be appealing for the UI to 
> provide an "entity browser" for each YARN application. Actually, simply 
> dumping out the available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far away from this goal. Right now I believe the 
> only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-08 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649165#comment-15649165
 ] 

Vrushali C edited comment on YARN-5739 at 11/8/16 11:21 PM:


Thanks [~gtCarrera9] for the patch. I wish to add to [~varun_saxena]'s review 
suggestions.

- agree with suggestion to rename the rest endpoint, but I think we should not 
use "-" in the rest endpoint string. So perhaps something like {preformat} 
/apps/{appid}/entitytypes {preformat}
- Yes we need not set max versions at L159 in EntityTypeReader
- WRT caching, I am wondering, if there might be a query coming next for the 
details of these entity types?

For the scan/filter, I have a different suggestion:

Looks like we want to return only entity types. Entity types are part of row 
keys, we don't need the column qualifiers and values in that case. So we can 
consider using the KeyOnlyFilter filter.   This is a filter that will only 
return the key component of each KV (the value will be rewritten as empty). 
This filter can be used to grab all of the keys without having to also grab the 
values.  When performing a table scan where only the row keys are needed (no 
families, qualifiers, values or timestamps), to use this, add a FilterList with 
a MUST_PASS_ALL operator to the scanner using setFilter. The filter list should 
include both a FirstKeyOnlyFilter and a KeyOnlyFilter. 
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/KeyOnlyFilter.html






was (Author: vrushalic):
Thanks [~gtCarrera9] for the patch. I wish to add to [~varun_saxena]'s review 
suggestions.

- agree with suggestion to rename the rest endpoint, but I think we should not 
use "-" in the rest endpoint string. So perhaps something like {quote} 
/apps/{appid}/entitytypes {quote}
- Yes we need not set max versions at L159 in EntityTypeReader
- WRT caching, I am wondering, if there might be a query coming next for the 
details of these entity types?

For the scan/filter, I have a different suggestion:

Looks like we want to return only entity types. Entity types are part of row 
keys, we don't need the column qualifiers and values in that case. So we can 
consider using the KeyOnlyFilter filter.   This is a filter that will only 
return the key component of each KV (the value will be rewritten as empty). 
This filter can be used to grab all of the keys without having to also grab the 
values.  When performing a table scan where only the row keys are needed (no 
families, qualifiers, values or timestamps), to use this, add a FilterList with 
a MUST_PASS_ALL operator to the scanner using setFilter. The filter list should 
include both a FirstKeyOnlyFilter and a KeyOnlyFilter. 
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/KeyOnlyFilter.html





> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch
>
>
> Right now we only show a part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried out by the web UI. It would be appealing for the UI to 
> provide an "entity browser" for each YARN application. Actually, simply 
> dumping out the available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far away from this goal. Right now I believe the 
> only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-08 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649165#comment-15649165
 ] 

Vrushali C commented on YARN-5739:
--

Thanks [~gtCarrera9] for the patch. I wish to add to [~varun_saxena]'s review 
suggestions.

- agree with suggestion to rename the rest endpoint, but I think we should not 
use "-" in the rest endpoint string. So perhaps something like {quote} 
/apps/{appid}/entitytypes {quote}
- Yes we need not set max versions at L159 in EntityTypeReader
- WRT caching, I am wondering if there might be a query coming next for the 
details of these entity types?

For the scan/filter, I have a different suggestion:

Looks like we want to return only entity types. Entity types are part of the 
row keys; we don't need the column qualifiers and values in that case. So we 
can consider using the KeyOnlyFilter. This filter returns only the key 
component of each KV (the value is rewritten as empty), so it can be used to 
grab all of the keys without also grabbing the values. For a table scan where 
only the row keys are needed (no families, qualifiers, values, or timestamps), 
add a FilterList with the MUST_PASS_ALL operator to the scanner using 
setFilter; the filter list should include both a FirstKeyOnlyFilter and a 
KeyOnlyFilter. 
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/KeyOnlyFilter.html
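
A minimal sketch of that scan setup with the HBase client API (table and 
column details elided; only the filter wiring is shown):
{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.filter.KeyOnlyFilter;

// Row keys only: FirstKeyOnlyFilter keeps one KV per row, and
// KeyOnlyFilter strips the values from whatever KVs remain.
Scan scan = new Scan();
FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL);
filters.addFilter(new FirstKeyOnlyFilter());
filters.addFilter(new KeyOnlyFilter());
scan.setFilter(filters);
{code}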





> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch
>
>
> Right now we only show a part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried out by the web UI. It would be appealing for the UI to 
> provide an "entity browser" for each YARN application. Actually, simply 
> dumping out the available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far away from this goal. Right now I believe the 
> only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5783) Verify applications are identified starved

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649109#comment-15649109
 ] 

Hadoop QA commented on YARN-5783:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
27s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 5 new + 85 unchanged - 0 fixed = 90 total (was 85) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 40m 
19s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5783 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838054/yarn-5783.YARN-4752.8.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux de6025a72f05 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-4752 / b425ca2 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13831/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13831/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13831/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Verify applications are identified starved
> --
>
> Key: YARN-5783

[jira] [Commented] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649101#comment-15649101
 ] 

Hudson commented on YARN-5833:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10791 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10791/])
YARN-5833. Add validation to ensure default ports are unique in (subru: rev 
29e3b3417c16c83dc8e753f94d7ca9957dddbedd)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationFieldsBase.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/test/java/org/apache/hadoop/yarn/conf/TestYarnConfigurationFields.java


> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.patch, YARN-5883.001.patch, 
> YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-08 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649096#comment-15649096
 ] 

Konstantinos Karanasos commented on YARN-5833:
--

Thanks for reviewing and committing the patch, [~subru]!

> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5833.003.patch, YARN-5883.001.patch, 
> YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-08 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15649082#comment-15649082
 ] 

Jian He commented on YARN-4597:
---

I see, thanks for looking into it. I guess the killedBeforeStart flag could be 
avoided by calling "container.isMarkedToKill()" directly in the 
handleContainerExitCode method, like below:
{code}
  if (!container.isMarkedToKill()) {
dispatcher.getEventHandler().handle(
new ContainerExitEvent(containerId,
ContainerEventType.CONTAINER_KILLED_ON_REQUEST, exitCode,
diagnosticInfo.toString()));
  }
{code}

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>  Labels: oct16-hard
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch, YARN-4597.004.patch, YARN-4597.005.patch, 
> YARN-4597.006.patch, YARN-4597.007.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5823) Update NMTokens in case of requests with only opportunistic containers

2016-11-08 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5823:
-
Attachment: YARN-5823.004.patch

Attaching the right patch.

> Update NMTokens in case of requests with only opportunistic containers
> --
>
> Key: YARN-5823
> URL: https://issues.apache.org/jira/browse/YARN-5823
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-5823.001.patch, YARN-5823.002.patch, 
> YARN-5823.003.patch, YARN-5823.004.patch
>
>
> At the moment, when an {{AllocateRequest}} contains only opportunistic 
> {{ResourceRequests}}, the updated NMTokens are not properly added to the 
> {{AllocateResponse}}.
> In such a case, the AM does not get back the NMTokens required to start the 
> opportunistic containers at the respective nodes.
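
For context, a minimal sketch of the intended behavior, with hypothetical 
names (this is not the actual RM allocate code path):

{code}
import java.util.ArrayList;
import java.util.List;

// Hypothetical illustration: NMTokens should be collected for every node that
// received a new container, whether the container is GUARANTEED or
// OPPORTUNISTIC; the bug effectively skipped the opportunistic-only case.
final class NmTokenSketch {
  static List<String> tokensForResponse(List<String> guaranteedNodes,
                                        List<String> opportunisticNodes) {
    List<String> nodes = new ArrayList<>(guaranteedNodes);
    nodes.addAll(opportunisticNodes); // must not be dropped
    return nodes;
  }
}
{code}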



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5823) Update NMTokens in case of requests with only opportunistic containers

2016-11-08 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5823:
-
Attachment: (was: YARN-5823.004.patch)

> Update NMTokens in case of requests with only opportunistic containers
> --
>
> Key: YARN-5823
> URL: https://issues.apache.org/jira/browse/YARN-5823
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-5823.001.patch, YARN-5823.002.patch, 
> YARN-5823.003.patch, YARN-5823.004.patch
>
>
> At the moment, when an {{AllocateRequest}} contains only opportunistic 
> {{ResourceRequests}}, the updated NMTokens are not properly added to the 
> {{AllocateResponse}}.
> In such a case, the AM does not get back the NMTokens required to start the 
> opportunistic containers at the respective nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-08 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5833:
-
Component/s: yarn

> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5833.003.patch, YARN-5883.001.patch, 
> YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5833) Add validation to ensure default ports are unique in Configuration

2016-11-08 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5833:
-
Summary: Add validation to ensure default ports are unique in Configuration 
 (was: Change default port for AMRMProxy)

> Add validation to ensure default ports are unique in Configuration
> --
>
> Key: YARN-5833
> URL: https://issues.apache.org/jira/browse/YARN-5833
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5833.003.patch, YARN-5883.001.patch, 
> YARN-5883.002.patch
>
>
> The default port for the AMRMProxy coincides with the one for the Collector 
> Service (port 8048). Will use a different port for the AMRMProxy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-08 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648992#comment-15648992
 ] 

Arun Suresh commented on YARN-4597:
---

[~jianhe], I tried another round at refactoring out the 'killedBeforeStart' 
flag from the ContainerLaunch. I now feel it should not be clubbed with the 
'shouldLaunchContainer' flag. They signify very different states: the former 
is used to ensure the container is never started, while the latter captures 
the fact that the container has already been launched and thereby 'should not 
be launched again'. A better name for 'shouldLaunchContainer' would have been 
'containerAlreadyLaunched'.

I have raised YARN-5860 and YARN-5861 to track the remaining tasks needed to 
get this truly feature complete. If [~jianhe] is fine with the current state 
of this patch, I would like to check this in. It would be nice, though, if 
[~kkaranasos] also takes a look at this to verify that I haven't missed 
anything in the {{ContainerScheduler}} functionality vis-a-vis the 
{{QueuingContainerManagerImpl}}.
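
As a sketch of the distinction being described, assuming nothing about the 
actual ContainerLaunch fields beyond the two flag names discussed:

{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch, not the actual ContainerLaunch code: the two flags answer
// different questions and are flipped at different points in the lifecycle.
class ContainerLaunchFlagsSketch {
  // Set when a kill arrives before launch; ensures the container never starts.
  private final AtomicBoolean killedBeforeStart = new AtomicBoolean(false);
  // Set once the container has been launched; prevents a second launch.
  private final AtomicBoolean containerAlreadyLaunched = new AtomicBoolean(false);

  boolean tryLaunch() {
    // Launch only if no early kill was seen and this is the first attempt.
    return !killedBeforeStart.get()
        && containerAlreadyLaunched.compareAndSet(false, true);
  }

  void killBeforeStart() {
    killedBeforeStart.set(true);
  }
}
{code}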

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>  Labels: oct16-hard
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch, YARN-4597.004.patch, YARN-4597.005.patch, 
> YARN-4597.006.patch, YARN-4597.007.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5861) Add support for recovery of queued opportunistic containers in the NM.

2016-11-08 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-5861:
-

 Summary: Add support for recovery of queued opportunistic 
containers in the NM.
 Key: YARN-5861
 URL: https://issues.apache.org/jira/browse/YARN-5861
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh


Currently, the NM stateStore marks a container as QUEUED, but such containers 
are ignored (deemed lost) if they had not started before the NM went down. 
These containers should ideally be re-queued when the NM restarts.
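
A rough sketch of the re-queueing idea; the types and methods below are 
hypothetical stand-ins, not the actual NM state-store API:

{code}
import java.util.List;
import java.util.Queue;

// Hypothetical stand-in for a container record recovered from the NM store.
interface RecoveredContainer {
  boolean wasQueued();   // persisted as QUEUED before the NM went down
  boolean hadStarted();  // whether the container was ever launched
}

class QueuedContainerRecoverySketch {
  // On NM restart, re-queue containers that were QUEUED but never started,
  // instead of deeming them lost.
  void recover(List<RecoveredContainer> recovered,
               Queue<RecoveredContainer> queue) {
    for (RecoveredContainer c : recovered) {
      if (c.wasQueued() && !c.hadStarted()) {
        queue.add(c); // re-queue rather than discard
      }
    }
  }
}
{code}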



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5783) Verify applications are identified starved

2016-11-08 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648966#comment-15648966
 ] 

Wangda Tan commented on YARN-5783:
--

[~kasha], [~templedf],

Haven't looked at the patch, and not sure if you have thought about this 
before: could we make the "app starvation level" a general piece of 
information that we can get from the different schedulers in YARN? This could 
help 1) users understand which apps get starved, and 2) the scheduler make 
decisions such as preemption.

Overall, I think "starved" should be a scheduler-agnostic scheduling state of 
an app.

I'm not trying to stop what you're doing now; please go ahead with what you 
have. I just wanted to share some ideas.
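
A sketch of what such a scheduler-agnostic signal could look like; none of 
these names exist in YARN, they only illustrate the suggestion:

{code}
// Hypothetical sketch of a scheduler-agnostic starvation signal.
enum StarvationLevel { NONE, BELOW_FAIR_SHARE, BELOW_MIN_SHARE }

interface SchedulerAppView {
  // Any scheduler (Fair, Capacity, ...) could report how starved an app is,
  // so users can inspect it and preemption logic can consume it.
  StarvationLevel getStarvationLevel();
}
{code}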

> Verify applications are identified starved
> --
>
> Key: YARN-5783
> URL: https://issues.apache.org/jira/browse/YARN-5783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: oct16-medium
> Attachments: yarn-5783.YARN-4752.1.patch, 
> yarn-5783.YARN-4752.2.patch, yarn-5783.YARN-4752.3.patch, 
> yarn-5783.YARN-4752.4.patch, yarn-5783.YARN-4752.5.patch, 
> yarn-5783.YARN-4752.6.patch, yarn-5783.YARN-4752.7.patch, 
> yarn-5783.YARN-4752.8.patch
>
>
> JIRA to track unit tests to verify the identification of starved 
> applications. An application should be marked starved only when:
> # Cluster allocation is over the configured threshold for preemption.
> # Preemption is enabled for a queue and any of the following:
> ## The queue is under its minshare for longer than minsharePreemptionTimeout
> ## One of the queue’s applications is under its fairshare for longer than 
> fairsharePreemptionTimeout.
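
Read as a predicate, the quoted conditions combine roughly as below (a sketch 
with hypothetical names, not FairScheduler code):

{code}
// Hypothetical predicate mirroring the conditions quoted above.
class StarvationCheckSketch {
  boolean isStarved(boolean allocationOverPreemptionThreshold,
                    boolean queuePreemptionEnabled,
                    long underMinShareMs, long minSharePreemptionTimeoutMs,
                    long underFairShareMs, long fairSharePreemptionTimeoutMs) {
    return allocationOverPreemptionThreshold
        && queuePreemptionEnabled
        && (underMinShareMs > minSharePreemptionTimeoutMs
            || underFairShareMs > fairSharePreemptionTimeoutMs);
  }
}
{code}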



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5356) NodeManager should communicate physical resource capability to ResourceManager

2016-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648970#comment-15648970
 ] 

Hudson commented on YARN-5356:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10790 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10790/])
YARN-5356. NodeManager should communicate physical resource capability (jlowe: 
rev 3f93ac0733058238a2c8f23960c986c71dca0e02)
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/scheduler/RMNodeWrapper.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceTrackerService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeResourceMonitorImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/ResourceCalculatorPlugin.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/proto/yarn_server_common_service_protos.proto
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/NodeStatusUpdaterImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/RegisterNodeManagerRequest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/api/protocolrecords/impl/pb/RegisterNodeManagerRequestPBImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/monitor/ContainersMonitorImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNode.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockNodes.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/TestYarnServerApiClasses.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/nodemanager/NodeInfo.java


> NodeManager should communicate physical resource capability to ResourceManager
> --
>
> Key: YARN-5356
> URL: https://issues.apache.org/jira/browse/YARN-5356
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Nathan Roberts
>Assignee: Inigo Goiri
>  Labels: oct16-medium
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5356.000.patch, YARN-5356.001.patch, 
> YARN-5356.002.patch, YARN-5356.002.patch, YARN-5356.003.patch, 
> YARN-5356.004.patch, YARN-5356.005.patch, YARN-5356.006.patch, 
> YARN-5356.007.patch, YARN-5356.008.patch, YARN-5356.009.patch, 
> YARN-5356.010.patch, YARN-5356.011.patch
>
>
> Currently ResourceUtilization contains absolute quantities of resource used 
> (e.g. 4096MB memory used). It would be good if the NM also communicated the 
> actual physical resource capabilities of the node so that the RM can use this 
> data to schedule more effectively (overcommit, etc.).
> Currently the only available information is the Resource the node registered 
> with (or later updated using updateNodeResource). However, these aren't 
> really sufficient to get a good view of how utilized a resource is. For 
> example, if a node reports 400% CPU utilization, does that mean it's 
> completely full, or barely utilized? Today there is no reliable way to figure 
> this out.
> [~elgoiri] - Lots of good work is happening in YARN-2965 so curious if you 
> have thoughts/opinions on this?
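
To make the 400% example concrete, utilization only becomes meaningful 
relative to physical capacity (illustrative arithmetic only):

{code}
// Illustrative only: 400% CPU means very different things on a 4-core node
// (completely full) and a 32-core node (barely utilized).
class UtilizationSketch {
  static double cpuFraction(double cpuUsagePercent, int physicalCores) {
    return cpuUsagePercent / (100.0 * physicalCores);
  }

  public static void main(String[] args) {
    System.out.println(cpuFraction(400, 4));  // 1.0   -> completely full
    System.out.println(cpuFraction(400, 32)); // 0.125 -> barely utilized
  }
}
{code}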



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5860) Add support for increase and decrease of container resources to NM Container Queuing

2016-11-08 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-5860:
-

 Summary: Add support for increase and decrease of container 
resources to NM Container Queuing 
 Key: YARN-5860
 URL: https://issues.apache.org/jira/browse/YARN-5860
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Arun Suresh
Assignee: Arun Suresh


Currently, the queuing framework (introduced in YARN-2877) in the NM that 
handles opportunistic containers preempts opportunistic containers only when 
resources are needed to start guaranteed containers.
It currently does not handle situations where a guaranteed container's 
resources have been increased. Conversely, if a guaranteed (or opportunistic) 
container's resources have been decreased, the NM must start queued 
opportunistic containers waiting on the newly available resources.
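
A sketch of the reaction logic described above, simplified to a single scalar 
resource; the class below is hypothetical and does not correspond to actual 
NM code:

{code}
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: how the NM queue could react to container resource
// updates, with resources reduced to one scalar for illustration.
class QueueReactionSketch {
  private long free;                                       // free resources
  private final Deque<Long> running = new ArrayDeque<>();  // opportunistic sizes
  private final Deque<Long> queued = new ArrayDeque<>();   // waiting sizes

  // A guaranteed container grew by 'delta': preempt opportunistic containers
  // until the increase fits.
  void onIncrease(long delta) {
    while (free < delta && !running.isEmpty()) {
      free += running.pop();  // preempt one opportunistic container
    }
    free -= delta;
  }

  // A container shrank by 'delta': start queued opportunistic containers that
  // now fit in the freed resources.
  void onDecrease(long delta) {
    free += delta;
    while (!queued.isEmpty() && queued.peek() <= free) {
      free -= queued.pop();   // start a queued container
    }
  }
}
{code}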




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648958#comment-15648958
 ] 

Hadoop QA commented on YARN-5611:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 10m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
39s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 55s{color} | {color:orange} root: The patch generated 15 new + 766 unchanged 
- 3 fixed = 781 total (was 769) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 14m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
31s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
41s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m  3s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}113m  
7s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
55s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}268m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestResourceTrackerService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5611 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838020/YARN-5611.0008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Commented] (YARN-5761) Separate QueueManager from Scheduler

2016-11-08 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648944#comment-15648944
 ] 

Subru Krishnan commented on YARN-5761:
--

First up, +1 on the proposal but shouldn't we first close the action items from 
our discussion summarized 
[here|https://issues.apache.org/jira/browse/YARN-5734?focusedCommentId=15612431]
 in YARN-5734 before we start adding patches? [~leftnoteasy]/[~xgong], thoughts?

> Separate QueueManager from Scheduler
> 
>
> Key: YARN-5761
> URL: https://issues.apache.org/jira/browse/YARN-5761
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Xuan Gong
>Assignee: Xuan Gong
>  Labels: oct16-medium
> Attachments: YARN-5761.1.patch, YARN-5761.1.rebase.patch, 
> YARN-5761.2.patch
>
>
> Currently, in the scheduler code, we are doing both queue management and 
> scheduling work. We'd better separate the queue manager out of the scheduler 
> logic; that would make it much easier and safer to extend.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5783) Verify applications are identified starved

2016-11-08 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5783:
---
Attachment: yarn-5783.YARN-4752.8.patch

My bad, didn't realize ApplicationAttemptId already implemented hashCode. 
Removing changes to ApplicationAttemptIdPBImpl. 

> Verify applications are identified starved
> --
>
> Key: YARN-5783
> URL: https://issues.apache.org/jira/browse/YARN-5783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: oct16-medium
> Attachments: yarn-5783.YARN-4752.1.patch, 
> yarn-5783.YARN-4752.2.patch, yarn-5783.YARN-4752.3.patch, 
> yarn-5783.YARN-4752.4.patch, yarn-5783.YARN-4752.5.patch, 
> yarn-5783.YARN-4752.6.patch, yarn-5783.YARN-4752.7.patch, 
> yarn-5783.YARN-4752.8.patch
>
>
> JIRA to track unit tests to verify the identification of starved 
> applications. An application should be marked starved only when:
> # Cluster allocation is over the configured threshold for preemption.
> # Preemption is enabled for a queue and any of the following:
> ## The queue is under its minshare for longer than minsharePreemptionTimeout
> ## One of the queue’s applications is under its fairshare for longer than 
> fairsharePreemptionTimeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5823) Update NMTokens in case of requests with only opportunistic containers

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648928#comment-15648928
 ] 

Hadoop QA commented on YARN-5823:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 42 unchanged - 1 fixed = 42 total (was 43) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
37s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
10s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 24s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
36s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
49s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5823 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838038/YARN-5823.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 278fe5c01b2c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbb133c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 

[jira] [Commented] (YARN-5356) NodeManager should communicate physical resource capability to ResourceManager

2016-11-08 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648923#comment-15648923
 ] 

Jason Lowe commented on YARN-5356:
--

Thanks for updating the patch!  The unit test failures appear to be unrelated.  
The TestQueuingContainerManager failure is tracked by YARN-5377.  The 
TestAMRestart failure is tracked by YARN-5043.  I filed YARN-5859 for the 
TestResourceLocalization failure.

+1 for the latest patch.  Committing this.


> NodeManager should communicate physical resource capability to ResourceManager
> --
>
> Key: YARN-5356
> URL: https://issues.apache.org/jira/browse/YARN-5356
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Nathan Roberts
>Assignee: Inigo Goiri
>  Labels: oct16-medium
> Attachments: YARN-5356.000.patch, YARN-5356.001.patch, 
> YARN-5356.002.patch, YARN-5356.002.patch, YARN-5356.003.patch, 
> YARN-5356.004.patch, YARN-5356.005.patch, YARN-5356.006.patch, 
> YARN-5356.007.patch, YARN-5356.008.patch, YARN-5356.009.patch, 
> YARN-5356.010.patch, YARN-5356.011.patch
>
>
> Currently ResourceUtilization contains absolute quantities of resource used 
> (e.g. 4096MB memory used). It would be good if the NM also communicated the 
> actual physical resource capabilities of the node so that the RM can use this 
> data to schedule more effectively (overcommit, etc.).
> Currently the only available information is the Resource the node registered 
> with (or later updated using updateNodeResource). However, these aren't 
> really sufficient to get a good view of how utilized a resource is. For 
> example, if a node reports 400% CPU utilization, does that mean it's 
> completely full, or barely utilized? Today there is no reliable way to figure 
> this out.
> [~elgoiri] - Lots of good work is happening in YARN-2965 so curious if you 
> have thoughts/opinions on this?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5859) TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource sometimes fails

2016-11-08 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648913#comment-15648913
 ] 

Jason Lowe commented on YARN-5859:
--

The test output:
{noformat}
2016-11-07 20:00:01,393 INFO  [Thread-275] event.AsyncDispatcher 
(AsyncDispatcher.java:register(213)) - Registering class 
org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationEventType
 for class 
org.apache.hadoop.yarn.event.EventHandler$$EnhancerByMockitoWithCGLIB$$f197772f
2016-11-07 20:00:01,394 INFO  [Thread-275] event.AsyncDispatcher 
(AsyncDispatcher.java:register(213)) - Registering class 
org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEventType
 for class 
org.apache.hadoop.yarn.event.EventHandler$$EnhancerByMockitoWithCGLIB$$f197772f
2016-11-07 20:00:01,403 INFO  [Thread-275] nodemanager.DirectoryCollection 
(DirectoryCollection.java:(185)) - Disk Validator: 
yarn.nodemanager.disk-validator is loaded.
2016-11-07 20:00:01,411 INFO  [Thread-275] nodemanager.DirectoryCollection 
(DirectoryCollection.java:(185)) - Disk Validator: 
yarn.nodemanager.disk-validator is loaded.
2016-11-07 20:00:01,554 INFO  [Thread-275] event.AsyncDispatcher 
(AsyncDispatcher.java:register(213)) - Registering class 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizationEventType
 for class 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$$EnhancerByMockitoWithCGLIB$$9a46a6a4
2016-11-07 20:00:01,555 INFO  [Thread-275] 
localizer.ResourceLocalizationService 
(ResourceLocalizationService.java:validateConf(232)) - per directory file limit 
= 8192
2016-11-07 20:00:01,596 INFO  [Thread-275] 
localizer.ResourceLocalizationService 
(ResourceLocalizationService.java:serviceInit(260)) - Disk Validator: 
yarn.nodemanager.disk-validator is loaded.
2016-11-07 20:00:01,598 INFO  [Thread-275] event.AsyncDispatcher 
(AsyncDispatcher.java:register(213)) - Registering class 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizerEventType
 for class 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker
2016-11-07 20:00:01,612 INFO  [AsyncDispatcher event handler] 
localizer.ResourceLocalizationService 
(ResourceLocalizationService.java:addResource(845)) - Downloading public rsrc:{ 
/tmp, 123, FILE,  }
{noformat}

I'm guessing the 200 millisecond timeout is too short sometimes if the unit 
test is running in a slow VM or there are other performance hiccups (GC, etc.).
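
One conventional way to harden such waits is to poll with a generous deadline 
instead of a fixed short timeout; a generic sketch (not the test's actual 
waitForPublicDownloadToStart helper):

{code}
import java.util.function.BooleanSupplier;

// Generic polling helper, for illustration only.
final class WaitUtilSketch {
  static boolean waitFor(BooleanSupplier condition, long timeoutMs, long pollMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (condition.getAsBoolean()) {
        return true;             // condition met before the deadline
      }
      Thread.sleep(pollMs);      // back off between checks
    }
    return condition.getAsBoolean(); // one final check at the deadline
  }
}
{code}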

> TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource 
> sometimes fails
> -
>
> Key: YARN-5859
> URL: https://issues.apache.org/jira/browse/YARN-5859
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Jason Lowe
>
> Saw the following test failure:
> {noformat}
> Running 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
> Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.011 sec 
> <<< FAILURE! - in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
> testParallelDownloadAttemptsForPublicResource(org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService)
>   Time elapsed: 0.586 sec  <<< FAILURE!
> java.lang.AssertionError: null
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.testParallelDownloadAttemptsForPublicResource(TestResourceLocalizationService.java:2108)
> {noformat}
> The assert occurred at this place in the code:
> {code}
>   // Waiting for download to start.
>   Assert.assertTrue(waitForPublicDownloadToStart(spyService, 1, 200));
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5859) TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource sometimes fails

2016-11-08 Thread Jason Lowe (JIRA)
Jason Lowe created YARN-5859:


 Summary: 
TestResourceLocalizationService#testParallelDownloadAttemptsForPublicResource 
sometimes fails
 Key: YARN-5859
 URL: https://issues.apache.org/jira/browse/YARN-5859
 Project: Hadoop YARN
  Issue Type: Bug
  Components: test
Reporter: Jason Lowe


Saw the following test failure:
{noformat}
Running 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 12.011 sec <<< 
FAILURE! - in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService
testParallelDownloadAttemptsForPublicResource(org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService)
  Time elapsed: 0.586 sec  <<< FAILURE!
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.TestResourceLocalizationService.testParallelDownloadAttemptsForPublicResource(TestResourceLocalizationService.java:2108)
{noformat}
The assert occurred at this place in the code:
{code}
  // Waiting for download to start.
  Assert.assertTrue(waitForPublicDownloadToStart(spyService, 1, 200));
{code}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-08 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648821#comment-15648821
 ] 

Jian He commented on YARN-4597:
---

bq. I tried to reuse shouldLaunchContainer for the killedBeforeStart flag, but 
I see too many test failures, since a lot of the code expects the 
CONTAINER_LAUNCH event to also be raised. If it's ok with you, I will give it 
another look once we converge on everything else.

Sure, other things look good to me.

> Add SCHEDULE to NM container lifecycle
> --
>
> Key: YARN-4597
> URL: https://issues.apache.org/jira/browse/YARN-4597
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Chris Douglas
>Assignee: Arun Suresh
>  Labels: oct16-hard
> Attachments: YARN-4597.001.patch, YARN-4597.002.patch, 
> YARN-4597.003.patch, YARN-4597.004.patch, YARN-4597.005.patch, 
> YARN-4597.006.patch, YARN-4597.007.patch
>
>
> Currently, the NM immediately launches containers after resource 
> localization. Several features could be more cleanly implemented if the NM 
> included a separate stage for reserving resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5858) TestDiskFailures.testLogDirsFailures fails on trunk

2016-11-08 Thread Varun Saxena (JIRA)
Varun Saxena created YARN-5858:
--

 Summary: TestDiskFailures.testLogDirsFailures fails on trunk
 Key: YARN-5858
 URL: https://issues.apache.org/jira/browse/YARN-5858
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Varun Saxena
Priority: Minor


{noformat}
java.lang.AssertionError: NodeManager could not identify disk failure.
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.yarn.server.TestDiskFailures.verifyDisksHealth(TestDiskFailures.java:239)
at 
org.apache.hadoop.yarn.server.TestDiskFailures.testDirsFailures(TestDiskFailures.java:202)
at 
org.apache.hadoop.yarn.server.TestDiskFailures.testLogDirsFailures(TestDiskFailures.java:111)
{noformat}

Refer to https://builds.apache.org/job/PreCommit-YARN-Build/13828/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4330) MiniYARNCluster is showing multiple Failed to instantiate default resource calculator warning messages.

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648688#comment-15648688
 ] 

Hadoop QA commented on YARN-4330:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m  
1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 74 unchanged - 0 fixed = 75 total (was 74) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
30s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m  
4s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  4m 51s{color} 
| {color:red} hadoop-yarn-server-tests in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.TestDiskFailures |
|   | hadoop.yarn.server.TestContainerManagerSecurity |
|   | hadoop.yarn.server.TestMiniYarnClusterNodeUtilization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-4330 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838033/YARN-4330.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 7427c8f241a5 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbb133c |
| Default Java | 

[jira] [Created] (YARN-5857) TestLogAggregationService.testFixedSizeThreadPool fails on trunk

2016-11-08 Thread Varun Saxena (JIRA)
Varun Saxena created YARN-5857:
--

 Summary: TestLogAggregationService.testFixedSizeThreadPool fails 
on trunk
 Key: YARN-5857
 URL: https://issues.apache.org/jira/browse/YARN-5857
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Varun Saxena
Priority: Minor


{noformat}
testFixedSizeThreadPool(org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService)
  Time elapsed: 0.11 sec  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService.testFixedSizeThreadPool(TestLogAggregationService.java:1139)
{noformat}

Refer to https://builds.apache.org/job/PreCommit-YARN-Build/13829/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5857) TestLogAggregationService.testFixedSizeThreadPool fails intermittently on trunk

2016-11-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5857:
---
Summary: TestLogAggregationService.testFixedSizeThreadPool fails 
intermittently on trunk  (was: 
TestLogAggregationService.testFixedSizeThreadPool fails on trunk)

> TestLogAggregationService.testFixedSizeThreadPool fails intermittently on 
> trunk
> ---
>
> Key: YARN-5857
> URL: https://issues.apache.org/jira/browse/YARN-5857
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Priority: Minor
>
> {noformat}
> testFixedSizeThreadPool(org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService)
>   Time elapsed: 0.11 sec  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<2>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService.testFixedSizeThreadPool(TestLogAggregationService.java:1139)
> {noformat}
> Refer to https://builds.apache.org/job/PreCommit-YARN-Build/13829/testReport/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5823) Update NMTokens in case of requests with only opportunistic containers

2016-11-08 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5823:
-
Attachment: YARN-5823.004.patch

Rebasing against trunk and fixing the failing test cases.

> Update NMTokens in case of requests with only opportunistic containers
> --
>
> Key: YARN-5823
> URL: https://issues.apache.org/jira/browse/YARN-5823
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
>Priority: Blocker
> Attachments: YARN-5823.001.patch, YARN-5823.002.patch, 
> YARN-5823.003.patch, YARN-5823.004.patch
>
>
> At the moment, when an {{AllocateRequest}} contains only opportunistic 
> {{ResourceRequests}}, the updated NMTokens are not properly added to the 
> {{AllocateResponse}}.
> In such a case, the AM does not get back the NMTokens required to start the 
> opportunistic containers at the respective nodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-08 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648602#comment-15648602
 ] 

Sunil G commented on YARN-5611:
---

Thanks, [~rohithsharma], for the patch.

*A few more comments.*
1. {{ApplicationClientProtocol#updateApplicationTimeouts}}: could this be an 
Evolving API?
2. {{ApplicationClientProtocolPBClientImpl#updateApplicationTimeouts}}: does 
the exception-handling block need a return? The RPCUtil method will throw an 
exception, correct?
3. In {{ApplicationClientProtocolPBServiceImpl#updateApplicationTimeouts}}, we 
use {{catch (YarnException | IOException e)}}.
4. On a different note, I think COMPLETED_APP_STATES could be defined by 
RMAppImpl itself, which could expose a read-only API. This would help clean up 
the local state definitions; it could be done in another patch.
5. Given the writeLock in {{RMAppImpl#updateApplicationTimeout}}, why do we 
need another lock in RMAppManager#updateApplicationTimeout? Is this to handle 
some race conditions while the app update event is waiting in the StateStore 
dispatcher queue? I would love to have some more comments in these 
synchronized blocks or write locks to give a brief explanation; it will help 
us later.
6. RMApp is generally considered read-only; updateApplicationTimeout would 
violate that. We could place this API in RMAppImpl itself, and on the client 
side convert to an RMAppImpl object and use it. ProportionPolicy, the new 
Global Scheduler, etc. already work this way.
7. The timeout is to be part of ApplicationReport, correct? Is that part of 
this patch?
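
On point 1, marking the new method Evolving would look roughly like the sketch 
below, assuming Hadoop's standard classification annotations; the 
request/response types are stand-ins, not the patch's actual signatures:

{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Sketch only: 'Object' stands in for whatever request/response records the
// patch defines for updating application timeouts.
public interface TimeoutUpdateSketch {
  @InterfaceAudience.Public
  @InterfaceStability.Evolving
  Object updateApplicationTimeouts(Object request) throws Exception;
}
{code}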

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.0006.patch, YARN-5611.0007.patch, YARN-5611.0008.patch, 
> YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. Add a client 
> API to update the lifetime of an application.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4355) NPE while processing localizer heartbeat

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648599#comment-15648599
 ] 

Hadoop QA commented on YARN-4355:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 3 new + 232 unchanged - 6 fixed = 235 total (was 238) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 51s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-4355 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838034/YARN-4355.05.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7dd455c2d9bb 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbb133c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13829/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13829/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13829/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 

[jira] [Updated] (YARN-4355) NPE while processing localizer heartbeat

2016-11-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4355:
---
Attachment: YARN-4355.05.patch

Attaching a patch fixing 2 nits pointed out by Naga above.

> NPE while processing localizer heartbeat
> 
>
> Key: YARN-4355
> URL: https://issues.apache.org/jira/browse/YARN-4355
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Varun Saxena
> Attachments: YARN-4355.01.patch, YARN-4355.02.patch, 
> YARN-4355.03.patch, YARN-4355.04.patch, YARN-4355.05.patch
>
>
> While analyzing YARN-4354 I noticed a nodemanager was getting NPEs while 
> processing a private localizer heartbeat. I think there's a race where we 
> can clean up resources for an application, and therefore remove the app's 
> local resource tracker, just as we are trying to handle the localizer 
> heartbeat.
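One defensive direction, sketched with assumed names (not necessarily what the 
attached patches do):
{code}
// Sketch: tolerate the app's resource tracker disappearing while a localizer
// heartbeat is in flight, instead of dereferencing a null tracker.
LocalResourcesTracker tracker =
    getLocalResourcesTracker(LocalResourceVisibility.PRIVATE, user, appId);
if (tracker == null) {
  // The application was cleaned up concurrently; tell the localizer to stop.
  return createShutdownResponse();  // hypothetical response builder
}
{code}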



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4330) MiniYARNCluster is showing multiple Failed to instantiate default resource calculator warning messages.

2016-11-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4330:
---
Attachment: YARN-4330.003.patch

Uploading a patch after changing isEnabled to isHardwareDetectionEnabled.

> MiniYARNCluster is showing multiple  Failed to instantiate default resource 
> calculator warning messages.
> 
>
> Key: YARN-4330
> URL: https://issues.apache.org/jira/browse/YARN-4330
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test, yarn
>Affects Versions: 2.8.0
> Environment: OSX, JUnit
>Reporter: Steve Loughran
>Assignee: Varun Saxena
>Priority: Blocker
>  Labels: oct16-hard
> Attachments: YARN-4330.002.patch, YARN-4330.003.patch, 
> YARN-4330.01.patch
>
>
> Whenever I try to start a MiniYARNCluster on Branch-2 (commit #0b61cca), I 
> see multiple stack traces warning me that a resource calculator plugin could 
> not be created
> {code}
> (ResourceCalculatorPlugin.java:getResourceCalculatorPlugin(184)) - 
> java.lang.UnsupportedOperationException: Could not determine OS: Failed to 
> instantiate default resource calculator.
> java.lang.UnsupportedOperationException: Could not determine OS
> {code}
> This is a minicluster. It doesn't need resource calculation. It certainly 
> doesn't need test logs being cluttered with even more stack traces, which 
> will only generate false alarms about tests failing. 
> There needs to be a way to turn this off, and the minicluster should have it 
> that way by default.
> Being ruthless and marking as a blocker, because it's a fairly major 
> regression for anyone testing with the minicluster.
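A sketch of the direction discussed above; the method name follows the 003 
patch comment, and the surrounding code is an assumption:
{code}
// Sketch: let tests disable hardware detection so the minicluster never
// tries (and fails) to instantiate an OS-specific resource calculator.
public static ResourceCalculatorPlugin getResourceCalculatorPlugin(
    Class<? extends ResourceCalculatorPlugin> clazz, Configuration conf) {
  if (!isHardwareDetectionEnabled(conf)) {  // renamed from isEnabled
    return null;  // callers already tolerate a null plugin
  }
  return new ResourceCalculatorPlugin();  // simplified default detection path
}
{code}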



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648473#comment-15648473
 ] 

Varun Saxena commented on YARN-5739:


Thanks [~gtCarrera9] for the patch.
A few comments.

# The REST endpoint {{/apps/\{appid\}/entities}} sounds more like fetching all 
entities across entity types. Should it be {{/apps/\{appid\}/entity-types}}?
# Instead of using EntityTypeReader#getNextRowKey, we can use the method Rohith 
has copied over from HBase code to GenericEntityReader in YARN-5585. I had 
asked him to move it to TimelineStorageUtils. Once YARN-5585 goes in, you can 
use that.
# In the Scan object, should we use Scan#setCaching(1), as we do not need more 
than one row for every entity type? We can also use a PageFilter of size 1 
(see the sketch after this list).
# In EntityTypeReader#getResult, setting Scan#setMaxVersions is not required; 
the default value of 1 should do.
# In EntityTypeReader#readResults, is setting the PrefixFilter necessary? 
Shouldn't we be using Scan#setRowPrefixFilter and pass the start row in that?
# It depends on perspective, but EntityTypeReader returning a set of entities 
looks somewhat weird. Do we need to make it a subclass of GenericEntityReader 
just for reusing parts of augmentParams?
# Name the relevant methods in the web services, reader manager, and reader 
implementation consistently, i.e. either getEntityTypes or listEntityTypes.
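
For points 3 to 5, a sketch of the kind of Scan configuration being suggested; 
the row-key prefix value is an assumption:
{code}
// Sketch of points 3-5: one row per entity type is enough, so cap the scan.
Scan scan = new Scan();
scan.setRowPrefixFilter(entityRowKeyPrefix);  // replaces a manual PrefixFilter
scan.setCaching(1);                           // fetch a single row per RPC
scan.setFilter(new PageFilter(1));            // stop after the first match
// setMaxVersions is left at its default of 1 (point 4).
{code}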

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch
>
>
> Right now we only show a part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to 
> provide an "entity browser" for each YARN application. Actually, simply 
> dumping out available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far away from this goal. Right now I believe 
> the only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5849) Automatically create YARN control group for pre-mounted cgroups

2016-11-08 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648465#comment-15648465
 ] 

Miklos Szegedi commented on YARN-5849:
--

Thank you for the comment, [~bibinchundatt]!

1. The patch applies to the scenario when enable-mount is false; the group is 
created in the other cases. The current implementation throws an exception if 
any controller does not have the group created and writable. What is your more 
specific concern?
2. If I put the line into the else branch, we would miss the ret.put call in 
case the group was just created. I could put ret.put inside both branches, but 
I think that would make the code more complicated (see the sketch below). Does 
this answer your concern?
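
To make point 2 concrete, a sketch of the control flow being described; the 
variable and helper names are assumptions, not the patch's code:
{code}
// Sketch: create the per-controller YARN group if it is missing, then record
// the path unconditionally, so ret.put runs in both cases.
for (CGroupController controller : controllers) {
  File yarnHierarchy = new File(getMountPath(controller), cGroupsHierarchy);
  if (!yarnHierarchy.exists() && !yarnHierarchy.mkdirs()) {
    throw new ResourceHandlerException(
        "Cannot create YARN cgroup " + yarnHierarchy);
  }
  // Outside the if/else: covers both pre-created and just-created groups.
  ret.put(controller, yarnHierarchy.getAbsolutePath());
}
{code}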

> Automatically create YARN control group for pre-mounted cgroups
> ---
>
> Key: YARN-5849
> URL: https://issues.apache.org/jira/browse/YARN-5849
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.7.3, 3.0.0-alpha1, 3.0.0-alpha2
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-5849.000.patch, YARN-5849.001.patch
>
>
> YARN can be launched with linux-container-executor.cgroups.mount set to 
> false. It will then search for the cgroup mount paths set up by the 
> administrator by parsing the /etc/mtab file. You can also specify 
> resource.percentage-physical-cpu-limit to limit the CPU resources assigned to 
> containers.
> linux-container-executor.cgroups.hierarchy is the root of the settings of all 
> YARN containers. If this is specified but not created, YARN will fail at 
> startup:
> Caused by: java.io.FileNotFoundException: 
> /cgroups/cpu/hadoop-yarn/cpu.cfs_period_us (Permission denied)
> org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler.updateCgroup(CgroupsLCEResourcesHandler.java:263)
> This JIRA is about automatically creating the YARN control group in the case 
> above, which reduces the cost of administration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5856) Unnecessary duplicate start container request sent to NM State store

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648458#comment-15648458
 ] 

Hadoop QA commented on YARN-5856:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
40s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5856 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12838023/YARN-5856.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9daee22e31f8 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / dbb133c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13827/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13827/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Unnecessary duplicate start container request sent to NM State store
> 
>
> Key: YARN-5856
> URL: https://issues.apache.org/jira/browse/YARN-5856
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: 

[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648429#comment-15648429
 ] 

Hadoop QA commented on YARN-5739:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
15s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
48s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
33s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
9s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 4 new + 
29 unchanged - 1 fixed = 33 total (was 30) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
48s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m  
8s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5739 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836696/YARN-5739-YARN-5355.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8dfdc2fd702f 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 25b1917 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13826/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13826/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 

[jira] [Commented] (YARN-5856) Unnecessary duplicate start container request sent to NM State store

2016-11-08 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648419#comment-15648419
 ] 

Naganarasimha G R commented on YARN-5856:
-

Trivial fix. Will wait for the Jenkins run and commit later! 

> Unnecessary duplicate start container request sent to NM State store
> 
>
> Key: YARN-5856
> URL: https://issues.apache.org/jira/browse/YARN-5856
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-5856.01.patch
>
>
> In ContainerManagerImpl#startContainerInternal, a duplicate store container 
> request is sent to NM State store which is unnecessary.
> {code}
> this.context.getNMStateStore().storeContainer(containerId,
> containerTokenIdentifier.getVersion(), request);
> dispatcher.getEventHandler().handle(
>   new ApplicationContainerInitEvent(container));
> this.context.getNMStateStore().storeContainer(containerId,
> containerTokenIdentifier.getVersion(), request);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5856) Unnecessary duplicate start container request sent to NM State store

2016-11-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5856:
---
Attachment: YARN-5856.01.patch

> Unnecessary duplicate start container request sent to NM State store
> 
>
> Key: YARN-5856
> URL: https://issues.apache.org/jira/browse/YARN-5856
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Saxena
>Assignee: Varun Saxena
> Attachments: YARN-5856.01.patch
>
>
> In ContainerManagerImpl#startContainerInternal, a duplicate store container 
> request is sent to NM State store which is unnecessary.
> {code}
> this.context.getNMStateStore().storeContainer(containerId,
> containerTokenIdentifier.getVersion(), request);
> dispatcher.getEventHandler().handle(
>   new ApplicationContainerInitEvent(container));
> this.context.getNMStateStore().storeContainer(containerId,
> containerTokenIdentifier.getVersion(), request);
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5856) Unnecessary duplicate start container request sent to NM State store

2016-11-08 Thread Varun Saxena (JIRA)
Varun Saxena created YARN-5856:
--

 Summary: Unnecessary duplicate start container request sent to NM 
State store
 Key: YARN-5856
 URL: https://issues.apache.org/jira/browse/YARN-5856
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Varun Saxena
Assignee: Varun Saxena


In ContainerManagerImpl#startContainerInternal, a duplicate store container 
request is sent to NM State store which is unnecessary.
{code}
this.context.getNMStateStore().storeContainer(containerId,
containerTokenIdentifier.getVersion(), request);
dispatcher.getEventHandler().handle(
  new ApplicationContainerInitEvent(container));
this.context.getNMStateStore().storeContainer(containerId,
containerTokenIdentifier.getVersion(), request);
{code}
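
The fix is presumably to persist the container once; a sketch:
{code}
// Sketch: store the container a single time, before dispatching the event;
// the duplicate storeContainer call that followed the dispatch is removed.
this.context.getNMStateStore().storeContainer(containerId,
    containerTokenIdentifier.getVersion(), request);
dispatcher.getEventHandler().handle(
    new ApplicationContainerInitEvent(container));
{code}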



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-08 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15648321#comment-15648321
 ] 

Sangjin Lee commented on YARN-5739:
---

Kicked off jenkins again after the rebase.

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch
>
>
> Right now we only show a part of the available timeline entity data in the new 
> YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to 
> provide an "entity browser" for each YARN application. Actually, simply 
> dumping out available timeline entities (with proper pagination, of course) 
> would be pretty helpful for UI users. 
> On the timeline side, we're not far away from this goal. Right now I believe 
> the only thing missing is to list all available entity types within one 
> application. The challenge here is that we're not storing this data for each 
> application, but given that this kind of call is relatively rare (compared to 
> writes and updates) we can perform some scanning at read time. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-08 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5611:

Attachment: YARN-5611.0008.patch

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.0006.patch, YARN-5611.0007.patch, YARN-5611.0008.patch, 
> YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5810) RM Loglevel setting shouldn't return a valid value for a non-existing class

2016-11-08 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5810:
---
Labels: log-level newbie  (was: )

> RM Loglevel setting shouldn't return a valid value for a non-existing class
> ---
>
> Key: YARN-5810
> URL: https://issues.apache.org/jira/browse/YARN-5810
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>  Labels: log-level, newbie
>
> The RM log-level setting WebUI should not return a valid value, like INFO, 
> for a non-existent class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5810) RM Loglevel setting shouldn't return a valid value for a non-existing class

2016-11-08 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5810:
---
Assignee: (was: Yufei Gu)

> RM Loglevel setting shouldn't return a valid value for a non-existing class
> ---
>
> Key: YARN-5810
> URL: https://issues.apache.org/jira/browse/YARN-5810
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>  Labels: log-level, newbie
>
> The RM log-level setting WebUI should not return a valid value, like INFO, 
> for a non-existent class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5855) DELETE call sometimes returns success when app is not deleted

2016-11-08 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-5855:


 Summary: DELETE call sometimes returns success when app is not 
deleted
 Key: YARN-5855
 URL: https://issues.apache.org/jira/browse/YARN-5855
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Billie Rinaldi
Assignee: Gour Saha


Looking into this issue with [~gsaha], we noticed that multiple things can 
contribute to an app continuing to run after a DELETE call, which consists of a 
stop and a destroy operation. One problem is that the stop call is asynchronous 
unless a force flag is set: without the force flag, a message is sent to the AM 
and success is returned; with the flag, yarnClient.killRunningApplication is 
called. (There is also an option to wait for a fixed amount of time for the app 
to stop before returning, but DELETE is not setting this option, and force is 
preferable in this case.) The other issue is that the destroy operation is 
attempted in a loop, but if the number of retries is exceeded the call still 
returns a 204 response.
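
A sketch of the semantics this suggests, with hypothetical helper names (the 
real service code is not shown here):
{code}
// Sketch: make DELETE synchronous and honest about failure.
public Response delete(String appName) throws IOException {
  stopApplication(appName, true /* force: kill via YarnClient */);
  boolean destroyed = destroyWithRetries(appName, MAX_RETRIES);
  if (!destroyed) {
    // Report the failure instead of the misleading 204 described above.
    return Response.serverError().build();
  }
  return Response.noContent().build();  // 204 only on real success
}
{code}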



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3955) Support for priority ACLs in CapacityScheduler

2016-11-08 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-3955:
--
Attachment: YARN-3955.v0.patch

Attaching the correct v0 patch. [~leftnoteasy] [~jianhe], please share your thoughts.

> Support for priority ACLs in CapacityScheduler
> --
>
> Key: YARN-3955
> URL: https://issues.apache.org/jira/browse/YARN-3955
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: ApplicationPriority-ACL.pdf, 
> ApplicationPriority-ACLs-v2.pdf, YARN-3955.v0.patch, YARN-3955.wip1.patch
>
>
> Support will be added for user-level access permission to use different 
> application priorities. This is to avoid situations where all users try 
> running at max priority in the cluster, thus degrading the value of 
> priorities.
> Access Control Lists can be set per priority level within each queue. Below 
> is an example configuration that can be added in the capacity scheduler 
> configuration file for each queue level.
> yarn.scheduler.capacity.root...acl=user1,user2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3955) Support for priority ACLs in CapacityScheduler

2016-11-08 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-3955:
--
Attachment: (was: YARN-3955.v0.patch)

> Support for priority ACLs in CapacityScheduler
> --
>
> Key: YARN-3955
> URL: https://issues.apache.org/jira/browse/YARN-3955
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.7.1
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: ApplicationPriority-ACL.pdf, 
> ApplicationPriority-ACLs-v2.pdf, YARN-3955.wip1.patch
>
>
> Support will be added for user-level access permission to use different 
> application priorities. This is to avoid situations where all users try 
> running at max priority in the cluster, thus degrading the value of 
> priorities.
> Access Control Lists can be set per priority level within each queue. Below 
> is an example configuration that can be added in the capacity scheduler 
> configuration file for each queue level.
> yarn.scheduler.capacity.root...acl=user1,user2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5534) Allow whitelisted volume mounts

2016-11-08 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647920#comment-15647920
 ] 

Varun Vasudev commented on YARN-5534:
-

[~luhuichun] - can you please address the issues in the Jenkins report -
1) Please add some unit tests for the patch
2) Please address the failing unit test

Thanks!

> Allow whitelisted volume mounts 
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: luhuichun
> Attachments: YARN-5534.001.patch
>
>
> 1. Introduction 
> Mounting files or directories from the host is one way of passing 
> configuration and other information into a Docker container. 
> We could allow the user to set a list of mounts in the environment of 
> ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2). 
> These would be mounted read-only to the specified target locations. This has 
> been resolved in YARN-4595.
> 2. Problem definition
> But mounting arbitrary volumes into a Docker container can be a security risk.
> 3. Possible solutions
> One approach to providing safe mounts is to allow the cluster administrator 
> to configure a set of parent directories as whitelisted mounting directories.
> Add a property named yarn.nodemanager.volume-mounts.white-list; when the 
> container executor does mount checking, only the allowed directories or their 
> sub-directories can be mounted.
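A sketch of the whitelist check this describes; the property name comes from 
the description above, while the method and variable names are assumptions:
{code}
// Sketch: a source dir is allowed only if it equals, or lives under, one of
// the whitelisted parent directories.
boolean isWhitelistedMount(String source, List<String> whitelist)
    throws IOException {
  String canonical = new File(source).getCanonicalPath();
  for (String parent : whitelist) {
    // whitelist comes from yarn.nodemanager.volume-mounts.white-list
    String parentCanonical = new File(parent).getCanonicalPath();
    if (canonical.equals(parentCanonical)
        || canonical.startsWith(parentCanonical + File.separator)) {
      return true;
    }
  }
  return false;
}
{code}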



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647732#comment-15647732
 ] 

Hadoop QA commented on YARN-5611:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  9m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 50s{color} | {color:orange} root: The patch generated 18 new + 766 unchanged 
- 3 fixed = 784 total (was 769) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
32s{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
41s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 
28s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 40m 21s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}112m  
0s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
55s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}260m 38s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestResourceTrackerService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5611 |
| JIRA Patch URL | 

[jira] [Commented] (YARN-5843) Documentation wrong for entityType/events rest end point

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647560#comment-15647560
 ] 

Hadoop QA commented on YARN-5843:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:e809691 |
| JIRA Issue | YARN-5843 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12837973/YARN-5843.0002.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 6cf6962fe4cf 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 026b39a |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13824/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Documentation wrong for entityType/events rest end point
> 
>
> Key: YARN-5843
> URL: https://issues.apache.org/jira/browse/YARN-5843
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: YARN-5843.0001.patch, YARN-5843.0002.patch
>
>
> http(s)://<address:port>/ws/v1/timeline/{entityType}/events
> {noformat}
> entityIds - The entity IDs to retrieve events for.
> limit - A limit on the number of events to return for each entity. If null, 
> defaults to 100 events per entity.
> windowStart - If not null, retrieves only events later than the given time 
> (exclusive)
> windowEnd - If not null, retrieves only events earlier than the given time 
> (inclusive)
> eventTypes - Restricts the events returned to the given types. If null, 
> events of all types will be returned.
> {noformat}
> The parameters should be
> *entityId*
> *eventType*
> Mention that comma-separated *entityId* and *eventType* values are accepted 
> for multiple arguments.
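With the corrected parameter names, a request would look like the following 
(host, IDs, and types are placeholders):
{noformat}
GET http(s)://<address:port>/ws/v1/timeline/{entityType}/events?entityId=id1,id2&eventType=type1,type2
{noformat}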



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5843) Documentation wrong for entityType/events rest end point

2016-11-08 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5843:
---
Attachment: YARN-5843.0002.patch

Thank you [~varun_saxena] for the comments.
Updated patch handling all the comments.

> Documentation wrong for entityType/events rest end point
> 
>
> Key: YARN-5843
> URL: https://issues.apache.org/jira/browse/YARN-5843
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Minor
> Attachments: YARN-5843.0001.patch, YARN-5843.0002.patch
>
>
> http(s)://<address:port>/ws/v1/timeline/{entityType}/events
> {noformat}
> entityIds - The entity IDs to retrieve events for.
> limit - A limit on the number of events to return for each entity. If null, 
> defaults to 100 events per entity.
> windowStart - If not null, retrieves only events later than the given time 
> (exclusive)
> windowEnd - If not null, retrieves only events earlier than the given time 
> (inclusive)
> eventTypes - Restricts the events returned to the given types. If null, 
> events of all types will be returned.
> {noformat}
> The parameters should be
> *entityId*
> *eventType*
> Mention that comma-separated *entityId* and *eventType* values are accepted 
> for multiple arguments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4597) Add SCHEDULE to NM container lifecycle

2016-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647435#comment-15647435
 ] 

Hadoop QA commented on YARN-4597:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
47s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  2m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  9m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 55s{color} | {color:orange} root: The patch generated 12 new + 1049 
unchanged - 14 fixed = 1061 total (was 1063) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  3m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 0 new + 235 unchanged - 1 fixed = 235 total (was 236) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-server-tests in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
31s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit 

[jira] [Commented] (YARN-5184) Fix up incompatible changes introduced on ContainerStatus and NodeReport

2016-11-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647241#comment-15647241
 ] 

Steve Loughran commented on YARN-5184:
--

I'd rather not make things abstract for 3.x.

* Every incompatible API change is another barrier to migration. We don't want 
Hadoop 3 to be Python 3.
* Some of us regularly build against trunk already; I'm doing that with Spark 
and the s3guard branch. Incompatible changes really complicate my life.

> Fix up incompatible changes introduced on ContainerStatus and NodeReport
> 
>
> Key: YARN-5184
> URL: https://issues.apache.org/jira/browse/YARN-5184
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 2.8.0, 2.9.0
>Reporter: Karthik Kambatla
>Assignee: Sangjin Lee
>Priority: Blocker
> Attachments: YARN-5184-branch-2.8.poc.patch, 
> YARN-5184-branch-2.poc.patch
>
>
> YARN-2882 and YARN-5430 broke compatibility by adding abstract methods to 
> ContainerStatus. Since ContainerStatus is a Public-Stable class, adding 
> abstract methods to this class breaks any extensions. 
> To fix this, we should add default implementations to these new methods and 
> not leave them as abstract. 
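A sketch of the proposed direction: ship concrete defaults for the new methods 
so existing subclasses keep compiling. The method shown is illustrative, not a 
complete list of the affected APIs:
{code}
// Sketch: instead of adding a new abstract method to the Public-Stable class
//   public abstract ExecutionType getExecutionType();
// provide a default body so pre-existing extensions are not broken:
public ExecutionType getExecutionType() {
  throw new UnsupportedOperationException(
      "subclass must implement getExecutionType");
}
{code}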



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5845) Skip aclUpdated event publish to timelineserver or recovery

2016-11-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647195#comment-15647195
 ] 

Varun Saxena commented on YARN-5845:


I think as long as we send them as 2 separate events it should be fine 
backward-compatibility-wise. If you ask me, there would be no need for 2 
separate events if backward compatibility were not a consideration: the ACLs 
in the application submission context are not going to change through the 
lifetime of an application as per the current implementation.
If application ACLs were ever to change for a running application in the 
future, we could add the required interface in SystemMetricsPublisher and 
publish an ACL updated event.
SMP is a private interface, so changing it should not be an issue.

Moreover, for ATSv2 I think we need not even send an ACL updated event. We can 
simply fill the info in entity info instead of having a separate event and an 
event info. This will aid easy filtering using info filters (see the sketch 
below).

cc [~rohithsharma]: as you were doing the ATSv2 integration with Tez, can you 
confirm whether the last point would be fine for Tez? IIUC, the ATSv2 
interaction in Tez is a newly written piece of code with no expectation of 
receiving the same events with the same info as in ATSv1. Even if it were 
incompatible, we have nowhere claimed the interface and responses to be 
compatible anyway.
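
A sketch of the entity-info alternative; the info key and the helper that 
builds the entity are assumptions:
{code}
// Sketch: expose the view ACLs as entity info so info filters can match them,
// rather than publishing a separate YARN_APPLICATION_ACLS_UPDATED event.
TimelineEntity entity = createApplicationEntity(app);  // hypothetical helper
entity.addInfo("YARN_APPLICATION_VIEW_ACLS", appViewACLs);
{code}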

> Skip aclUpdated event publish to timelineserver or recovery
> ---
>
> Key: YARN-5845
> URL: https://issues.apache.org/jira/browse/YARN-5845
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5845.0001.patch, YARN-5845.0002.patch
>
>
> Currently the ACL update event is sent to the timeline server even on 
> recovery ({{RMAppManager#createAndPopulateNewRMApp}}).
> For 10K completed applications, when the RM is restarted, 10K ACL updated 
> events are added to the timeline server, causing unnecessary overloading of 
> the system:
> {code}
> String appViewACLs = submissionContext.getAMContainerSpec()
> .getApplicationACLs().get(ApplicationAccessType.VIEW_APP);
> rmContext.getSystemMetricsPublisher().appACLsUpdated(
> application, appViewACLs, System.currentTimeMillis());
> {code}
> *Events on each RM restart*
> {noformat}
> "events": [{
> "timestamp": 1478520292543,
> "eventtype": "YARN_APPLICATION_ACLS_UPDATED",
> "eventinfo": {}
> }, {
> "timestamp": 1478519600537,
> "eventtype": "YARN_APPLICATION_ACLS_UPDATED",
> "eventinfo": {}
> }, {
> "timestamp": 1478519557101,
> "eventtype": "YARN_APPLICATION_ACLS_UPDATED",
> "eventinfo": {}
> }, 
> {noformat}
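>
> A minimal sketch of the kind of guard that would skip the publish on recovery 
> ({{createAndPopulateNewRMApp}} already takes a recovery flag; the exact 
> plumbing in the patch may differ):
> {code}
> if (!isRecovery) {
>   // Publish the ACL update only for newly submitted applications,
>   // not for applications replayed from the state store on RM restart.
>   String appViewACLs = submissionContext.getAMContainerSpec()
>       .getApplicationACLs().get(ApplicationAccessType.VIEW_APP);
>   rmContext.getSystemMetricsPublisher().appACLsUpdated(
>       application, appViewACLs, System.currentTimeMillis());
> }
> {code}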



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5820) yarn node CLI help should be clearer

2016-11-08 Thread Ajith S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15647175#comment-15647175
 ] 

Ajith S commented on YARN-5820:
---

Using the Apache Commons CLI HelpFormatter wraps output after a certain line 
length, so long descriptions are automatically moved to the next line after 
formatting; hence the test case is modified to accommodate this.
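
A small self-contained illustration of that wrapping behaviour (the option and 
description are made up for the example):
{code}
import org.apache.commons.cli.HelpFormatter;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;

public class WrapDemo {
  public static void main(String[] args) {
    Options opts = new Options();
    opts.addOption(Option.builder("states").hasArg().argName("States")
        .desc("Works with -list to filter nodes based on input "
            + "comma-separated list of node states.").build());
    // HelpFormatter wraps descriptions at its line width (74 by default),
    // pushing the overflow onto the following line, which is why the
    // expected strings in the test had to change.
    new HelpFormatter().printHelp("yarn node", opts);
  }
}
{code}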

> yarn node CLI help should be clearer
> 
>
> Key: YARN-5820
> URL: https://issues.apache.org/jira/browse/YARN-5820
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: client
>Affects Versions: 2.6.0
>Reporter: Grant Sohn
>Assignee: Ajith S
>Priority: Trivial
> Attachments: YARN-5820.01.patch, YARN-5820.02.patch, 
> YARN-5820.03.patch, YARN-5820.04.patch
>
>
> Current message is:
> {noformat}
> usage: node
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> It should be either this:
> {noformat}
> usage: yarn node [-list [-states <States>|-all] | -status <NodeId>]
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> or that:
> {noformat}
> usage: yarn node -list [-states <States>|-all]
>        yarn node -status <NodeId>
>  -all               Works with -list to list all nodes.
>  -list              List all running nodes. Supports optional use of
>                     -states to filter nodes based on node state, all -all
>                     to list all nodes.
>  -states <States>   Works with -list to filter nodes based on input
>                     comma-separated list of node states.
>  -status <NodeId>   Prints the status report of the node.
> {noformat}
> The latter is the least ambiguous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-08 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5611:

Attachment: YARN-5611.0007.patch

Updated patch, fixing the handling of parallel priority and timeout updates.

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.0006.patch, YARN-5611.0007.patch, YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 
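>
> A sketch of what such a client call might look like (the request class, 
> method name, and ISO-8601 value format are assumptions about the API this 
> patch is adding):
> {code}
> Map<ApplicationTimeoutType, String> timeouts = new HashMap<>();
> // Extend the application's lifetime to a new absolute expiry time.
> timeouts.put(ApplicationTimeoutType.LIFETIME, "2016-12-05T22:51:00.000+0530");
> yarnClient.updateApplicationTimeouts(
>     UpdateApplicationTimeoutsRequest.newInstance(applicationId, timeouts));
> {code}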



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


