[jira] [Commented] (YARN-6746) SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code

2017-06-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16071017#comment-16071017
 ] 

Hadoop QA commented on YARN-6746:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 46s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6746 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875341/YARN-6746.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 35d6187cf9e6 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 147df30 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16289/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16289/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16289/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code
> ---
>
> Key: YARN-6746
>  

[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-06-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070998#comment-16070998
 ] 

Hadoop QA commented on YARN-2113:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
33s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
30s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
15s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 46s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn-jdk1.8.0_131 with JDK v1.8.0_131 
generated 1 new + 59 unchanged - 1 fixed = 60 total (was 60) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 33s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 9 new + 155 unchanged - 0 fixed = 164 total (was 155) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}202m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestAMAuthorization |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
\\
\\
|| Subsystem || Report/Notes ||
| 

[jira] [Commented] (YARN-6753) Expose more ContainerImpl states from NM in ContainerStateProto

2017-06-30 Thread Roni Burd (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070923#comment-16070923
 ] 

Roni Burd commented on YARN-6753:
-

OK, so I'm hearing the consensus is to add a version number and hide the logic 
in the clients. Default values are set via protobuf and clients are unaware of 
the change. 



> Expose more ContainerImpl states from NM in ContainerStateProto 
> 
>
> Key: YARN-6753
> URL: https://issues.apache.org/jira/browse/YARN-6753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Roni Burd
>Priority: Minor
>
> The current NM protobuf definition exposes a subset of the NM internal state 
> via ContainerStateProto.
> We are currently building tools that can make use of more fine-grained states 
> such as LOCALIZING, LOCALIZED, EXIT_WITH_FAILURES, etc.
> The proposal is to add more internal states to the API.
> I'm not sure whether this is considered an incompatible change or not.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5067) Support specifying resources for AM containers in SLS

2017-06-30 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070915#comment-16070915
 ] 

Hudson commented on YARN-5067:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11959 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11959/])
YARN-5067 Support specifying resources for AM containers in SLS. (Yufei 
(haibochen: rev 147df300bf00b5f4ed250426b6ccdd69085466da)
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/SLSRunner.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/appmaster/AMSimulator.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/appmaster/MRAMSimulator.java
* (edit) 
hadoop-tools/hadoop-sls/src/test/java/org/apache/hadoop/yarn/sls/appmaster/TestAMSimulator.java
* (edit) 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/conf/SLSConfiguration.java


> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5067.001.patch, YARN-5067.002.patch, 
> YARN-5067.003.patch
>
>
> Currently, the resource of application masters in SLS is hardcoded to 
> mem=1024, vcores=1.
> We should be able to specify AM resources in the trace input file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6749) TestAppSchedulingInfo.testPriorityAccounting fails consistently

2017-06-30 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070896#comment-16070896
 ] 

Naganarasimha G R commented on YARN-6749:
-

Thanks to [~bibinchundatt] and [~ebadger] for the review, and to 
[~bibinchundatt] for the commit.

> TestAppSchedulingInfo.testPriorityAccounting fails consistently
> ---
>
> Key: YARN-6749
> URL: https://issues.apache.org/jira/browse/YARN-6749
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Eric Badger
>Assignee: Naganarasimha G R
> Fix For: 2.8.2
>
> Attachments: YARN-6749-branch-2.8.001.patch
>
>
> Broken by YARN-6467



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5067) Support specifying resources for AM containers in SLS

2017-06-30 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070889#comment-16070889
 ] 

Yufei Gu commented on YARN-5067:


Thanks [~haibo.chen] for the review and commit!

> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5067.001.patch, YARN-5067.002.patch, 
> YARN-5067.003.patch
>
>
> Currently, the resource of application masters in SLS is hardcoded to 
> mem=1024, vcores=1.
> We should be able to specify AM resources in the trace input file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5067) Support specifying resources for AM containers in SLS

2017-06-30 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070887#comment-16070887
 ] 

Haibo Chen commented on YARN-5067:
--

Thanks [~yufeigu] for your patch. I have committed it to trunk!

> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5067.001.patch, YARN-5067.002.patch, 
> YARN-5067.003.patch
>
>
> Currently, the resource of application masters in SLS is hardcoded to 
> mem=1024, vcores=1.
> We should be able to specify AM resources in the trace input file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5067) Support specifying resources for AM containers in SLS

2017-06-30 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070879#comment-16070879
 ] 

Haibo Chen commented on YARN-5067:
--

+1. Will commit it shortly.

> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5067.001.patch, YARN-5067.002.patch, 
> YARN-5067.003.patch
>
>
> Currently, the resource of application masters in SLS is hardcoded to 
> mem=1024, vcores=1.
> We should be able to specify AM resources in the trace input file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6753) Expose more ContainerImpl states from NM in ContainerStateProto

2017-06-30 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070876#comment-16070876
 ] 

Jian He commented on YARN-6753:
---

Yep, I kinda prefer working on the existing class. Since the container state is 
also used by the RM for its own logic, e.g. RMNodeImpl#handleContainerStatus, a 
similar kind of mapping would need to be done there as well.

On a related note, while working on YARN-1503 I also thought about exposing 
whether localization failed or succeeded. But there I need a more sophisticated 
per-resource localization status, and we would probably also need a limit on the 
number of resource objects returned, otherwise it could make getContainerStatus 
heavy.


> Expose more ContainerImpl states from NM in ContainerStateProto 
> 
>
> Key: YARN-6753
> URL: https://issues.apache.org/jira/browse/YARN-6753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Roni Burd
>Priority: Minor
>
> The current NM protobuf definition exposes a subset of the NM internal state 
> via ContainerStateProto.
> We are currently building tools that can make use of more fine-grained states 
> such as LOCALIZING, LOCALIZED, EXIT_WITH_FAILURES, etc.
> The proposal is to add more internal states to the API.
> I'm not sure whether this is considered an incompatible change or not.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6746) SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code

2017-06-30 Thread Deepti Sawhney (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepti Sawhney updated YARN-6746:
-
Attachment: YARN-6746.001.patch

Submitted the patch without the newline.
Also attaching here -->




> SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code
> ---
>
> Key: YARN-6746
> URL: https://issues.apache.org/jira/browse/YARN-6746
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Deepti Sawhney
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6746.001.patch, YARN-6746.001.patch, 
> YARN-6746.001.patch
>
>
> The function is unused.  It also appears to be broken.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6746) SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code

2017-06-30 Thread Deepti Sawhney (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepti Sawhney updated YARN-6746:
-
Attachment: YARN-6746.001.patch

Attached a new file without the extra line.

> SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code
> ---
>
> Key: YARN-6746
> URL: https://issues.apache.org/jira/browse/YARN-6746
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Deepti Sawhney
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6746.001.patch, YARN-6746.001.patch
>
>
> The function is unused.  It also appears to be broken.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6746) SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code

2017-06-30 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070855#comment-16070855
 ] 

Daniel Templeton commented on YARN-6746:


Patch looks fine.  Would you mind removing the extra newline that you added?

Looks like Jenkins is asleep at the wheel.  I'll go kick it.

> SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code
> ---
>
> Key: YARN-6746
> URL: https://issues.apache.org/jira/browse/YARN-6746
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Deepti Sawhney
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6746.001.patch
>
>
> The function is unused.  It also appears to be broken.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6758) Add elapsed time for SLS metrics

2017-06-30 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-6758:
--

 Summary: Add elapsed time for SLS metrics
 Key: YARN-6758
 URL: https://issues.apache.org/jira/browse/YARN-6758
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: scheduler-load-simulator
Affects Versions: 3.0.0-alpha3, 2.8.1
Reporter: Yufei Gu


SLS outputs many useful metrics with timestamps, but it is not easy to tell how 
much time elapsed between events; you have to do the math yourself. It would be 
nice to output the elapsed time in addition to, or instead of, the timestamp. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5067) Support specifying resources for AM containers in SLS

2017-06-30 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5067:
---
Component/s: scheduler-load-simulator

> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5067.001.patch, YARN-5067.002.patch, 
> YARN-5067.003.patch
>
>
> Currently, the resource of application masters in SLS is hardcoded to 
> mem=1024, vcores=1.
> We should be able to specify AM resources in the trace input file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5065) Umbrella JIRA of SLS fixes / improvements

2017-06-30 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5065:
---
Component/s: scheduler-load-simulator

> Umbrella JIRA of SLS fixes / improvements
> -
>
> Key: YARN-5065
> URL: https://issues.apache.org/jira/browse/YARN-5065
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler-load-simulator
>Reporter: Wangda Tan
>
> Umbrella JIRA to track SLS (scheduler load simulator) fixes and improvements.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4161) Capacity Scheduler : Assign single or multiple containers per heart beat driven by configuration

2017-06-30 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated YARN-4161:
--
Attachment: YARN-4161.002.patch

A new diff based on the previous comments:
(1) Renamed the config fields to follow the CS style;
(2) Aligned the checking with the max-offswitch one.

> Capacity Scheduler : Assign single or multiple containers per heart beat 
> driven by configuration
> 
>
> Key: YARN-4161
> URL: https://issues.apache.org/jira/browse/YARN-4161
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: capacity scheduler
>Reporter: Mayank Bansal
>Assignee: Mayank Bansal
>  Labels: oct16-medium
> Attachments: YARN-4161.002.patch, YARN-4161.patch, YARN-4161.patch.1
>
>
> Capacity Scheduler right now schedules multiple containers per heartbeat if 
> there are more resources available on the node.
> This approach works fine; however, in some cases it does not distribute the 
> load across the cluster, so cluster throughput suffers. I am adding a 
> configuration-driven feature so that we can control the number of containers 
> assigned per heartbeat.
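For illustration only, a minimal, self-contained sketch of the idea described in
that report; the class, method, and the notion of a configured per-heartbeat cap
are hypothetical stand-ins, not the actual CapacityScheduler code or
configuration keys:

{code}
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch: cap how many containers one node heartbeat may be
// assigned, driven by a configured maximum (all names are hypothetical).
final class HeartbeatAssignmentSketch {

  static List<String> assignOnHeartbeat(Iterator<String> pendingRequests,
      int availableContainers, int maxAssignmentsPerHeartbeat) {
    List<String> assigned = new ArrayList<>();
    while (pendingRequests.hasNext()
        && assigned.size() < availableContainers
        && assigned.size() < maxAssignmentsPerHeartbeat) {
      // Stop early once the configured cap is reached, even if the node still
      // has free resources, so other nodes get a share of the load.
      assigned.add(pendingRequests.next());
    }
    return assigned;
  }
}
{code}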



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6678) Committer thread crashes with IllegalStateException in async-scheduling mode of CapacityScheduler

2017-06-30 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6678:
---
Attachment: (was: YARN-6678.004.patch)

> Committer thread crashes with IllegalStateException in async-scheduling mode 
> of CapacityScheduler
> -
>
> Key: YARN-6678
> URL: https://issues.apache.org/jira/browse/YARN-6678
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-6678.001.patch, YARN-6678.002.patch, 
> YARN-6678.003.patch, YARN-6678.004.patch
>
>
> Error log:
> {noformat}
> java.lang.IllegalStateException: Trying to reserve container 
> container_e10_1495599791406_7129_01_001453 for application 
> appattempt_1495599791406_7129_01 when currently reserved container 
> container_e10_1495599791406_7123_01_001513 on node host: node0123:45454 
> #containers=40 available=... used=...
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode.reserveResource(FiCaSchedulerNode.java:81)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.reserve(FiCaSchedulerApp.java:1079)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:795)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2770)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$ResourceCommitterService.run(CapacityScheduler.java:546)
> {noformat}
> Reproduce this problem:
> 1. nm1 re-reserved app-1/container-X1 and generated reserve proposal-1
> 2. nm2 had enough resource for app-1, un-reserved app-1/container-X1 and 
> allocated app-1/container-X2
> 3. nm1 reserved app-2/container-Y
> 4. proposal-1 was accepted but threw an IllegalStateException when it was applied
> Currently, the check code for a reserve proposal in FiCaSchedulerApp#accept is 
> as follows:
> {code}
>   // Container reserved first time will be NEW, after the container
>   // accepted & confirmed, it will become RESERVED state
>   if (schedulerContainer.getRmContainer().getState()
>   == RMContainerState.RESERVED) {
> // Set reReservation == true
> reReservation = true;
>   } else {
> // When reserve a resource (state == NEW is for new container,
> // state == RUNNING is for increase container).
> // Just check if the node is not already reserved by someone
> if (schedulerContainer.getSchedulerNode().getReservedContainer()
> != null) {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("Try to reserve a container, but the node is "
> + "already reserved by another container="
> + schedulerContainer.getSchedulerNode()
> .getReservedContainer().getContainerId());
>   }
>   return false;
> }
>   }
> {code}
> The reserved container on the node of a reserve proposal is currently checked 
> only for first-time reservations.
> We should also confirm that the container reserved on this node is the same 
> container that is being re-reserved.
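For illustration only, a self-contained sketch of that stricter check; plain
strings stand in for the RMContainer/FiCaSchedulerNode objects used by the real
FiCaSchedulerApp#accept, and the method name is hypothetical:

{code}
// Illustrative model of the guard suggested above: a re-reservation is only
// valid if the node still holds the reservation the proposal was built against.
final class ReserveProposalCheck {

  static boolean canApplyReserveProposal(String nodeReservedContainerId,
      String proposalContainerId, boolean isReReservation) {
    if (isReReservation) {
      // Reject stale proposals: the node must still be reserved by this container.
      return proposalContainerId.equals(nodeReservedContainerId);
    }
    // First-time reservation: only valid if nothing is reserved on the node yet.
    return nodeReservedContainerId == null;
  }

  public static void main(String[] args) {
    // nm1's stale re-reserve proposal for container-X1 arrives after the node
    // was re-reserved for container-Y -> must be rejected.
    System.out.println(
        canApplyReserveProposal("container-Y", "container-X1", true));  // false
    System.out.println(
        canApplyReserveProposal(null, "container-X1", false));          // true
  }
}
{code}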



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6678) Committer thread crashes with IllegalStateException in async-scheduling mode of CapacityScheduler

2017-06-30 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6678:
---
Attachment: YARN-6678.004.patch

> Committer thread crashes with IllegalStateException in async-scheduling mode 
> of CapacityScheduler
> -
>
> Key: YARN-6678
> URL: https://issues.apache.org/jira/browse/YARN-6678
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-6678.001.patch, YARN-6678.002.patch, 
> YARN-6678.003.patch, YARN-6678.004.patch
>
>
> Error log:
> {noformat}
> java.lang.IllegalStateException: Trying to reserve container 
> container_e10_1495599791406_7129_01_001453 for application 
> appattempt_1495599791406_7129_01 when currently reserved container 
> container_e10_1495599791406_7123_01_001513 on node host: node0123:45454 
> #containers=40 available=... used=...
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode.reserveResource(FiCaSchedulerNode.java:81)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.reserve(FiCaSchedulerApp.java:1079)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:795)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2770)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$ResourceCommitterService.run(CapacityScheduler.java:546)
> {noformat}
> Reproduce this problem:
> 1. nm1 re-reserved app-1/container-X1 and generated reserve proposal-1
> 2. nm2 had enough resource for app-1, un-reserved app-1/container-X1 and 
> allocated app-1/container-X2
> 3. nm1 reserved app-2/container-Y
> 4. proposal-1 was accepted but threw an IllegalStateException when it was applied
> Currently, the check code for a reserve proposal in FiCaSchedulerApp#accept is 
> as follows:
> {code}
>   // Container reserved first time will be NEW, after the container
>   // accepted & confirmed, it will become RESERVED state
>   if (schedulerContainer.getRmContainer().getState()
>   == RMContainerState.RESERVED) {
> // Set reReservation == true
> reReservation = true;
>   } else {
> // When reserve a resource (state == NEW is for new container,
> // state == RUNNING is for increase container).
> // Just check if the node is not already reserved by someone
> if (schedulerContainer.getSchedulerNode().getReservedContainer()
> != null) {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("Try to reserve a container, but the node is "
> + "already reserved by another container="
> + schedulerContainer.getSchedulerNode()
> .getReservedContainer().getContainerId());
>   }
>   return false;
> }
>   }
> {code}
> The reserved container on the node of a reserve proposal is currently checked 
> only for first-time reservations.
> We should also confirm that the container reserved on this node is the same 
> container that is being re-reserved.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-06-30 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070675#comment-16070675
 ] 

Jian He edited comment on YARN-6756 at 6/30/17 8:36 PM:


I actually ran into an NPE when using the builder API: if the caller doesn't 
explicitly call the executionTypeRequest() method to set a dummy object, it 
always throws an NPE. I think we need to initialize it with a default object. 


was (Author: jianhe):
I actually run into NPE when I use the builder API, if I caller doesn't 
explicitly call executionTypeRequest() method to set a dummy object, it always 
throws NPE, I think we need to initialize it with a default object 

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Priority: Critical
>
> ContainerRequest#executionTypeRequest is initialized as null, which can cause 
> the "execTypeReq.getExecutionType()" call below to unconditionally throw an NPE.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-06-30 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070675#comment-16070675
 ] 

Jian He edited comment on YARN-6756 at 6/30/17 8:36 PM:


I actually ran into an NPE when using the builder API: if the caller doesn't 
explicitly call the executionTypeRequest() method to set a dummy object, it 
always throws an NPE. I think we need to initialize it with a default object. 


was (Author: jianhe):
I actually run into NPE when I use the builder API, if I caller doesn't call 
executionTypeRequest() method to set a dummy object, it always throws NPE, I 
think we need to initialize it with a defualt object 

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Priority: Critical
>
> ContainerRequest#executionTypeRequest is initialized as null, which can cause 
> the "execTypeReq.getExecutionType()" call below to unconditionally throw an NPE.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-06-30 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070675#comment-16070675
 ] 

Jian He commented on YARN-6756:
---

I actually ran into an NPE when using the builder API: if the caller doesn't call 
the executionTypeRequest() method to set a dummy object, it always throws an NPE. 
I think we need to initialize it with a default object. 

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Priority: Critical
>
> ContainerRequest#executionTypeRequest is initialized as null, which can cause 
> the "execTypeReq.getExecutionType()" call below to unconditionally throw an NPE.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6757) Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path

2017-06-30 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070635#comment-16070635
 ] 

Miklos Szegedi commented on YARN-6757:
--

See also the discussion in YARN-6515 on why we need this.

> Refactor the usage of 
> yarn.nodemanager.linux-container-executor.cgroups.mount-path
> --
>
> Key: YARN-6757
> URL: https://issues.apache.org/jira/browse/YARN-6757
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-6757.000.patch
>
>
> We should add the ability to specify a custom cgroup path. This is what the 
> documentation of linux-container-executor.cgroups.mount-path would look like:
> {code}
> Requested cgroup mount path. Yarn has built-in functionality to discover
> the system cgroup mount paths, so use this setting only if the discovery 
> does not work.
> This path must exist before the NodeManager is launched.
> The location can vary depending on the Linux distribution in use.
> Common locations include /sys/fs/cgroup and /cgroup.
> If cgroups are not mounted, set 
> yarn.nodemanager.linux-container-executor.cgroups.mount
> to true. In this case it specifies where the LCE should attempt to mount 
> cgroups if they are not found.
> If cgroups are accessible through lxcfs or some other file system,
> then set this path and 
> yarn.nodemanager.linux-container-executor.cgroups.mount to false.
> Yarn tries to use this path first, before any cgroup mount point 
> discovery.
> If it cannot find this directory, it falls back to searching for cgroup 
> mount points in the system.
> Only used when the LCE resources handler is set to the 
> CgroupsLCEResourcesHandler
> {code}
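For illustration only, a self-contained sketch of the lookup order that
documentation describes (try the configured path first, then fall back to
discovery); the property name is taken from the text above, while the class,
method names, and use of a plain Map instead of a Hadoop Configuration are
hypothetical stand-ins for the real NodeManager code:

{code}
import java.io.File;
import java.util.Map;

// Illustrative sketch of the documented behavior: prefer the configured
// cgroup mount path, otherwise fall back to discovering a mount point.
final class CgroupMountPathSketch {

  static final String MOUNT_PATH_KEY =
      "yarn.nodemanager.linux-container-executor.cgroups.mount-path";

  /** Returns the cgroup root to use, or null if none could be determined. */
  static File resolveCgroupRoot(Map<String, String> conf) {
    String configured = conf.get(MOUNT_PATH_KEY);
    if (configured != null) {
      File dir = new File(configured);
      if (dir.isDirectory()) {
        return dir;              // use the configured path first
      }
      // Configured directory not found: fall back to discovery, as documented.
    }
    return discoverMountPoint();
  }

  private static File discoverMountPoint() {
    // The real code would inspect the system mounts; checking the common
    // locations named in the documentation is enough for this sketch.
    for (String candidate : new String[] {"/sys/fs/cgroup", "/cgroup"}) {
      File dir = new File(candidate);
      if (dir.isDirectory()) {
        return dir;
      }
    }
    return null;
  }
}
{code}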



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6757) Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path

2017-06-30 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6757:
-
Attachment: YARN-6757.000.patch

> Refactor the usage of 
> yarn.nodemanager.linux-container-executor.cgroups.mount-path
> --
>
> Key: YARN-6757
> URL: https://issues.apache.org/jira/browse/YARN-6757
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
> Attachments: YARN-6757.000.patch
>
>
> We should add the ability to specify a custom cgroup path. This is what the 
> documentation of linux-container-executor.cgroups.mount-path would look like:
> {code}
> Requested cgroup mount path. Yarn has built-in functionality to discover
> the system cgroup mount paths, so use this setting only if the discovery 
> does not work.
> This path must exist before the NodeManager is launched.
> The location can vary depending on the Linux distribution in use.
> Common locations include /sys/fs/cgroup and /cgroup.
> If cgroups are not mounted, set 
> yarn.nodemanager.linux-container-executor.cgroups.mount
> to true. In this case it specifies where the LCE should attempt to mount 
> cgroups if they are not found.
> If cgroups are accessible through lxcfs or some other file system,
> then set this path and 
> yarn.nodemanager.linux-container-executor.cgroups.mount to false.
> Yarn tries to use this path first, before any cgroup mount point 
> discovery.
> If it cannot find this directory, it falls back to searching for cgroup 
> mount points in the system.
> Only used when the LCE resources handler is set to the 
> CgroupsLCEResourcesHandler
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6757) Refactor the usage of yarn.nodemanager.linux-container-executor.cgroups.mount-path

2017-06-30 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-6757:
-
Summary: Refactor the usage of 
yarn.nodemanager.linux-container-executor.cgroups.mount-path  (was: Refactor 
the setting yarn.nodemanager.linux-container-executor.cgroups.mount-path)

> Refactor the usage of 
> yarn.nodemanager.linux-container-executor.cgroups.mount-path
> --
>
> Key: YARN-6757
> URL: https://issues.apache.org/jira/browse/YARN-6757
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Miklos Szegedi
>Assignee: Miklos Szegedi
>Priority: Minor
>
> We should add the ability to specify a custom cgroup path. This is what the 
> documentation of linux-container-executor.cgroups.mount-path would look like:
> {code}
> Requested cgroup mount path. Yarn has built-in functionality to discover
> the system cgroup mount paths, so use this setting only if the discovery 
> does not work.
> This path must exist before the NodeManager is launched.
> The location can vary depending on the Linux distribution in use.
> Common locations include /sys/fs/cgroup and /cgroup.
> If cgroups are not mounted, set 
> yarn.nodemanager.linux-container-executor.cgroups.mount
> to true. In this case it specifies where the LCE should attempt to mount 
> cgroups if they are not found.
> If cgroups are accessible through lxcfs or some other file system,
> then set this path and 
> yarn.nodemanager.linux-container-executor.cgroups.mount to false.
> Yarn tries to use this path first, before any cgroup mount point 
> discovery.
> If it cannot find this directory, it falls back to searching for cgroup 
> mount points in the system.
> Only used when the LCE resources handler is set to the 
> CgroupsLCEResourcesHandler
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6757) Refactor the setting yarn.nodemanager.linux-container-executor.cgroups.mount-path

2017-06-30 Thread Miklos Szegedi (JIRA)
Miklos Szegedi created YARN-6757:


 Summary: Refactor the setting 
yarn.nodemanager.linux-container-executor.cgroups.mount-path
 Key: YARN-6757
 URL: https://issues.apache.org/jira/browse/YARN-6757
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.0.0-alpha4
Reporter: Miklos Szegedi
Assignee: Miklos Szegedi
Priority: Minor


We should add the ability to specify a custom cgroup path. This is what the 
documentation of linux-container-executor.cgroups.mount-path would look like:
{code}
Requested cgroup mount path. Yarn has built-in functionality to discover
the system cgroup mount paths, so use this setting only if the discovery 
does not work.

This path must exist before the NodeManager is launched.
The location can vary depending on the Linux distribution in use.
Common locations include /sys/fs/cgroup and /cgroup.

If cgroups are not mounted, set 
yarn.nodemanager.linux-container-executor.cgroups.mount
to true. In this case it specifies where the LCE should attempt to mount 
cgroups if they are not found.

If cgroups are accessible through lxcfs or some other file system,
then set this path and 
yarn.nodemanager.linux-container-executor.cgroups.mount to false.
Yarn tries to use this path first, before any cgroup mount point discovery.
If it cannot find this directory, it falls back to searching for cgroup 
mount points in the system.
Only used when the LCE resources handler is set to the 
CgroupsLCEResourcesHandler
{code}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-06-30 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070607#comment-16070607
 ] 

Arun Suresh commented on YARN-6756:
---

Yup.. that happens if the ExecutionTypeRequest is explicitly set to 'null' in the 
ContainerRequest constructor, which ideally should not happen. I think we should 
probably either add a null check to the 'sanityCheck()' method and throw an 
exception, or handle it by resetting the value to 
'ExecutionTypeRequest.newInstance()' in the ContainerRequest constructor when it 
is null.
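For illustration only, a self-contained model of the second option (defaulting a
null value in the constructor); the nested types are stand-ins so the sketch
compiles on its own, not the actual ContainerRequest or ExecutionTypeRequest
classes:

{code}
// Illustrative model: never keep a null execution-type request; fall back to a
// default so later calls such as execTypeReq.getExecutionType() cannot NPE.
final class ContainerRequestSketch {

  enum ExecType { GUARANTEED, OPPORTUNISTIC }

  static final class ExecTypeRequest {
    private final ExecType type;
    ExecTypeRequest(ExecType type) { this.type = type; }
    static ExecTypeRequest newInstance() {
      return new ExecTypeRequest(ExecType.GUARANTEED);  // default execution type
    }
    ExecType getExecutionType() { return type; }
  }

  private final ExecTypeRequest execTypeRequest;

  ContainerRequestSketch(ExecTypeRequest execTypeRequest) {
    // Reset a null argument to a default instance instead of storing it as-is.
    this.execTypeRequest =
        execTypeRequest != null ? execTypeRequest : ExecTypeRequest.newInstance();
  }

  ExecType getExecutionType() {
    return execTypeRequest.getExecutionType();  // safe even when null was passed
  }
}
{code}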

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Priority: Critical
>
> ContainerRequest#executionTypeRequest is initialized as null, which can cause 
> the "execTypeReq.getExecutionType()" call below to unconditionally throw an NPE.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6753) Expose more ContainerImpl states from NM in ContainerStateProto

2017-06-30 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070605#comment-16070605
 ] 

Jason Lowe commented on YARN-6753:
--

The approach I proposed also leverages protobufs, passing a new version field 
in the client request instead of a new detailed info field in the server 
response.  At a high level they're the same concept -- pass new information in 
a field that old code will ignore.

I personally think it's cleaner for users to extend the existing enum 
transparently to the user's code (i.e.: the client version shenanigans will be 
hidden by the yarn client layer), but it does complicate the server code since 
it has to translate the container state to the appropriate list of enums.  I 
don't feel super strongly about it either way.  If we create a separate 
detailed field, note we'll have the same dilemma if we later add a new 
container state.  Would we then have a "super detailed" state?  ;-)
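For illustration only, a self-contained sketch of the server-side translation
mentioned above, collapsing fine-grained states into the coarse states an old
client expects; both enums and the exact grouping are hypothetical, not the real
ContainerStateProto values:

{code}
// Illustrative only: how a server might map hypothetical fine-grained container
// states onto the coarse states an older client version understands.
final class ContainerStateMapping {

  enum FineState { NEW, LOCALIZING, LOCALIZED, RUNNING, EXITED_WITH_FAILURE, DONE }

  enum CoarseState { NEW, RUNNING, COMPLETE }

  /** State reported to a client that only knows the coarse enum. */
  static CoarseState toCoarse(FineState state) {
    switch (state) {
      case NEW:
      case LOCALIZING:
      case LOCALIZED:
        return CoarseState.NEW;       // container has not started running yet
      case RUNNING:
        return CoarseState.RUNNING;
      case EXITED_WITH_FAILURE:
      case DONE:
      default:
        return CoarseState.COMPLETE;  // terminal from the client's point of view
    }
  }
}
{code}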


> Expose more ContainerImpl states from NM in ContainerStateProto 
> 
>
> Key: YARN-6753
> URL: https://issues.apache.org/jira/browse/YARN-6753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Roni Burd
>Priority: Minor
>
> The current NM protobuf definition exposes a subset of the NM internal state 
> via ContainerStateProto.
> We are currently building tools that can make use of more fine-grained states 
> such as LOCALIZING, LOCALIZED, EXIT_WITH_FAILURES, etc.
> The proposal is to add more internal states to the API.
> I'm not sure whether this is considered an incompatible change or not.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-06-30 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070554#comment-16070554
 ] 

Jian He commented on YARN-6756:
---

Hi [~asuresh], can you help check this?

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Priority: Critical
>
> ContainerRequest#executionTypeRequest is initialized as null, which can cause 
> the "execTypeReq.getExecutionType()" call below to unconditionally throw an NPE.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-06-30 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-6756:
--
Priority: Critical  (was: Major)

> ContainerRequest#executionTypeRequest causes NPE
> 
>
> Key: YARN-6756
> URL: https://issues.apache.org/jira/browse/YARN-6756
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jian He
>Priority: Critical
>
> ContainerRequest#executionTypeRequest is initialized as null, which can cause 
> the "execTypeReq.getExecutionType()" call below to unconditionally throw an NPE.
> {code}
>   ResourceRequestInfo addResourceRequest(Long allocationRequestId,
>   Priority priority, String resourceName, ExecutionTypeRequest 
> execTypeReq,
>   Resource capability, T req, boolean relaxLocality,
>   String labelExpression) {
> ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
> execTypeReq.getExecutionType(), capability);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6756) ContainerRequest#executionTypeRequest causes NPE

2017-06-30 Thread Jian He (JIRA)
Jian He created YARN-6756:
-

 Summary: ContainerRequest#executionTypeRequest causes NPE
 Key: YARN-6756
 URL: https://issues.apache.org/jira/browse/YARN-6756
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He


ContainerRequest#executionTypeRequest is initialized as null, which can cause the 
"execTypeReq.getExecutionType()" call below to unconditionally throw an NPE.
{code}
  ResourceRequestInfo addResourceRequest(Long allocationRequestId,
  Priority priority, String resourceName, ExecutionTypeRequest execTypeReq,
  Resource capability, T req, boolean relaxLocality,
  String labelExpression) {
ResourceRequestInfo resourceRequestInfo = get(priority, resourceName,
execTypeReq.getExecutionType(), capability);
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6755) MiniYarnCluster should stop all the apps when shutdown

2017-06-30 Thread Jian He (JIRA)
Jian He created YARN-6755:
-

 Summary: MiniYarnCluster should stop all the apps when shutdown
 Key: YARN-6755
 URL: https://issues.apache.org/jira/browse/YARN-6755
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Jian He


MiniYarnCluster does not stop all the apps on shutdown, which leaves the AM 
processes lingering around.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5067) Support specifying resources for AM containers in SLS

2017-06-30 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070504#comment-16070504
 ] 

Haibo Chen commented on YARN-5067:
--

That's weird. I'll double check and commit if I see no problem.

> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5067.001.patch, YARN-5067.002.patch, 
> YARN-5067.003.patch
>
>
> Currently, the resource of application masters in SLS is hardcoded to 
> mem=1024, vcores=1.
> We should be able to specify AM resources in the trace input file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-06-30 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne reopened YARN-2113:
--

Reopening so the branch-2 pre-commit can run.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Fix For: 3.0.0-alpha4
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, 
> YARN-2113.0019.patch, YARN-2113.apply.onto.0012.ericp.patch, 
> YARN-2113.branch-2.0019.patch, YARN-2113.branch-2.0020.patch, 
> YARN-2113.branch-2.8.0019.patch, YARN-2113.branch-2.8.0020.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.






[jira] [Comment Edited] (YARN-6753) Expose more ContainerImpl states from NM in ContainerStateProto

2017-06-30 Thread Roni Burd (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070427#comment-16070427
 ] 

Roni Burd edited comment on YARN-6753 at 6/30/17 5:13 PM:
--

[~jlowe]: So the thought is to add a version number to the request, call the 
existing one v1 and make it the default in protobuf, and if clients ask for v2, 
then pass the extra params.

Another way of doing it is to add this as a "detailed state": the old states 
remain as-is, but anyone interested in the details can query the state further. 
This takes advantage of protobuf instead of needing to add a new version 
field.

Any other thoughts?


was (Author: roniburd):
[~jlowe] : Makes sense. So the thought is to add a version number to the 
request, call the existing one v1 and make it the default in protobuf, and if 
clients ask for v2, then pass the extra params.

Any other concerns?

> Expose more ContainerImpl states from NM in ContainerStateProto 
> 
>
> Key: YARN-6753
> URL: https://issues.apache.org/jira/browse/YARN-6753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Roni Burd
>Priority: Minor
>
> The current NM protobuf definition exposes a subset of the NM internal state 
> via ContainerStateProto.
> We are currently building tools that can make use of more fine-grained states 
> like LOCALIZING, LOCALIZED, EXIT_WITH_FAILURES, etc.
> The proposal is to add more internal states to the API.
> I'm not sure if this is considered an incompatible change or not.






[jira] [Commented] (YARN-6753) Expose more ContainerImpl states from NM in ContainerStateProto

2017-06-30 Thread Roni Burd (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070427#comment-16070427
 ] 

Roni Burd commented on YARN-6753:
-

[~jlowe] : Makes sense. So the thought is to add a version number to the 
request, call the existing one v1 and make it the default in protobuf, and if 
clients ask for v2, then pass the extra params.

Any other concerns?

> Expose more ContainerImpl states from NM in ContainerStateProto 
> 
>
> Key: YARN-6753
> URL: https://issues.apache.org/jira/browse/YARN-6753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Roni Burd
>Priority: Minor
>
> The current NM protobuf definition exposes a subset of the NM internal state 
> via ContainerStateProto.
> We are currently building tools that can make use of more fine-grained states 
> like LOCALIZING, LOCALIZED, EXIT_WITH_FAILURES, etc.
> The proposal is to add more internal states to the API.
> I'm not sure if this is considered an incompatible change or not.






[jira] [Commented] (YARN-6749) TestAppSchedulingInfo.testPriorityAccounting fails consistently

2017-06-30 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070421#comment-16070421
 ] 

Bibin A Chundatt commented on YARN-6749:


Thank you [~ebadger] for reporting this and [~Naganarasimha] for the patch.
Committed to branch-2.8.

> TestAppSchedulingInfo.testPriorityAccounting fails consistently
> ---
>
> Key: YARN-6749
> URL: https://issues.apache.org/jira/browse/YARN-6749
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Eric Badger
>Assignee: Naganarasimha G R
> Attachments: YARN-6749-branch-2.8.001.patch
>
>
> Broken by YARN-6467






[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-06-30 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070420#comment-16070420
 ] 

Andrew Wang commented on YARN-2113:
---

Yea, I've sent out RC0 already, feel free to re-open :)

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Fix For: 3.0.0-alpha4
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, 
> YARN-2113.0019.patch, YARN-2113.apply.onto.0012.ericp.patch, 
> YARN-2113.branch-2.0019.patch, YARN-2113.branch-2.0020.patch, 
> YARN-2113.branch-2.8.0019.patch, YARN-2113.branch-2.8.0020.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.






[jira] [Commented] (YARN-6749) TestAppSchedulingInfo.testPriorityAccounting fails consistently

2017-06-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070378#comment-16070378
 ] 

Hadoop QA commented on YARN-6749:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
50s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 15s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_131 Failed junit tests | 
hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher |
|   | hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:d946387 |
| JIRA Issue | YARN-6749 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875081/YARN-6749-branch-2.8.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4432e705b567 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2.8 / 2956c58 |
| Default Java | 1.7.0_131 |

[jira] [Commented] (YARN-5067) Support specifying resources for AM containers in SLS

2017-06-30 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070372#comment-16070372
 ] 

Yufei Gu commented on YARN-5067:


Thanks [~haibo.chen] for the review. I didn't change the Logger in AMSimulator and 
MRAMSimulator in the patch; they are static.

> Support specifying resources for AM containers in SLS
> -
>
> Key: YARN-5067
> URL: https://issues.apache.org/jira/browse/YARN-5067
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Yufei Gu
> Attachments: YARN-5067.001.patch, YARN-5067.002.patch, 
> YARN-5067.003.patch
>
>
> Currently the resource of application masters in SLS is hardcoded to mem=1024, vcores=1.
> We should be able to specify AM resources from the trace input file.






[jira] [Commented] (YARN-6749) TestAppSchedulingInfo.testPriorityAccounting fails consistently

2017-06-30 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070353#comment-16070353
 ] 

Bibin A Chundatt commented on YARN-6749:


+1, will commit after Jenkins is finished.

> TestAppSchedulingInfo.testPriorityAccounting fails consistently
> ---
>
> Key: YARN-6749
> URL: https://issues.apache.org/jira/browse/YARN-6749
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Eric Badger
>Assignee: Naganarasimha G R
> Attachments: YARN-6749-branch-2.8.001.patch
>
>
> Broken by YARN-6467






[jira] [Updated] (YARN-6746) SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code

2017-06-30 Thread Deepti Sawhney (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepti Sawhney updated YARN-6746:
-
Attachment: YARN-6746.001.patch

Attached patch file: YARN-6746.001.patch

> SchedulerUtils.checkResourceRequestMatchingNodePartition() is dead code
> ---
>
> Key: YARN-6746
> URL: https://issues.apache.org/jira/browse/YARN-6746
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: scheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Daniel Templeton
>Assignee: Deepti Sawhney
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6746.001.patch
>
>
> The function is unused.  It also appears to be broken.






[jira] [Updated] (YARN-6708) Nodemanager container crash after ext3 folder limit

2017-06-30 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6708:
---
Attachment: YARN-6708.006.patch

> Nodemanager container crash after ext3 folder limit
> ---
>
> Key: YARN-6708
> URL: https://issues.apache.org/jira/browse/YARN-6708
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-6708.001.patch, YARN-6708.002.patch, 
> YARN-6708.003.patch, YARN-6708.004.patch, YARN-6708.005.patch, 
> YARN-6708.006.patch
>
>
> Configure the umask as *027* for the nodemanager service user
> and {{yarn.nodemanager.local-cache.max-files-per-directory}} as {{40}}. After 
> 4 *private* dir localizations, the next directory will be *0/14*
> Local Directory cache manager 
> {code}
> vm2:/opt/hadoop/release/data/nmlocal/usercache/mapred/filecache # l
> total 28
> drwx--x--- 7 mapred hadoop 4096 Jun 10 14:35 ./
> drwxr-s--- 4 mapred hadoop 4096 Jun 10 12:07 ../
> drwxr-x--- 3 mapred users  4096 Jun 10 14:36 0/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:15 10/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:22 11/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:27 12/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:31 13/
> {code}
> *drwxr-x---* 3 mapred users  4096 Jun 10 14:36 0/ is only *750*
> The nodemanager user will not be able to check whether the localization path exists or not.
> {{LocalResourcesTrackerImpl}}
> {code}
> case REQUEST:
>   if (rsrc != null && (!isResourcePresent(rsrc))) {
> LOG.info("Resource " + rsrc.getLocalPath()
> + " is missing, localizing it again");
> removeResource(req);
> rsrc = null;
>   }
>   if (null == rsrc) {
> rsrc = new LocalizedResource(req, dispatcher);
> localrsrc.put(req, rsrc);
>   }
>   break;
> {code}
> *isResourcePresent* will always return false, and the same resource will be 
> localized again under {{0}} with the next unique number
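For context, a minimal sketch of the failure mode described above, assuming a plain 
java.io.File existence check; this is illustrative and not the actual 
LocalResourcesTrackerImpl#isResourcePresent code:

{code}
// Illustrative only: shows why an existence check run as the NM service user fails
// when the parent cache directory (e.g. .../filecache/0) is 0750 and owned by the
// container user, so the NM user cannot traverse into it.
import java.io.File;

public class ResourcePresenceCheck {
  /** Returns false for a perfectly valid resource if the parent dir is not traversable. */
  public static boolean isPresent(String localizedPath) {
    File f = new File(localizedPath);
    // File.exists() cannot distinguish "missing" from "not visible to this user",
    // so a 750 parent directory makes the resource look absent and it gets
    // localized again under a new unique number.
    return f.exists();
  }
}
{code}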






[jira] [Commented] (YARN-6749) TestAppSchedulingInfo.testPriorityAccounting fails consistently

2017-06-30 Thread Eric Badger (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070273#comment-16070273
 ] 

Eric Badger commented on YARN-6749:
---

+1 (non-binding) on the 2.8 patch. Not sure why Jenkins hasn't run yet. Might 
need someone to kick it.

> TestAppSchedulingInfo.testPriorityAccounting fails consistently
> ---
>
> Key: YARN-6749
> URL: https://issues.apache.org/jira/browse/YARN-6749
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Eric Badger
>Assignee: Naganarasimha G R
> Attachments: YARN-6749-branch-2.8.001.patch
>
>
> Broken by YARN-6467






[jira] [Commented] (YARN-6753) Expose more ContainerImpl states from NM in ContainerStateProto

2017-06-30 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070187#comment-16070187
 ] 

Jason Lowe commented on YARN-6753:
--

Technically adding LOCALIZING and LOCALIZED is going to break some clients.  
They currently only expect NEW, RUNNING, and COMPLETE.  As soon as we start 
returning LOCALIZING/LOCALIZED instead of RUNNING that can break clients who 
are not expecting the new value.

To be completely backwards compatible the request for container state needs to 
include client version information so we know which enumerations the client 
expects.  If it is an older client then the NM can map the new states to the 
older enums.  For example, if a client asks for container state and doesn't 
include any client version info then we know we need to map 
LOCALIZED/LOCALIZING to RUNNING and EXIT_WITH_FAILURES to COMPLETE.
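For illustration, a rough sketch of that mapping using hypothetical enum names (this is 
not an existing YARN API): a v1 client keeps seeing only NEW/RUNNING/COMPLETE, while a 
v2 client gets the fine-grained states.

{code}
// Illustrative only: enum names are assumptions, not the actual ContainerStateProto values.
public final class ContainerStateCompat {

  /** Superset of states a newer (v2) client could receive. */
  enum FineGrainedState { NEW, LOCALIZING, LOCALIZED, RUNNING, EXIT_WITH_FAILURES, COMPLETE }

  /** The states a legacy (v1) client expects today. */
  enum LegacyState { NEW, RUNNING, COMPLETE }

  /** Map fine-grained states back to the coarse states an old client understands. */
  static LegacyState toLegacy(FineGrainedState s) {
    switch (s) {
      case NEW:
        return LegacyState.NEW;
      case LOCALIZING:
      case LOCALIZED:
      case RUNNING:
        return LegacyState.RUNNING;
      case EXIT_WITH_FAILURES:
      case COMPLETE:
      default:
        return LegacyState.COMPLETE;
    }
  }
}
{code}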

> Expose more ContainerImpl states from NM in ContainerStateProto 
> 
>
> Key: YARN-6753
> URL: https://issues.apache.org/jira/browse/YARN-6753
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Roni Burd
>Priority: Minor
>
> The current NM protobuf definition exposes a subset of the NM internal state 
> via ContainerStateProto.
> We are currently building tools that can make use of more fine-grained states 
> like LOCALIZING, LOCALIZED, EXIT_WITH_FAILURES, etc.
> The proposal is to add more internal states to the API.
> I'm not sure if this is considered an incompatible change or not.






[jira] [Commented] (YARN-6678) Committer thread crashes with IllegalStateException in async-scheduling mode of CapacityScheduler

2017-06-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070137#comment-16070137
 ] 

Hadoop QA commented on YARN-6678:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 21s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 21 unchanged - 0 fixed = 23 total (was 21) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 44m 46s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6678 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875237/YARN-6678.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux b35be6cf30b0 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3be2659 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16286/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16286/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16286/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16286/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Committer thread crashes with IllegalStateException in async-scheduling mode 
> of CapacityScheduler
> 

[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-06-30 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070130#comment-16070130
 ] 

Eric Payne commented on YARN-2113:
--

bq. Resolving this for now so I can run releasenotes generation.
[~andrew.wang], are you done generating the release notes? May I please re-open 
this so that the branch-2 pre-commit can also run? (The branch-2.8 pre-commit 
ran, but not branch-2.)

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Fix For: 3.0.0-alpha4
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, 
> YARN-2113.0019.patch, YARN-2113.apply.onto.0012.ericp.patch, 
> YARN-2113.branch-2.0019.patch, YARN-2113.branch-2.0020.patch, 
> YARN-2113.branch-2.8.0019.patch, YARN-2113.branch-2.8.0020.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.






[jira] [Commented] (YARN-2113) Add cross-user preemption within CapacityScheduler's leaf-queue

2017-06-30 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070101#comment-16070101
 ] 

Eric Payne commented on YARN-2113:
--

bq. +1 on branch-2 and branch-2.7 patches.
Thanks [~sunilg]. I think you meant branch-2.8, not branch-2.7 ;-)

The failed tests passed for me in my local environment, so I assert that they 
are not related to this patch.

> Add cross-user preemption within CapacityScheduler's leaf-queue
> ---
>
> Key: YARN-2113
> URL: https://issues.apache.org/jira/browse/YARN-2113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Sunil G
> Fix For: 3.0.0-alpha4
>
> Attachments: IntraQueue Preemption-Impact Analysis.pdf, 
> TestNoIntraQueuePreemptionIfBelowUserLimitAndDifferentPrioritiesWithExtraUsers.txt,
>  YARN-2113.0001.patch, YARN-2113.0002.patch, YARN-2113.0003.patch, 
> YARN-2113.0004.patch, YARN-2113.0005.patch, YARN-2113.0006.patch, 
> YARN-2113.0007.patch, YARN-2113.0008.patch, YARN-2113.0009.patch, 
> YARN-2113.0010.patch, YARN-2113.0011.patch, YARN-2113.0012.patch, 
> YARN-2113.0013.patch, YARN-2113.0014.patch, YARN-2113.0015.patch, 
> YARN-2113.0016.patch, YARN-2113.0017.patch, YARN-2113.0018.patch, 
> YARN-2113.0019.patch, YARN-2113.apply.onto.0012.ericp.patch, 
> YARN-2113.branch-2.0019.patch, YARN-2113.branch-2.0020.patch, 
> YARN-2113.branch-2.8.0019.patch, YARN-2113.branch-2.8.0020.patch, YARN-2113 
> Intra-QueuePreemption Behavior.pdf, YARN-2113.v0.patch
>
>
> Preemption today only works across queues and moves around resources across 
> queues per demand and usage. We should also have user-level preemption within 
> a queue, to balance capacity across users in a predictable manner.






[jira] [Commented] (YARN-6708) Nodemanager container crash after ext3 folder limit

2017-06-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070033#comment-16070033
 ] 

Hadoop QA commented on YARN-6708:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 17s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 7 new + 55 unchanged - 0 fixed = 62 total (was 55) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
14s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 40s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6708 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875230/YARN-6708.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3da39720b96d 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3be2659 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16285/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16285/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16285/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16285/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Nodemanager container crash after ext3 folder limit
> ---
>
> Key: 

[jira] [Commented] (YARN-6708) Nodemanager container crash after ext3 folder limit

2017-06-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16070032#comment-16070032
 ] 

Hadoop QA commented on YARN-6708:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 7 new + 54 unchanged - 0 fixed = 61 total (was 54) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m  
8s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6708 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875230/YARN-6708.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dc368714cd74 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3be2659 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16284/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16284/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16284/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16284/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Nodemanager container crash after ext3 folder limit
> ---
>
> Key: 

[jira] [Updated] (YARN-6678) Committer thread crashes with IllegalStateException in async-scheduling mode of CapacityScheduler

2017-06-30 Thread Tao Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Yang updated YARN-6678:
---
Attachment: YARN-6678.004.patch

Thanks [~sunilg] for your time.
As you mentioned, this new patch adds a timeout for every where clause, adds the 
nodeId to the debug info, and calls MockRM#stop at the end of the new test case. 
TestCapacitySchedulerAsyncScheduling passes now.
Sorry for the delay in updating this patch.

> Committer thread crashes with IllegalStateException in async-scheduling mode 
> of CapacityScheduler
> -
>
> Key: YARN-6678
> URL: https://issues.apache.org/jira/browse/YARN-6678
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha3
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-6678.001.patch, YARN-6678.002.patch, 
> YARN-6678.003.patch, YARN-6678.004.patch
>
>
> Error log:
> {noformat}
> java.lang.IllegalStateException: Trying to reserve container 
> container_e10_1495599791406_7129_01_001453 for application 
> appattempt_1495599791406_7129_01 when currently reserved container 
> container_e10_1495599791406_7123_01_001513 on node host: node0123:45454 
> #containers=40 available=... used=...
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerNode.reserveResource(FiCaSchedulerNode.java:81)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.reserve(FiCaSchedulerApp.java:1079)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.apply(FiCaSchedulerApp.java:795)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.tryCommit(CapacityScheduler.java:2770)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler$ResourceCommitterService.run(CapacityScheduler.java:546)
> {noformat}
> To reproduce this problem:
> 1. nm1 re-reserved app-1/container-X1 and generated reserve proposal-1
> 2. nm2 had enough resources for app-1, un-reserved app-1/container-X1 and 
> allocated app-1/container-X2
> 3. nm1 reserved app-2/container-Y
> 4. proposal-1 was accepted but threw an IllegalStateException when it was applied
> Currently the check code for a reserve proposal in FiCaSchedulerApp#accept is as 
> follows:
> {code}
>   // Container reserved first time will be NEW, after the container
>   // accepted & confirmed, it will become RESERVED state
>   if (schedulerContainer.getRmContainer().getState()
>   == RMContainerState.RESERVED) {
> // Set reReservation == true
> reReservation = true;
>   } else {
> // When reserve a resource (state == NEW is for new container,
> // state == RUNNING is for increase container).
> // Just check if the node is not already reserved by someone
> if (schedulerContainer.getSchedulerNode().getReservedContainer()
> != null) {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("Try to reserve a container, but the node is "
> + "already reserved by another container="
> + schedulerContainer.getSchedulerNode()
> .getReservedContainer().getContainerId());
>   }
>   return false;
> }
>   }
> {code}
> The reserved container on the node of the reserve proposal is checked only 
> for first-reserve containers.
> We should also confirm that the reserved container on this node is equal to the 
> re-reserved container.
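A minimal sketch of the extra check suggested above, comparing the node's currently 
reserved container against the container the proposal wants to re-reserve; this is 
illustrative only, not the actual YARN-6678 patch:

{code}
// Illustrative only: helper form of the check, not the FiCaSchedulerApp#accept code.
import org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.RMContainer;

public final class ReReservationCheck {
  /**
   * @param nodeReserved   container currently reserved on the node (may be null)
   * @param proposalTarget container the reserve proposal wants to (re-)reserve
   * @return true if the re-reserve proposal is still valid for this node
   */
  public static boolean isValidReReservation(RMContainer nodeReserved,
      RMContainer proposalTarget) {
    // Reject the stale proposal when the node is no longer reserved, or is now
    // reserved by a different container; otherwise applying it would trigger
    // the IllegalStateException seen in the error log.
    return nodeReserved != null
        && nodeReserved.getContainerId().equals(proposalTarget.getContainerId());
  }
}
{code}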






[jira] [Updated] (YARN-6708) Nodemanager container crash after ext3 folder limit

2017-06-30 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6708:
---
Attachment: YARN-6708.005.patch

Attaching a patch after addressing the comments.

> Nodemanager container crash after ext3 folder limit
> ---
>
> Key: YARN-6708
> URL: https://issues.apache.org/jira/browse/YARN-6708
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-6708.001.patch, YARN-6708.002.patch, 
> YARN-6708.003.patch, YARN-6708.004.patch, YARN-6708.005.patch
>
>
> Configure the umask as *027* for the nodemanager service user
> and {{yarn.nodemanager.local-cache.max-files-per-directory}} as {{40}}. After 
> 4 *private* dir localizations, the next directory will be *0/14*
> Local Directory cache manager 
> {code}
> vm2:/opt/hadoop/release/data/nmlocal/usercache/mapred/filecache # l
> total 28
> drwx--x--- 7 mapred hadoop 4096 Jun 10 14:35 ./
> drwxr-s--- 4 mapred hadoop 4096 Jun 10 12:07 ../
> drwxr-x--- 3 mapred users  4096 Jun 10 14:36 0/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:15 10/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:22 11/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:27 12/
> drwxr-xr-x 3 mapred users  4096 Jun 10 12:31 13/
> {code}
> *drwxr-x---* 3 mapred users  4096 Jun 10 14:36 0/ is only *750*
> The nodemanager user will not be able to check whether the localization path exists or not.
> {{LocalResourcesTrackerImpl}}
> {code}
> case REQUEST:
>   if (rsrc != null && (!isResourcePresent(rsrc))) {
> LOG.info("Resource " + rsrc.getLocalPath()
> + " is missing, localizing it again");
> removeResource(req);
> rsrc = null;
>   }
>   if (null == rsrc) {
> rsrc = new LocalizedResource(req, dispatcher);
> localrsrc.put(req, rsrc);
>   }
>   break;
> {code}
> *isResourcePresent* will always return false, and the same resource will be 
> localized again under {{0}} with the next unique number






[jira] [Updated] (YARN-6748) Expose scheduling policy for each queue in FairScheduler Web UI

2017-06-30 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-6748:

Summary: Expose scheduling policy for each queue in FairScheduler Web UI  
(was: Expose scheduling policy for each queue in FairScheduler)

> Expose scheduling policy for each queue in FairScheduler Web UI
> ---
>
> Key: YARN-6748
> URL: https://issues.apache.org/jira/browse/YARN-6748
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Akira Ajisaka
>
> YARN-5929 added queue scheduling policy to jmx, so it's good to add 
> scheduling policy to WebUI as well.






[jira] [Updated] (YARN-6748) Expose scheduling policy for each queue in FairScheduler

2017-06-30 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated YARN-6748:

Description: YARN-5929 added queue scheduling policy to jmx, so it's good 
to add scheduling policy to WebUI as well.  (was: The scheduling policy for 
FairScheduler cannot be obtained via CLI or WebUI, or metrics. Therefore we 
cannot recognize that the configuration is reflected.)

> Expose scheduling policy for each queue in FairScheduler
> 
>
> Key: YARN-6748
> URL: https://issues.apache.org/jira/browse/YARN-6748
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Akira Ajisaka
>
> YARN-5929 added queue scheduling policy to jmx, so it's good to add 
> scheduling policy to WebUI as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6748) Expose scheduling policy for each queue in FairScheduler

2017-06-30 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069884#comment-16069884
 ] 

Akira Ajisaka commented on YARN-6748:
-

Thanks [~yufeigu] for the information. I'll update the title and description.

> Expose scheduling policy for each queue in FairScheduler
> 
>
> Key: YARN-6748
> URL: https://issues.apache.org/jira/browse/YARN-6748
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler
>Reporter: Akira Ajisaka
>
> The scheduling policy for FairScheduler cannot be obtained via the CLI, the 
> Web UI, or metrics, so we cannot verify that the configuration has taken effect.






[jira] [Commented] (YARN-6742) Minor mistakes in "The YARN Service Registry" docs

2017-06-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069702#comment-16069702
 ] 

Hadoop QA commented on YARN-6742:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6742 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12875198/YARN-6742-002.patch |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 8947f7eda523 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3be2659 |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16283/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Minor mistakes in "The YARN Service Registry" docs
> --
>
> Key: YARN-6742
> URL: https://issues.apache.org/jira/browse/YARN-6742
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Attachments: YARN-6742-001.patch, YARN-6742-002.patch
>
>
> There are minor mistakes in The YARN Service Registry docs.






[jira] [Commented] (YARN-6742) Minor mistakes in "The YARN Service Registry" docs

2017-06-30 Thread Yeliang Cang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069681#comment-16069681
 ] 

Yeliang Cang commented on YARN-6742:


Hi [~shaneku...@gmail.com], I have submitted a new patch. Please check it out, 
thank you!

> Minor mistakes in "The YARN Service Registry" docs
> --
>
> Key: YARN-6742
> URL: https://issues.apache.org/jira/browse/YARN-6742
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Attachments: YARN-6742-001.patch, YARN-6742-002.patch
>
>
> There are minor mistakes in The YARN Service Registry docs.






[jira] [Updated] (YARN-6742) Minor mistakes in "The YARN Service Registry" docs

2017-06-30 Thread Yeliang Cang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yeliang Cang updated YARN-6742:
---
Attachment: YARN-6742-002.patch

> Minor mistakes in "The YARN Service Registry" docs
> --
>
> Key: YARN-6742
> URL: https://issues.apache.org/jira/browse/YARN-6742
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha3
>Reporter: Yeliang Cang
>Assignee: Yeliang Cang
>Priority: Trivial
> Attachments: YARN-6742-001.patch, YARN-6742-002.patch
>
>
> There are minor mistakes in The YARN Service Registry docs.






[jira] [Commented] (YARN-6720) Support updating FPGA related constraint node label after FPGA device re-configuration

2017-06-30 Thread Zhankun Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16069618#comment-16069618
 ] 

Zhankun Tang commented on YARN-6720:


[~wangda], maybe it's my fault. Although the FPGA device reconfiguration procedure 
is fast, the downloading may take a considerable amount of time, which should be avoided. 
That's the key problem this JIRA wants to solve.

> Support updating FPGA related constraint node label after FPGA device 
> re-configuration
> --
>
> Key: YARN-6720
> URL: https://issues.apache.org/jira/browse/YARN-6720
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
> Attachments: 
> Storing-and-Updating-extra-FPGA-resource-attributes-in-hdfs_v1.pdf
>
>
> In order to provide globally optimal scheduling for mutable FPGA resources, it 
> seems an easy and direct way to utilize constraint node labels (YARN-3409) 
> instead of extending the global scheduler (YARN-3926) to match both resource 
> count and attributes.
> The rough idea is that the AM sets the constraint node label expression to 
> request containers on the nodes whose FPGA devices have the matching IP, and 
> the NM resource handler then updates the node constraint label if there is an 
> FPGA device re-configuration.
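A minimal sketch of the AM side of this idea, assuming a hypothetical constraint node 
label name published by the NM; the label value and resource sizes below are 
illustrative only:

{code}
// Illustrative only: "FPGA_IP_md5abc" is a made-up label name, not part of YARN-6720.
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

public class FpgaLabelRequestExample {
  /** Build a container request pinned to nodes whose FPGA devices carry the matching IP. */
  public static ResourceRequest buildRequest() {
    ResourceRequest req = ResourceRequest.newInstance(
        Priority.newInstance(1),
        ResourceRequest.ANY,            // no node/rack preference beyond the label
        Resource.newInstance(4096, 2),  // example container size
        1);                             // one container
    // Hypothetical constraint node label the NM would update after re-configuration.
    req.setNodeLabelExpression("FPGA_IP_md5abc");
    return req;
  }
}
{code}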


