[jira] [Updated] (YARN-6918) Remove acls after queue delete to avoid memory leak

2017-11-17 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-6918:
---
Target Version/s: 2.9.1, 3.0.1  (was: 2.9.1)

> Remove acls after queue delete to avoid memory leak
> ---
>
> Key: YARN-6918
> URL: https://issues.apache.org/jira/browse/YARN-6918
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6918.001.patch
>
>
> ACLs for a deleted queue need to be removed from allAcls to avoid a memory 
> leak (Priority, YarnAuthorizer).
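A minimal sketch of the leak and the proposed cleanup; the map name allAcls 
comes from the description above, while the types and the removeQueueAcls 
helper are assumptions for illustration, not the actual patch:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: if entries keyed by queue path are never removed from
// allAcls when a queue is deleted, the map grows without bound.
public class QueueAclStore {
  // queue path -> (ACL type -> ACL expression), mirroring the allAcls map
  // mentioned in the description
  private final Map<String, Map<String, String>> allAcls =
      new ConcurrentHashMap<>();

  public void setQueueAcls(String queuePath, Map<String, String> acls) {
    allAcls.put(queuePath, acls);
  }

  // The gist of the fix: drop the entry when the queue is deleted so the
  // authorizer no longer retains ACLs for queues that no longer exist.
  public void removeQueueAcls(String queuePath) {
    allAcls.remove(queuePath);
  }
}
{code}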






[jira] [Created] (YARN-7533) Documentation for absolute resource support in CS

2017-11-17 Thread Sunil G (JIRA)
Sunil G created YARN-7533:
-

 Summary: Documentation for absolute resource support in CS
 Key: YARN-7533
 URL: https://issues.apache.org/jira/browse/YARN-7533
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacity scheduler
Reporter: Sunil G
Assignee: Sunil G









[jira] [Created] (YARN-7532) NM rolling restart causing application exceed max attempt

2017-11-17 Thread Bill Liu (JIRA)
Bill Liu created YARN-7532:
--

 Summary: NM rolling restart causing application exceed max attempt 
 Key: YARN-7532
 URL: https://issues.apache.org/jira/browse/YARN-7532
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bill Liu


I set my application's max attempts to 1 and then did a rolling restart of the 
NodeManager on the machine where the ApplicationMaster was running.
The attempt number became 2 and a new ApplicationMaster was launched on 
another machine.
It seems that when a NodeManager's containers are recovered, the max-attempts 
limit is not respected.
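For reference, the attempt limit involved here is the smaller of the 
cluster-wide cap and the per-application value. A minimal sketch of setting 
both is below; it is illustrative, not a reproduction of the reporter's setup:

{code}
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class MaxAttemptsExample {
  public static void configure(YarnConfiguration conf,
                               ApplicationSubmissionContext appContext) {
    // Cluster-wide cap on AM attempts
    // (yarn-site.xml key: yarn.resourcemanager.am.max-attempts)
    conf.setInt(YarnConfiguration.RM_AM_MAX_ATTEMPTS, 1);

    // Per-application limit, as described in this report; the effective
    // limit is min(cluster cap, per-application value).
    appContext.setMaxAppAttempts(1);
  }
}
{code}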






[jira] [Commented] (YARN-7524) Remove unused FairSchedulerEventLog

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257883#comment-16257883
 ] 

Hadoop QA commented on YARN-7524:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m  
2s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 31s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 60 unchanged - 11 fixed = 61 total (was 71) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 20s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
 |
| Timed out junit tests | 
org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands |
|   | 
org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA |
|   | 
org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7524 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898118/YARN-7524.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ddf791dde6f9 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0940e4f |
| maven | version: Apache Maven 3.3.9 |
| 

[jira] [Commented] (YARN-6128) Add support for AMRMProxy HA

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257868#comment-16257868
 ] 

Hudson commented on YARN-6128:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13255 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13255/])
YARN-6128. Add support for AMRMProxy HA. (Botong Huang via Subru). (subru: rev 
d5f66888b8d767ee6706fab9950c194a1bf26d32)
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/utils/FederationRegistryClient.java
* (add) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/federation/utils/TestFederationRegistryClient.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/AMRMClientUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/pom.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/AMRMProxyApplicationContextImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/AMRMProxyApplicationContext.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site/src/site/markdown/Federation.md
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/uam/TestUnmanagedApplicationManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/AMRMProxyService.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedApplicationManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/ContainerManagerImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/src/test/java/org/apache/hadoop/yarn/server/MiniYARNCluster.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/test/java/org/apache/hadoop/yarn/server/MockResourceManagerFacade.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/BaseAMRMProxyTest.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/yarn-default.xml
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/uam/UnmanagedAMPoolManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestFederationInterceptor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestableFederationInterceptor.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/TestAMRMProxyService.java


> Add support for AMRMProxy HA
> 
>
> Key: YARN-6128
> URL: https://issues.apache.org/jira/browse/YARN-6128
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: amrmproxy, nodemanager
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Attachments: YARN-6128.v0.patch, YARN-6128.v1.patch, 
> YARN-6128.v1.patch, YARN-6128.v10.patch, YARN-6128.v10.patch, 
> YARN-6128.v2.patch, YARN-6128.v3.patch, YARN-6128.v3.patch, 
> YARN-6128.v4.patch, YARN-6128.v5.patch, YARN-6128.v6.patch, 
> YARN-6128.v7.patch, YARN-6128.v8.patch, YARN-6128.v9.patch
>
>
> YARN-556 added the ability for RM failover without losing any running 
> applications. In a Federated YARN environment, there's additional state in 
> the {{AMRMProxy}} to allow for spanning across multiple sub-clusters, so we 
> need to enhance {{AMRMProxy}} to support HA.




[jira] [Commented] (YARN-6128) Add support for AMRMProxy HA

2017-11-17 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257858#comment-16257858
 ] 

Subru Krishnan commented on YARN-6128:
--

Thanks [~botong]. I have committed this to trunk but couldn't cherry-pick it 
cleanly to branch-2. Can you please provide a branch-2 patch?

> Add support for AMRMProxy HA
> 
>
> Key: YARN-6128
> URL: https://issues.apache.org/jira/browse/YARN-6128
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: amrmproxy, nodemanager
>Reporter: Subru Krishnan
>Assignee: Botong Huang
> Attachments: YARN-6128.v0.patch, YARN-6128.v1.patch, 
> YARN-6128.v1.patch, YARN-6128.v10.patch, YARN-6128.v10.patch, 
> YARN-6128.v2.patch, YARN-6128.v3.patch, YARN-6128.v3.patch, 
> YARN-6128.v4.patch, YARN-6128.v5.patch, YARN-6128.v6.patch, 
> YARN-6128.v7.patch, YARN-6128.v8.patch, YARN-6128.v9.patch
>
>
> YARN-556 added the ability for RM failover without losing any running 
> applications. In a Federated YARN environment, there's additional state in 
> the {{AMRMProxy}} to allow for spanning across multiple sub-clusters, so we 
> need to enhance {{AMRMProxy}} to support HA.






[jira] [Commented] (YARN-7531) ResourceRequest.equal does not check ExecutionTypeRequest.enforceExecutionType()

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257846#comment-16257846
 ] 

Hadoop QA commented on YARN-7531:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m  
2s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 19s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 59m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7531 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898293/YARN-7531.prelim.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 75c44754e437 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0940e4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/18569/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18569/testReport/ |
| Max. process+thread count | 402 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 

[jira] [Commented] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues

2017-11-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257798#comment-16257798
 ] 

Wangda Tan commented on YARN-6124:
--

[~Zian Chen], thanks for updating the patch. In general I think the change 
looks correct. Could you please investigate the failed tests? Some of them look 
related to your change.

Also, since this patch is not a preliminary patch, you don't need to include 
"wip" in future updates unless you consider a patch preliminary and just want 
some feedback.

> Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin 
> -refreshQueues
> -
>
> Key: YARN-6124
> URL: https://issues.apache.org/jira/browse/YARN-6124
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Zian Chen
> Attachments: YARN-6124.wip.1.patch, YARN-6124.wip.2.patch, 
> YARN-6124.wip.3.patch
>
>
> Currently, enabling / disabling / updating the SchedulingEditPolicy config 
> requires an RM restart. This is inconvenient when an admin wants to make 
> changes to SchedulingEditPolicies.
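For context, a minimal sketch of the scheduler-monitor settings that currently 
require a restart to change; the key names below are the standard monitor 
settings and are shown only for illustration, not as part of this patch:

{code}
import org.apache.hadoop.conf.Configuration;

public class SchedulingMonitorConfig {
  public static Configuration enablePreemptionMonitor() {
    Configuration conf = new Configuration();

    // Enable the scheduler monitor framework that periodically runs
    // SchedulingEditPolicy implementations.
    conf.setBoolean("yarn.resourcemanager.scheduler.monitor.enable", true);

    // Comma-separated list of SchedulingEditPolicy classes to run; the
    // capacity preemption policy is the common example.
    conf.set("yarn.resourcemanager.scheduler.monitor.policies",
        "org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity."
            + "ProportionalCapacityPreemptionPolicy");

    // Today these are read at RM start-up; this JIRA proposes making them
    // refreshable via rmadmin -refreshQueues.
    return conf;
  }
}
{code}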






[jira] [Commented] (YARN-7531) ResourceRequest.equal does not check ExecutionTypeRequest.enforceExecutionType()

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257792#comment-16257792
 ] 

Hadoop QA commented on YARN-7531:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
2s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 26s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7531 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898293/YARN-7531.prelim.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 038814cf2e59 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0940e4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/18568/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18568/testReport/ |
| Max. process+thread count | 392 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 

[jira] [Commented] (YARN-7495) Improve robustness of the AggregatedLogDeletionService

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257781#comment-16257781
 ] 

Hadoop QA commented on YARN-7495:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m  
6s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 38m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
44s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 24m 
41s{color} | {color:red} branch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
37s{color} | {color:red} hadoop-yarn-common in trunk failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 50s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: The patch generated 4 new + 
37 unchanged - 2 fixed = 41 total (was 39) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 24m 
35s{color} | {color:red} patch has errors when building and testing our client 
artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m  2s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | hadoop-yarn-common:1 |
|   | hadoop-yarn-common:1 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7495 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898282/YARN-7495.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux f258ae56dd7b 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0940e4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/18567/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18567/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
| findbugs | 

[jira] [Commented] (YARN-7004) Add configs cache to optimize refreshQueues performance for large scale of queues

2017-11-17 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257773#comment-16257773
 ] 

Jonathan Hung commented on YARN-7004:
-

Hi [~Tao Yang], regarding YARN-5734, as Wangda mentioned, there is support for 
storing the capacity scheduler configuration in non-file-based storage and 
using an API to make incremental configuration changes to specific key-value 
pairs. Currently there's support for leveldb and zookeeper. The leveldb 
implementation should solve this problem: if you only want to change one 
key/value pair, the configuration mutation operation only needs to persist 
that single change, and the mutation is also applied in memory. The 
zookeeper-based approach, though, reads/deserializes the entire configuration, 
applies the change, and serializes/stores it. But this may still speed it up, 
depending on where the bottleneck is for the original file-based approach. We 
haven't tried it on such a large queue hierarchy though.

Anyway, depending on which backing store is suitable for your environment, I'd 
recommend seeing if this feature can fix the refreshQueues issue. There's some 
documentation on enabling this in the markdown files in YARN-7241.
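A minimal sketch of pointing the capacity scheduler at a mutable configuration 
store as described above; the store-class key and values follow the 
YARN-5734/YARN-7241 work, but confirm the exact values against the 
documentation for your release:

{code}
import org.apache.hadoop.conf.Configuration;

public class MutableCsConfStore {
  public static Configuration useLevelDbStore() {
    Configuration conf = new Configuration();

    // Back the capacity scheduler configuration with a mutable store instead
    // of capacity-scheduler.xml; "leveldb" and "zk" are the non-memory options
    // discussed in the comment above.
    conf.set("yarn.scheduler.configuration.store.class", "leveldb");

    // With a mutable store, individual key/value changes are applied through
    // the scheduler-conf mutation API rather than a full file reload.
    return conf;
  }
}
{code}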

> Add configs cache to optimize refreshQueues performance for large scale of 
> queues
> -
>
> Key: YARN-7004
> URL: https://issues.apache.org/jira/browse/YARN-7004
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-7004.001.patch
>
>
> We have a requirement for a large number of queues in our production 
> environment, serving many projects. We ran tests with more than 5000 queues 
> and found that the refreshQueues process took more than 1 minute. Most of 
> that time is spent iterating over all configurations to get the 
> accessible-node-labels and ordering-policy configs for every queue.
> Loading queue configs from a cache should reduce this cost (from about 1 
> minute to 3 seconds for 5000 queues in our test) when 
> initializing/reinitializing queues. So I propose loading queue configs into 
> a cache in CapacityScheduler#initializeQueues and 
> CapacityScheduler#reinitializeQueues. If the cache has not been loaded in 
> other scenarios, such as in test cases, queue configs can still be obtained 
> by iterating over all configurations.
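A rough sketch of such a cache, assuming queue configs share the standard 
yarn.scheduler.capacity.<queue-path>. prefix; the class name, method names, 
and the naive "split on the last dot" parsing are illustrative and not taken 
from the attached patch:

{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

// Illustrative only: build the per-queue view in a single pass over the
// configuration instead of re-scanning every entry once per queue.
public class QueueConfigCache {
  private static final String PREFIX = "yarn.scheduler.capacity.";

  private final Map<String, Map<String, String>> byQueue = new HashMap<>();

  public void load(Configuration conf) {
    byQueue.clear();
    for (Map.Entry<String, String> entry : conf) {
      String key = entry.getKey();
      if (!key.startsWith(PREFIX)) {
        continue;
      }
      // Naively treat everything up to the last '.' as the queue path, e.g.
      // "root.a.b" in "yarn.scheduler.capacity.root.a.b.ordering-policy".
      String rest = key.substring(PREFIX.length());
      int lastDot = rest.lastIndexOf('.');
      if (lastDot <= 0) {
        continue;
      }
      String queuePath = rest.substring(0, lastDot);
      String property = rest.substring(lastDot + 1);
      byQueue.computeIfAbsent(queuePath, q -> new HashMap<>())
          .put(property, entry.getValue());
    }
  }

  public Map<String, String> get(String queuePath) {
    return byQueue.getOrDefault(queuePath, Collections.emptyMap());
  }
}
{code}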






[jira] [Assigned] (YARN-6922) Findbugs warning in YARN NodeManager

2017-11-17 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned YARN-6922:
---

Assignee: Weiwei Yang  (was: Xiao Chen)

> Findbugs warning in YARN NodeManager
> 
>
> Key: YARN-6922
> URL: https://issues.apache.org/jira/browse/YARN-6922
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Xuan Gong
>Assignee: Weiwei Yang
>
> Several findbugs warning in YARN NodeManager package.
> {code}
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
>  is a mutable collection which should be package protected
> Bug type MS_MUTABLE_COLLECTION_PKGPROTECT (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
> At ContainerMetrics.java:[line 134]
> {code}
> {code}
>   
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.pendingResources
> At ContainerLocalizer.java:[line 357]
> {code}
> {code}
>   
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.recentlyStoppedContainers
> At NodeStatusUpdaterImpl.java:[line 719]
> {code}
> {code}
> Hard coded reference to an absolute pathname in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> Bug type DMI_HARDCODED_ABSOLUTE_FILENAME (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> File name /sys/fs/cgroup
> At DockerLinuxContainerRuntime.java:[line 490]
> {code}
> {code}
>   Useless object stored in variable removedNullContainers of method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Bug type UC_USELESS_OBJECT (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Value removedNullContainers
> Type java.util.HashSet
> At NodeStatusUpdaterImpl.java:[line 642]
> {code}
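The two WMI_WRONG_MAP_ITERATOR warnings above flag the same pattern; a minimal 
before/after sketch with generic names (not the actual NodeManager code):

{code}
import java.util.HashMap;
import java.util.Map;

public class MapIterationExample {
  public static void main(String[] args) {
    Map<String, Long> pending = new HashMap<>();
    pending.put("resource-1", 42L);

    // Flagged pattern: iterating keySet() and calling get() per key performs
    // a second lookup for every entry.
    for (String key : pending.keySet()) {
      System.out.println(key + " -> " + pending.get(key));
    }

    // Preferred pattern: entrySet() yields the key and value together.
    for (Map.Entry<String, Long> e : pending.entrySet()) {
      System.out.println(e.getKey() + " -> " + e.getValue());
    }
  }
}
{code}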






[jira] [Assigned] (YARN-6922) Findbugs warning in YARN NodeManager

2017-11-17 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen reassigned YARN-6922:
---

Assignee: Xiao Chen  (was: Weiwei Yang)

> Findbugs warning in YARN NodeManager
> 
>
> Key: YARN-6922
> URL: https://issues.apache.org/jira/browse/YARN-6922
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Xuan Gong
>Assignee: Xiao Chen
>
> Several findbugs warning in YARN NodeManager package.
> {code}
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
>  is a mutable collection which should be package protected
> Bug type MS_MUTABLE_COLLECTION_PKGPROTECT (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainerMetrics.usageMetrics
> At ContainerMetrics.java:[line 134]
> {code}
> {code}
>   
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.createStatus()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ContainerLocalizer.pendingResources
> At ContainerLocalizer.java:[line 357]
> {code}
> {code}
>   
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
>  makes inefficient use of keySet iterator instead of entrySet iterator
> Bug type WMI_WRONG_MAP_ITERATOR (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeVeryOldStoppedContainersFromCache()
> Field 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.recentlyStoppedContainers
> At NodeStatusUpdaterImpl.java:[line 719]
> {code}
> {code}
> Hard coded reference to an absolute pathname in 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> Bug type DMI_HARDCODED_ABSOLUTE_FILENAME (click for details) 
> In class 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime
> In method 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(ContainerRuntimeContext)
> File name /sys/fs/cgroup
> At DockerLinuxContainerRuntime.java:[line 490]
> {code}
> {code}
>   Useless object stored in variable removedNullContainers of method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Bug type UC_USELESS_OBJECT (click for details) 
> In class org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl
> In method 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.removeOrTrackCompletedContainersFromContext(List)
> Value removedNullContainers
> Type java.util.HashSet
> At NodeStatusUpdaterImpl.java:[line 642]
> {code}






[jira] [Commented] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257731#comment-16257731
 ] 

Hadoop QA commented on YARN-6124:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
0s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 28s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 10 new + 463 unchanged - 0 fixed = 473 total (was 463) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 57s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 61m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesHttpStaticUserPermissions
 |
|   | hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler |
|   | 
hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicy
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerSurgicalPreemption
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
|   | hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
|   | hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
|
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebappAuthentication |
|   | 

[jira] [Commented] (YARN-7528) Resource types that use units need to be defined at RM level and NM level or when using small units you will overflow max

2017-11-17 Thread Grant Sohn (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257725#comment-16257725
 ] 

Grant Sohn commented on YARN-7528:
--

The Resource Manager exception is:
{noformat}
java.lang.IllegalArgumentException: Converting 9223372036854775807 from '' to 
'm' will result in an overflow of Long
at 
org.apache.hadoop.yarn.util.UnitsConversionUtil.convert(UnitsConversionUtil.java:160)
at 
org.apache.hadoop.yarn.util.resource.DominantResourceCalculator.normalize(DominantResourceCalculator.java:444)
at 
org.apache.hadoop.yarn.util.resource.Resources.normalize(Resources.java:392)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.getNormalizedResource(SchedulerUtils.java:177)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.getNormalizedResource(FairScheduler.java:782)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.AbstractYarnScheduler.normalizeRequests(AbstractYarnScheduler.java:1137)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler.allocate(FairScheduler.java:828)
at 
org.apache.hadoop.yarn.server.resourcemanager.DefaultAMSProcessor.allocate(DefaultAMSProcessor.java:265)
at 
org.apache.hadoop.yarn.server.resourcemanager.AMSProcessingChain.allocate(AMSProcessingChain.java:92)
at 
org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:388)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
at 
org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
{noformat}
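The overflow in the message above can be reproduced in isolation: converting 
from the base unit '' to milli-units 'm' scales the value by 1000, which 
cannot be represented when the source value is already Long.MAX_VALUE. A 
standalone sketch of the arithmetic (not the UnitsConversionUtil code itself):

{code}
public class UnitOverflowExample {
  public static void main(String[] args) {
    long maxAllocation = Long.MAX_VALUE; // the LONG_MAX default from the RM
    long factor = 1000L;                 // '' -> 'm' scales by 1000

    try {
      // Math.multiplyExact throws instead of silently wrapping around,
      // mirroring why the RM rejects the conversion with an exception.
      long inMilliUnits = Math.multiplyExact(maxAllocation, factor);
      System.out.println(inMilliUnits);
    } catch (ArithmeticException e) {
      System.out.println("overflow: " + maxAllocation + " * " + factor
          + " does not fit in a long");
    }
  }
}
{code}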

> Resource types that use units need to be defined at RM level and NM level or 
> when using small units you will overflow max
> -
>
> Key: YARN-7528
> URL: https://issues.apache.org/jira/browse/YARN-7528
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, resourcemanager
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Daniel Templeton
>
> When the unit is not defined in the RM, the LONG_MAX default will overflow in 
> the conversion step.






[jira] [Updated] (YARN-7528) Resource types that use units need to be defined at RM level and NM level or when using small units you will overflow max_allocation calculation

2017-11-17 Thread Grant Sohn (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Sohn updated YARN-7528:
-
Summary: Resource types that use units need to be defined at RM level and 
NM level or when using small units you will overflow max_allocation calculation 
 (was: Resource types that use units need to be defined at RM level and NM 
level or when using small units you will overflow max)

> Resource types that use units need to be defined at RM level and NM level or 
> when using small units you will overflow max_allocation calculation
> 
>
> Key: YARN-7528
> URL: https://issues.apache.org/jira/browse/YARN-7528
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, resourcemanager
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Daniel Templeton
>
> When the unit is not defined in the RM, the LONG_MAX default will overflow in 
> the conversion step.






[jira] [Updated] (YARN-7528) Resource types that use units need to be defined at RM level and NM level or when using small units you will overflow max

2017-11-17 Thread Grant Sohn (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Sohn updated YARN-7528:
-
Summary: Resource types that use units need to be defined at RM level and 
NM level or when using small units you will overflow max  (was: Need to 
document that resource types that use units need to be defined at RM level and 
NM level)

> Resource types that use units need to be defined at RM level and NM level or 
> when using small units you will overflow max
> -
>
> Key: YARN-7528
> URL: https://issues.apache.org/jira/browse/YARN-7528
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, resourcemanager
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Daniel Templeton
>
> When the unit is not defined in the RM, the LONG_MAX default will overflow in 
> the conversion step.






[jira] [Commented] (YARN-7473) Implement Framework and policy for capacity management of auto created queues

2017-11-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257722#comment-16257722
 ] 

Wangda Tan commented on YARN-7473:
--

Thanks [~suma.shivaprasad], comments:

validateAndApplyQueueManagementChanges:
1) Should be part of ManagedParentQueue instead of CS.
2) Instead of using QueueEntitlement, can we use LeafQueueTemplate and 
reinitialize the LeafQueue?

getInitialEntitlement:
1) The ordering looks wrong.
2) getInitialEntitlement should be getInitialQueueTemplate and merged into the 
queue creation path.

commitQueueManagementChanges:
1) Should it be part of ManagedParentQueue?

AutoCreatedQueueManagementPolicy#addChildQueue/removeChildQueue:
1) Can we just let the policy pull child-queue info from the parent queue 
reference?

AutoCreatedQueueManagementPolicy#reinitialize is not being used.

AutoCreatedQueueManagementPolicy#init: the clock can be removed from the 
parameter list.

> Implement Framework and policy for capacity management of auto created queues 
> --
>
> Key: YARN-7473
> URL: https://issues.apache.org/jira/browse/YARN-7473
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Suma Shivaprasad
>Assignee: Suma Shivaprasad
> Attachments: YARN-7473.1.patch, YARN-7473.2.patch
>
>
> This jira mainly addresses the following:
>  
> 1. Support adding pluggable policies on a parent queue for dynamically 
> managing the capacity/state of leaf queues.
> 2. Implement a default policy that manages capacity based on pending 
> applications and grants either guaranteed or zero capacity to queues, based 
> on the parent's available guaranteed capacity.
> 3. Integrate with the SchedulingEditPolicy framework to trigger this 
> periodically and signal the scheduler to take necessary actions for 
> capacity/queue management.
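For readers following along, a rough sketch of the shape such a pluggable 
policy might take. The method names init and reinitialize come from the review 
comments above; everything else, including the parameter and return types, is 
a placeholder rather than the signatures in the attached patches:

{code}
// Illustrative shape only; the real AutoCreatedQueueManagementPolicy in the
// patch has different signatures.
public interface QueueManagementPolicySketch {

  /** Called once when the parent queue is configured with this policy. */
  void init(ParentQueueHandle parentQueue);

  /** Called on refreshQueues so the policy picks up new configuration. */
  void reinitialize(ParentQueueHandle parentQueue);

  /**
   * Periodically invoked (e.g. from the SchedulingEditPolicy framework) to
   * compute capacity changes for auto-created leaf queues; the policy reads
   * the parent's children itself instead of being told about each addition
   * or removal.
   */
  Iterable<QueueCapacityChange> computeQueueManagementChanges();

  /** Placeholder handle for the managed parent queue. */
  interface ParentQueueHandle { }

  /** Placeholder describing a proposed capacity change for one leaf queue. */
  final class QueueCapacityChange {
    public final String queuePath;
    public final float guaranteedCapacity;

    public QueueCapacityChange(String queuePath, float guaranteedCapacity) {
      this.queuePath = queuePath;
      this.guaranteedCapacity = guaranteedCapacity;
    }
  }
}
{code}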






[jira] [Updated] (YARN-7528) Need to document that resource types that use units need to be defined at RM level and NM level

2017-11-17 Thread Grant Sohn (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Sohn updated YARN-7528:
-
Component/s: resourcemanager

> Need to document that resource types that use units need to be defined at RM 
> level and NM level
> ---
>
> Key: YARN-7528
> URL: https://issues.apache.org/jira/browse/YARN-7528
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation, resourcemanager
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Daniel Templeton
>
> When the unit is not defined in the RM, the LONG_MAX default will overflow in 
> the conversion step.






[jira] [Commented] (YARN-7528) Need to document that resource types that use units need to be defined at RM level and NM level

2017-11-17 Thread Grant Sohn (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257713#comment-16257713
 ] 

Grant Sohn commented on YARN-7528:
--

This has the unpleasant side-effect of causing the job to hang.

> Need to document that resource types that use units need to be defined at RM 
> level and NM level
> ---
>
> Key: YARN-7528
> URL: https://issues.apache.org/jira/browse/YARN-7528
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Daniel Templeton
>
> When the unit is not defined in the RM, the LONG_MAX default will overflow in 
> the conversion step.






[jira] [Updated] (YARN-7531) ResourceRequest.equal does not check ExecutionTypeRequest.enforceExecutionType()

2017-11-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7531:
-
Priority: Major  (was: Critical)

> ResourceRequest.equal does not check 
> ExecutionTypeRequest.enforceExecutionType()
> 
>
> Key: YARN-7531
> URL: https://issues.apache.org/jira/browse/YARN-7531
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7531.prelim.patch
>
>







[jira] [Commented] (YARN-7523) Introduce description and version field in Service record

2017-11-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257695#comment-16257695
 ] 

Eric Yang commented on YARN-7523:
-

It would be nice if we created a wrapper object on the Service record as a 
template. The YARN-7399 storage interface can match the design here.

> Introduce description and version field in Service record
> -
>
> Key: YARN-7523
> URL: https://issues.apache.org/jira/browse/YARN-7523
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
> Fix For: yarn-native-services
>
>
> YARN-7512 would need version field in Service record. It would be good to 
> introduce a description field also to allow service owners to capture some 
> details which can be used to display in Service catalog as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7531) ResourceRequest.equal does not check ExecutionTypeRequest.enforceExecutionType()

2017-11-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7531:
-
Attachment: YARN-7531.prelim.patch

Uploaded a preliminary patch to check which tests fail because of this fix.
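
For illustration only (a sketch, not the attached patch), folding the enforce flag into the 
comparison could look roughly like the following, assuming the fix lands in the 
{{ExecutionTypeRequest}} comparison:

{code}
// Hypothetical sketch: include the enforce flag when comparing two
// ExecutionTypeRequest instances, so ResourceRequest.equals() also
// reflects enforceExecutionType().
@Override
public boolean equals(Object obj) {
  if (this == obj) {
    return true;
  }
  if (!(obj instanceof ExecutionTypeRequest)) {
    return false;
  }
  ExecutionTypeRequest other = (ExecutionTypeRequest) obj;
  return getExecutionType() == other.getExecutionType()
      && getEnforceExecutionType() == other.getEnforceExecutionType();
}
{code}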

> ResourceRequest.equal does not check 
> ExecutionTypeRequest.enforceExecutionType()
> 
>
> Key: YARN-7531
> URL: https://issues.apache.org/jira/browse/YARN-7531
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>Priority: Critical
> Attachments: YARN-7531.prelim.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7399) Yarn services metadata storage improvement

2017-11-17 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7399:

Attachment: YARN-7399.001.patch

First draft of the storage API.  The main features of this interface are divided into 
two parts:

# Ability to store template form of service configuration.  The template can be 
instantiated by users.
# Ability to store service configuration for a deployed instance of service.

[~billie.rinaldi] [~gsaha] Please review if the interface covers all necessary 
operations.  Thanks
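
As a rough, purely illustrative outline of the ground such an interface is expected to 
cover (names here are invented and not taken from the attached patch):

{code}
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.yarn.service.api.records.Service;

// Hypothetical outline only; the real API is in YARN-7399.001.patch.
public interface ServiceMetadataStore {
  // Template form of a service configuration that users can instantiate.
  void saveTemplate(String templateName, Service spec) throws IOException;
  Service getTemplate(String templateName) throws IOException;

  // Configuration of a deployed service instance.
  void saveInstance(String serviceName, Service spec) throws IOException;
  Service getInstance(String serviceName) throws IOException;
  List<Service> listInstances(String user) throws IOException;
}
{code}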

> Yarn services metadata storage improvement
> --
>
> Key: YARN-7399
> URL: https://issues.apache.org/jira/browse/YARN-7399
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn-native-services
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7399.001.patch, YARN-7399.png
>
>
> In Slider, metadata is stored in the user's home directory. The Slider command line 
> interface interacts with HDFS directly to list deployed applications and 
> invokes the YARN API or HDFS API to provide information to the user. This design 
> works for a single user managing his/her own applications. Now that this design has 
> been ported to YARN services, it has become apparent that it is difficult for an 
> administrator to list all deployed applications on the Hadoop cluster in order to 
> manage them. The Resource Manager needs to crawl through every user's home 
> directory to compile metadata about deployed applications. This can put a high 
> load on the namenode by issuing hundreds or thousands of list-directory calls 
> against directories owned by different users. Hence, it might be best to centralize 
> the metadata storage in Solr or HBase to reduce the number of IO calls to the 
> namenode for managing applications.
> In Slider, one application is composed of metainfo, specifications in JSON, 
> and a zip file payload that contains application code and deployment code. 
> Both the meta information and the zip file payload are stored in the same 
> application directory in HDFS. This works well for distributed applications 
> without a central application manager that oversees all applications.
> In the next generation of application management, we would like to centralize 
> metainfo and specifications in JSON into centralized storage managed by the YARN 
> user, and keep the payload zip file in the user's home directory or in a docker 
> registry. This arrangement can provide a faster lookup of metainfo when we 
> list all deployed applications and services on the YARN dashboard.
> When we centralize metainfo under the YARN user, we also need to build ACLs to 
> enforce who can manage applications and make updates. The current proposal is:
> yarn.admin.acl - list of groups that can submit/reconfigure/pause/kill all 
> applications
> normal users - submit/reconfigure/pause/kill his/her own applications



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7531) ResourceRequest.equal does not check ExecutionTypeRequest.enforceExecutionType()

2017-11-17 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-7531:


 Summary: ResourceRequest.equal does not check 
ExecutionTypeRequest.enforceExecutionType()
 Key: YARN-7531
 URL: https://issues.apache.org/jira/browse/YARN-7531
 Project: Hadoop YARN
  Issue Type: Bug
  Components: api
Affects Versions: 3.0.0
Reporter: Haibo Chen
Assignee: Haibo Chen
Priority: Critical






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7529) TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails intermittently

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257680#comment-16257680
 ] 

Hadoop QA commented on YARN-7529:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 49s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
34s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 55s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7529 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898279/YARN-7529.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e2fce6c64a31 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0940e4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18566/testReport/ |
| Max. process+thread count | 621 (vs. ulimit of 5000) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18566/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Commented] (YARN-7119) yarn rmadmin -updateNodeResource should be updated for resource types

2017-11-17 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257629#comment-16257629
 ] 

Daniel Templeton commented on YARN-7119:


# ResourceUtils:L24, javadoc summary must end with a period (see the javadoc sketch 
below for the shape I mean).
# ResourceUtils:L26, javadoc should be specific about the return value, i.e. 
array element 0 is...
# ResourceUtils:L60, param summary should start with a lower case letter
# ResourceUtils:L61, throws needs a summary
# RMAdminCLI:L207,L208, param summary should start with a lower case letter
# RMAdminCLI:L211,L212, throws needs a summary
# It looks like my suggestion about moving the + around (#10) was incomplete.  
I was reviewing the patch in the browser, so I didn't see that there are a 
bunch of other lines there with the + at the end.  Would you mind just making 
them all consistent (with the plus at the beginning)?
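
For reference, the javadoc shape being asked for in points 1-6 would look roughly 
like this (the method below is made up purely to illustrate the conventions, it is 
not from the patch):

{code}
/**
 * Parses a resource value that may carry a unit, e.g. "memory-mb=2Gi".
 *
 * @param resourceValue the value to parse, possibly including a unit
 * @return a two-element array: element 0 is the unit (possibly empty),
 *         element 1 is the numeric value
 * @throws IllegalArgumentException if the value cannot be parsed
 */
public static String[] parseResourceValue(String resourceValue) {
  // implementation omitted in this sketch
  throw new UnsupportedOperationException("sketch only");
}
{code}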

> yarn rmadmin -updateNodeResource should be updated for resource types
> -
>
> Key: YARN-7119
> URL: https://issues.apache.org/jira/browse/YARN-7119
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
> Attachments: YARN-7119.001.patch, YARN-7119.002.patch, 
> YARN-7119.002.patch, YARN-7119.003.patch, YARN-7119.004.patch, 
> YARN-7119.004.patch, YARN-7119.005.patch, YARN-7119.006.patch, 
> YARN-7119.007.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7495) Improve robustness of the AggregatedLogDeletionService

2017-11-17 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated YARN-7495:
--
Attachment: YARN-7495.002.patch

> Improve robustness of the AggregatedLogDeletionService
> --
>
> Key: YARN-7495
> URL: https://issues.apache.org/jira/browse/YARN-7495
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: log-aggregation
>Reporter: Jonathan Eagles
>Assignee: Jonathan Eagles
> Attachments: YARN-7495.001.patch, YARN-7495.002.patch
>
>
> The deletion tasks are scheduled with a TimerTask whose scheduler is a Timer 
> scheduleAtFixedRate. If an exception occurs in the log deletion task, the 
> Timer scheduler interprets this as a task cancelation and stops scheduling 
> future deletion tasks.
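
One common way to harden such a recurring task (shown only as an illustration of the 
direction, not as the attached patch) is to catch everything inside {{run()}} so that 
{{java.util.Timer}} never sees an uncaught exception and keeps the schedule alive:

{code}
import java.util.TimerTask;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative sketch: an uncaught exception in TimerTask.run() cancels the
// whole Timer, so log and swallow failures and rely on the next interval.
class LogDeletionTask extends TimerTask {
  private static final Logger LOG =
      LoggerFactory.getLogger(LogDeletionTask.class);

  @Override
  public void run() {
    try {
      deleteOldAggregatedLogs();   // hypothetical helper with the actual deletion logic
    } catch (Throwable t) {
      LOG.error("Aggregated log deletion run failed; will retry next interval", t);
    }
  }

  private void deleteOldAggregatedLogs() {
    // actual deletion logic lives here
  }
}
{code}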



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7529) TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails intermittently

2017-11-17 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257602#comment-16257602
 ] 

Chandni Singh edited comment on YARN-7529 at 11/17/17 9:29 PM:
---

Patch 2: Addressed checkstyle warning


was (Author: csingh):
Address checkstyle warning

> TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails 
> intermittently
> -
>
> Key: YARN-7529
> URL: https://issues.apache.org/jira/browse/YARN-7529
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7529.001.patch, YARN-7529.002.patch
>
>
> java.lang.AssertionError: component container affected by restart 
> expected:<{}> but was:<{compb=[container_1510781886708_0001_01_06, 
> container_1510781886708_0001_01_05], 
> compa=[container_1510781886708_0001_01_03, 
> container_1510781886708_0001_01_02]}>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.service.TestYarnNativeServices.testRecoverComponentsAfterRMRestart(TestYarnNativeServices.java:213)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7529) TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails intermittently

2017-11-17 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7529:

Attachment: YARN-7529.002.patch

Address checkstyle warning

> TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails 
> intermittently
> -
>
> Key: YARN-7529
> URL: https://issues.apache.org/jira/browse/YARN-7529
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7529.001.patch, YARN-7529.002.patch
>
>
> java.lang.AssertionError: component container affected by restart 
> expected:<{}> but was:<{compb=[container_1510781886708_0001_01_06, 
> container_1510781886708_0001_01_05], 
> compa=[container_1510781886708_0001_01_03, 
> container_1510781886708_0001_01_02]}>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.service.TestYarnNativeServices.testRecoverComponentsAfterRMRestart(TestYarnNativeServices.java:213)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7529) TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails intermittently

2017-11-17 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257598#comment-16257598
 ] 

Chandni Singh commented on YARN-7529:
-

[~billie.rinaldi] Can you please review?

> TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails 
> intermittently
> -
>
> Key: YARN-7529
> URL: https://issues.apache.org/jira/browse/YARN-7529
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7529.001.patch
>
>
> java.lang.AssertionError: component container affected by restart 
> expected:<{}> but was:<{compb=[container_1510781886708_0001_01_06, 
> container_1510781886708_0001_01_05], 
> compa=[container_1510781886708_0001_01_03, 
> container_1510781886708_0001_01_02]}>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.service.TestYarnNativeServices.testRecoverComponentsAfterRMRestart(TestYarnNativeServices.java:213)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7529) TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails intermittently

2017-11-17 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257596#comment-16257596
 ] 

Chandni Singh commented on YARN-7529:
-

The test seemed to be failing because it wasn't waiting for the containers to 
be ready. Patch 1 uses {{waitForAllCompToBeReady}} to retrieve all the ready 
containers, which should fix the problem.
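
Roughly, the shape of the change is the following (helper names here are 
approximate and hypothetical, not the exact patch):

{code}
// Wait until every component reports ready before recording the container
// IDs that the post-restart assertion compares against (helper names assumed).
client.actionCreate(exampleApp);
waitForAllCompToBeReady(client, exampleApp);
Map<String, Set<String>> containersBeforeRestart =
    getComponentContainers(client, exampleApp);   // hypothetical helper
// ... restart the RM, then assert the same containers are still assigned ...
{code}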

> TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails 
> intermittently
> -
>
> Key: YARN-7529
> URL: https://issues.apache.org/jira/browse/YARN-7529
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7529.001.patch
>
>
> java.lang.AssertionError: component container affected by restart 
> expected:<{}> but was:<{compb=[container_1510781886708_0001_01_06, 
> container_1510781886708_0001_01_05], 
> compa=[container_1510781886708_0001_01_03, 
> container_1510781886708_0001_01_02]}>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.service.TestYarnNativeServices.testRecoverComponentsAfterRMRestart(TestYarnNativeServices.java:213)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7530) hadoop-yarn-services-api should be part of hadoop-yarn-services

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257595#comment-16257595
 ] 

Hadoop QA commented on YARN-7530:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-7530 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7530 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898277/YARN-7530.001.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18564/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> hadoop-yarn-services-api should be part of hadoop-yarn-services
> ---
>
> Key: YARN-7530
> URL: https://issues.apache.org/jira/browse/YARN-7530
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Trivial
> Fix For: yarn-native-services
>
> Attachments: YARN-7530.001.patch
>
>
> Hadoop-yarn-services-api is currently a parallel project to 
> hadoop-yarn-services project.  It would be better if hadoop-yarn-services-api 
> is part of hadoop-yarn-services for correctness.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7530) hadoop-yarn-services-api should be part of hadoop-yarn-services

2017-11-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257593#comment-16257593
 ] 

Eric Yang edited comment on YARN-7530 at 11/17/17 9:23 PM:
---

This patch is same as: 
{code}
git mv hadoop-yarn-services-api hadoop-yarn-services
{code}

And modify pom.xml to match.


was (Author: eyang):
This patch is same as: 
{{code}}
git mv hadoop-yarn-services-api hadoop-yarn-services
{{code}}

And modify pom.xml to match.

> hadoop-yarn-services-api should be part of hadoop-yarn-services
> ---
>
> Key: YARN-7530
> URL: https://issues.apache.org/jira/browse/YARN-7530
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Priority: Trivial
> Fix For: yarn-native-services
>
> Attachments: YARN-7530.001.patch
>
>
> Hadoop-yarn-services-api is currently a parallel project to 
> hadoop-yarn-services project.  It would be better if hadoop-yarn-services-api 
> is part of hadoop-yarn-services for correctness.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7530) hadoop-yarn-services-api should be part of hadoop-yarn-services

2017-11-17 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang reassigned YARN-7530:
---

Assignee: Eric Yang

> hadoop-yarn-services-api should be part of hadoop-yarn-services
> ---
>
> Key: YARN-7530
> URL: https://issues.apache.org/jira/browse/YARN-7530
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Trivial
> Fix For: yarn-native-services
>
> Attachments: YARN-7530.001.patch
>
>
> Hadoop-yarn-services-api is currently a parallel project to 
> hadoop-yarn-services project.  It would be better if hadoop-yarn-services-api 
> is part of hadoop-yarn-services for correctness.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7530) hadoop-yarn-services-api should be part of hadoop-yarn-services

2017-11-17 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7530:

Attachment: YARN-7530.001.patch

This patch is same as: 
{{code}}
git mv hadoop-yarn-services-api hadoop-yarn-services
{{code}}

And modify pom.xml to match.

> hadoop-yarn-services-api should be part of hadoop-yarn-services
> ---
>
> Key: YARN-7530
> URL: https://issues.apache.org/jira/browse/YARN-7530
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Affects Versions: 3.1.0
>Reporter: Eric Yang
>Priority: Trivial
> Fix For: yarn-native-services
>
> Attachments: YARN-7530.001.patch
>
>
> Hadoop-yarn-services-api is currently a parallel project to 
> hadoop-yarn-services project.  It would be better if hadoop-yarn-services-api 
> is part of hadoop-yarn-services for correctness.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7530) hadoop-yarn-services-api should be part of hadoop-yarn-services

2017-11-17 Thread Eric Yang (JIRA)
Eric Yang created YARN-7530:
---

 Summary: hadoop-yarn-services-api should be part of 
hadoop-yarn-services
 Key: YARN-7530
 URL: https://issues.apache.org/jira/browse/YARN-7530
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn-native-services
Affects Versions: 3.1.0
Reporter: Eric Yang
Priority: Trivial


Hadoop-yarn-services-api is currently a parallel project to 
hadoop-yarn-services project.  It would be better if hadoop-yarn-services-api 
is part of hadoop-yarn-services for correctness.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7529) TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails intermittently

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257537#comment-16257537
 ] 

Hadoop QA commented on YARN-7529:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 53s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core:
 The patch generated 1 new + 6 unchanged - 0 fixed = 7 total (was 6) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 31s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
32s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7529 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898257/YARN-7529.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 023faec1129d 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0940e4f |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/18562/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/18562/testReport/ |
| Max. process+thread count | 622 (vs. ulimit of 5000) |
| modules | C: 

[jira] [Updated] (YARN-7337) Expose per-node over-allocation info in Node Report

2017-11-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-7337:
-
Attachment: YARN-7337-YARN-1011.04.patch

> Expose per-node over-allocation info in Node Report
> ---
>
> Key: YARN-7337
> URL: https://issues.apache.org/jira/browse/YARN-7337
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7337-YARN-1011.00.patch, 
> YARN-7337-YARN-1011.01.patch, YARN-7337-YARN-1011.02.patch, 
> YARN-7337-YARN-1011.03.patch, YARN-7337-YARN-1011.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7337) Expose per-node over-allocation info in Node Report

2017-11-17 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257532#comment-16257532
 ] 

Haibo Chen commented on YARN-7337:
--

The license warning and findbugs issue are unrelated, as are the TestAMRMClient 
failures (I cannot reproduce them locally).
The TestYarnCLI tests try to match the exact output of the NodeCLI, so I updated 
the string they match against.

> Expose per-node over-allocation info in Node Report
> ---
>
> Key: YARN-7337
> URL: https://issues.apache.org/jira/browse/YARN-7337
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7337-YARN-1011.00.patch, 
> YARN-7337-YARN-1011.01.patch, YARN-7337-YARN-1011.02.patch, 
> YARN-7337-YARN-1011.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7337) Expose per-node over-allocation info in Node Report

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257513#comment-16257513
 ] 

Hadoop QA commented on YARN-7337:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  9m 
26s{color} | {color:red} Docker failed to build yetus/hadoop:5b98639. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7337 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12897020/YARN-7337-YARN-1011.03.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18563/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Expose per-node over-allocation info in Node Report
> ---
>
> Key: YARN-7337
> URL: https://issues.apache.org/jira/browse/YARN-7337
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-7337-YARN-1011.00.patch, 
> YARN-7337-YARN-1011.01.patch, YARN-7337-YARN-1011.02.patch, 
> YARN-7337-YARN-1011.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues

2017-11-17 Thread Zian Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257486#comment-16257486
 ] 

Zian Chen commented on YARN-6124:
-

[~leftnoteasy] Thank you for your suggestions. I have modified the patch 
accordingly and uploaded YARN-6124.wip.3.patch. Could you 
help me review it? Thanks!

> Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin 
> -refreshQueues
> -
>
> Key: YARN-6124
> URL: https://issues.apache.org/jira/browse/YARN-6124
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Zian Chen
> Attachments: YARN-6124.wip.1.patch, YARN-6124.wip.2.patch, 
> YARN-6124.wip.3.patch
>
>
> Currently, enabling / disabling / updating a SchedulingEditPolicy config requires 
> restarting the RM. This is inconvenient when an admin wants to make changes to 
> SchedulingEditPolicies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues

2017-11-17 Thread Zian Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zian Chen updated YARN-6124:

Attachment: YARN-6124.wip.3.patch

> Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin 
> -refreshQueues
> -
>
> Key: YARN-6124
> URL: https://issues.apache.org/jira/browse/YARN-6124
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Zian Chen
> Attachments: YARN-6124.wip.1.patch, YARN-6124.wip.2.patch, 
> YARN-6124.wip.3.patch
>
>
> Currently, enabling / disabling / updating a SchedulingEditPolicy config requires 
> restarting the RM. This is inconvenient when an admin wants to make changes to 
> SchedulingEditPolicies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7523) Introduce description and version field in Service record

2017-11-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257479#comment-16257479
 ] 

Eric Yang commented on YARN-7523:
-

[~gsaha] For catalog, can we consider these fields?

# Organization
# Application Name
# Description
# Icon
# Number of Likes
# Number of Downloads
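
As a purely hypothetical sketch of how such catalog metadata could be modeled (field 
names invented for illustration, not part of any committed API):

{code}
// Illustration only; not an existing class in the codebase.
public class ServiceCatalogEntry {
  private String organization;
  private String applicationName;
  private String description;
  private String iconUri;
  private long likeCount;
  private long downloadCount;
  // getters and setters omitted for brevity
}
{code}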


> Introduce description and version field in Service record
> -
>
> Key: YARN-7523
> URL: https://issues.apache.org/jira/browse/YARN-7523
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
> Fix For: yarn-native-services
>
>
> YARN-7512 would need version field in Service record. It would be good to 
> introduce a description field also to allow service owners to capture some 
> details which can be used to display in Service catalog as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7529) TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails intermittently

2017-11-17 Thread Chandni Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chandni Singh updated YARN-7529:

Attachment: YARN-7529.001.patch

> TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails 
> intermittently
> -
>
> Key: YARN-7529
> URL: https://issues.apache.org/jira/browse/YARN-7529
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7529.001.patch
>
>
> java.lang.AssertionError: component container affected by restart 
> expected:<{}> but was:<{compb=[container_1510781886708_0001_01_06, 
> container_1510781886708_0001_01_05], 
> compa=[container_1510781886708_0001_01_03, 
> container_1510781886708_0001_01_02]}>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.service.TestYarnNativeServices.testRecoverComponentsAfterRMRestart(TestYarnNativeServices.java:213)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7529) TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails intermittently

2017-11-17 Thread Chandni Singh (JIRA)
Chandni Singh created YARN-7529:
---

 Summary: 
TestYarnNativeServices#testRecoverComponentsAfterRMRestart() fails 
intermittently
 Key: YARN-7529
 URL: https://issues.apache.org/jira/browse/YARN-7529
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Chandni Singh
Assignee: Chandni Singh


java.lang.AssertionError: component container affected by restart expected:<{}> 
but was:<{compb=[container_1510781886708_0001_01_06, 
container_1510781886708_0001_01_05], 
compa=[container_1510781886708_0001_01_03, 
container_1510781886708_0001_01_02]}>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at 
org.apache.hadoop.yarn.service.TestYarnNativeServices.testRecoverComponentsAfterRMRestart(TestYarnNativeServices.java:213)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6921) Allow resource request to opt out of oversubscription in Fair Scheduler

2017-11-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6921:
-
Attachment: YARN-6921-YARN-1011.00.patch

> Allow resource request to opt out of oversubscription in Fair Scheduler
> ---
>
> Key: YARN-6921
> URL: https://issues.apache.org/jira/browse/YARN-6921
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6921-YARN-1011.00.patch
>
>
> Guaranteed container requests, whether their enforce tag is true or not, are by 
> default eligible for oversubscription and thus can get OPPORTUNISTIC container 
> allocations. We should allow them to opt out when their enforce tag is set to 
> true.
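
As an illustration of the intended opt-out from the AM side (a sketch, not the patch 
itself), a GUARANTEED request with the enforce flag set would be skipped by the 
oversubscription logic:

{code}
// Assumed usage sketch: ExecutionTypeRequest.newInstance(ExecutionType, boolean)
// marks the request as GUARANTEED and enforces that execution type.
ExecutionTypeRequest execTypeReq =
    ExecutionTypeRequest.newInstance(ExecutionType.GUARANTEED, true);
// Attach execTypeReq to the ResourceRequest/ContainerRequest being submitted.
{code}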



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7004) Add configs cache to optimize refreshQueues performance for large scale of queues

2017-11-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257454#comment-16257454
 ] 

Wangda Tan commented on YARN-7004:
--

[~Tao Yang], thanks for filing this ticket. I haven't reviewed the details of 
the implementation yet.

There are two parallel efforts you might be interested in: 
1) YARN-5734, which stores queue configs in a configuration store instead of 
capacity-scheduler.xml. I'm not sure if this can solve your problem. cc: 
[~jhung].
2) It might be bad to maintain all queues inside capacity-scheduler.xml. I 
assume some queues (like per-user queues) can be auto-created and managed by 
policies. We're working on YARN-7117, targeted at the 3.1.0 release (Feb 2018). 
Part of the code is already merged to trunk. We would appreciate it if you could 
share your use cases and feedback on the feature. cc: [~suma.shivaprasad]

> Add configs cache to optimize refreshQueues performance for large scale of 
> queues
> -
>
> Key: YARN-7004
> URL: https://issues.apache.org/jira/browse/YARN-7004
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-7004.001.patch
>
>
> We have requirements for a large number of queues in our production environment 
> to serve many projects. We ran tests with more than 5000 queues and found that 
> the refreshQueues process took more than 1 minute. The refreshQueues process 
> spends most of its time iterating over all configurations to get the 
> accessible-node-labels and ordering-policy configs for every queue.  
> Loading queue configs from a cache should reduce this cost (from 1 minute to 3 
> seconds for 5000 queues in our test) when initializing/reinitializing queues. So I 
> propose to load queue configs into a cache in CapacityScheduler#initializeQueues 
> and CapacityScheduler#reinitializeQueues. If the cache has not been loaded in 
> other scenarios, such as test cases, queue configs can still be obtained by 
> iterating over all configurations.
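
A rough illustration of the idea (not the attached patch): pre-compute a per-queue 
view of the configuration once per (re)initialization instead of scanning every 
property for every queue.

{code}
// Configuration is Iterable<Map.Entry<String, String>>, so one pass over the
// capacity-scheduler properties can populate a per-queue cache.
Map<String, Map<String, String>> queueConfigCache = new HashMap<>();
for (Map.Entry<String, String> entry : csConf) {
  String key = entry.getKey();
  if (key.startsWith(CapacitySchedulerConfiguration.PREFIX)) {
    String queuePath = extractQueuePath(key);   // hypothetical helper
    queueConfigCache
        .computeIfAbsent(queuePath, q -> new HashMap<>())
        .put(key, entry.getValue());
  }
}
{code}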



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned by the Resource Manager as a response to the Application Master heartbeat

2017-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257452#comment-16257452
 ] 

ASF GitHub Bot commented on YARN-6483:
--

Github user juanrh commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/289#discussion_r151770833
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
 ---
@@ -1160,6 +1160,11 @@ public void transition(RMNodeImpl rmNode, 
RMNodeEvent event) {
   // Update NM metrics during graceful decommissioning.
   rmNode.updateMetricsForGracefulDecommission(initState, finalState);
   rmNode.decommissioningTimeout = timeout;
+  // Notify NodesListManager to notify all RMApp so that each 
Application Master
+  // could take any required actions.
+  rmNode.context.getDispatcher().getEventHandler().handle(
+  new NodesListManagerEvent(
+  NodesListManagerEventType.NODE_USABLE, rmNode));
--- End diff --

I have just attached the patch. Thanks a lot for taking a look!


> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned by the Resource Manager as a response to the Application Master 
> heartbeat
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Juan Rodríguez Hortalá
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched in those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.
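
For illustration, an application master could consume the proposed information from 
the allocate response roughly like this (sketch only, not part of the attached patch):

{code}
// Blacklist hosts whose nodes are entering DECOMMISSIONING, using the updated
// node reports returned in the AM heartbeat (allocate) response.
for (NodeReport report : allocateResponse.getUpdatedNodes()) {
  if (report.getNodeState() == NodeState.DECOMMISSIONING) {
    blacklistedHosts.add(report.getNodeId().getHost());   // AM-side set (assumed)
  }
}
{code}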



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7513) FindBugs in FSAppAttempt.getWeight()

2017-11-17 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257451#comment-16257451
 ] 

Yufei Gu commented on YARN-7513:


Looking at the callers of {{getWeight()}}, they are in schedulable sorting 
and fair-share computation, both of which require the queue lock. The demand update 
also requires the queue lock. In that sense, it is safe to remove the lock. Even for 
any new callers in the future, we would likely need the lock of {{FSAppAttempt}} 
instead of the lock of {{FairScheduler}}.
So, +1. 
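
For illustration, the direction discussed above would look roughly like this (sketch 
only; accessor names are assumed and may differ from the attached patch):

{code}
// Compute the app weight without taking the FairScheduler lock; callers
// (schedulable sorting, fair-share computation) already hold the queue lock.
float getWeight() {
  double weight = 1.0;
  if (scheduler.isSizeBasedWeight()) {          // assumed accessor
    weight = Math.log1p(getDemand().getMemorySize()) / Math.log(2);
  }
  return (float) (weight * getPriority().getPriority());
}
{code}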

> FindBugs in FSAppAttempt.getWeight()
> 
>
> Key: YARN-7513
> URL: https://issues.apache.org/jira/browse/YARN-7513
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: fairscheduler
>Affects Versions: 3.1.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Wilfred Spiegelenburg
>Priority: Minor
> Attachments: YARN-7513.001.patch
>
>
> With the change from YARN-7414 a new FindBugs warning was introduced.
> The code that was moved from the FairScheduler to the FSAppAttempt can also 
> be simplified by removing the unneeded locking.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned by the Resource Manager as a response to the Application Master heartbeat

2017-11-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Juan Rodríguez Hortalá updated YARN-6483:
-
Attachment: YARN-6483.002.patch

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned by the Resource Manager as a response to the Application Master 
> heartbeat
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Juan Rodríguez Hortalá
> Attachments: YARN-6483-v1.patch, YARN-6483.002.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched in those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7506) Overhaul the design of the Linux container-executor regarding Docker and future runtimes

2017-11-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257446#comment-16257446
 ] 

Eric Yang commented on YARN-7506:
-

[~miklos.szeg...@cloudera.com] We agree that trust-and-verify is the way to go.  
YARN-6623 was a good lesson, and there is a structured payload defined for the YARN 
Java process to talk to the container-executor.  It would be best if the data models 
were passed using JSON or protobuf to reduce serialization errors.  
If the workflow is identified correctly, the YARN Java process and the root program 
can use a protocol-based approach to compute the required validations.  This minimizes 
the interactions between root and YARN.  The Hadoop community can keep fine-grained 
control over the protocol.
There might be no need for a separate root Java process.  Thoughts?
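
For example, a structured payload could be as simple as a small POJO serialized with 
Jackson before it is handed to the privileged helper (class and field names below are 
invented purely for illustration):

{code}
import java.io.IOException;
import java.util.List;

import com.fasterxml.jackson.databind.ObjectMapper;

// Illustrative only: a structured command description the YARN side could
// serialize and the privileged side could validate field by field.
public class DockerRunPayload {
  public String image;
  public String containerId;
  public List<String> mounts;

  public String toJson() throws IOException {
    return new ObjectMapper().writeValueAsString(this);
  }
}
{code}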


> Overhaul the design of the Linux container-executor regarding Docker and 
> future runtimes
> 
>
> Key: YARN-7506
> URL: https://issues.apache.org/jira/browse/YARN-7506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Miklos Szegedi
>  Labels: Docker, container-executor
> Attachments: YARN-Docker control options.pdf
>
>
> I raise this topic to discuss a potential improvement of the container 
> executor tool in node manager.
> container-executor has two main purposes. It executes Linux *system calls not 
> available from Java*, and it executes tasks *available to root that are not 
> available to the yarn user*. Historically container-executor did both by 
> doing impersonation. The yarn user is separated from root because it runs 
> network services, so *the yarn user should be restricted* by design. Because 
> of this it has its own config file container-executor.cfg writable by root 
> only that specifies what actions are allowed for the yarn user. However, the 
> requirements have changed with Docker and that raises the following questions:
> 1. The Docker feature of YARN requires root permissions to *access the Docker 
> socket* but it does not run any system calls, so could the Docker related 
> code in container-executor be *refactored into a separate Java process ran as 
> root*? Java would make the development much faster and more secure. 
> 2. The Docker feature only needs the Docker unix socket. It is not a good 
> idea to let the yarn user directly access the socket, since that would 
> elevate its privileges to root. However, the Java tool running as root 
> mentioned in the previous question could act as a *proxy on the Docker 
> socket* operating directly on the Docker REST API *eliminating the need to 
> use the Docker CLI*. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7525) Incorrect query parameters in cluster nodes REST API document

2017-11-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257443#comment-16257443
 ] 

Wangda Tan commented on YARN-7525:
--

Thanks [~Tao Yang], patch looks good. [~bibinchundatt], please proceed with 
patch commit whenever you think it is ready.

> Incorrect query parameters in cluster nodes REST API document
> -
>
> Key: YARN-7525
> URL: https://issues.apache.org/jira/browse/YARN-7525
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0-alpha4, 2.9.1
>Reporter: Tao Yang
>Assignee: Tao Yang
>Priority: Minor
> Attachments: YARN-7525.001.patch, YARN-7525.002.patch
>
>
> Recently we used the cluster nodes REST API and found that the query parameters 
> (state and healthy) in the document do not exist.
> The query parameters currently in the document are: 
> {noformat}
>   * state - the state of the node
>   * healthy - true or false 
> {noformat}
> The correct query parameters should be:
> {noformat}
>   * states - the states of the node, specified as a comma-separated list.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7527) Over-allocate node resource in async-scheduling mode of CapacityScheduler

2017-11-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257440#comment-16257440
 ] 

Wangda Tan commented on YARN-7527:
--

[~Tao Yang], fix looks good to me. And test failures are not related. Will 
commit the patch by end of today if no objections.

> Over-allocate node resource in async-scheduling mode of CapacityScheduler
> -
>
> Key: YARN-7527
> URL: https://issues.apache.org/jira/browse/YARN-7527
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.0.0-alpha4, 2.9.1
>Reporter: Tao Yang
>Assignee: Tao Yang
> Attachments: YARN-7527.001.patch
>
>
> Currently in async-scheduling mode of CapacityScheduler, node resource may be 
> over-allocated since the node resource check is ignored.
> {{FiCaSchedulerApp#commonCheckContainerAllocation}} will check whether this 
> node has enough available resources for this proposal and return the check result 
> (true/false), but this result is ignored in {{CapacityScheduler#accept}} as 
> below.
> {noformat}
> commonCheckContainerAllocation(allocation, schedulerContainer);
> {noformat}
> If {{FiCaSchedulerApp#commonCheckContainerAllocation}} returns false, 
> {{CapacityScheduler#accept}} should also return false as below:
> {noformat}
> if (!commonCheckContainerAllocation(allocation, schedulerContainer)) {
>   return false;
> }
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7470) TestYarnNativeServices testRecoverComponentsAfterRMRestart fails due to BindingException

2017-11-17 Thread Chandni Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257427#comment-16257427
 ] 

Chandni Singh commented on YARN-7470:
-

I need to verify if using HATestUtil is necessary here. The test is not 
starting 2 RMs but restarting the same RM. 
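For background, a minimal sketch (not the attached patch; the class name is invented) of the usual way tests avoid this kind of BindException: bind to port 0 so the kernel picks a free ephemeral port, then read back the port actually chosen instead of reusing a hard-coded address.
{code:java}
import java.net.InetSocketAddress;
import java.net.ServerSocket;

public class EphemeralPortExample {
  public static void main(String[] args) throws Exception {
    try (ServerSocket socket = new ServerSocket()) {
      socket.setReuseAddress(true);
      // port 0 asks the OS for any free port, so a restarted server never
      // races with an address that is already in use
      socket.bind(new InetSocketAddress(0));
      System.out.println("bound to free port " + socket.getLocalPort());
    }
  }
}
{code}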

> TestYarnNativeServices testRecoverComponentsAfterRMRestart fails due to 
> BindingException
> 
>
> Key: YARN-7470
> URL: https://issues.apache.org/jira/browse/YARN-7470
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7470.001.patch
>
>
>  FAILURE! - in org.apache.hadoop.yarn.service.TestYarnNativeServices
> testRecoverComponentsAfterRMRestart(org.apache.hadoop.yarn.service.TestYarnNativeServices)
>   Time elapsed: 3.404 sec  <<< ERROR!
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [10.22.8.166:52637] 
> java.net.BindException: Can't assign requested address; For more details see: 
>  http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:545)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:1035)
>   at org.apache.hadoop.ipc.Server.(Server.java:2807)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:960)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned by the Resource Manager as a response to the Application Master heartbeat

2017-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257426#comment-16257426
 ] 

ASF GitHub Bot commented on YARN-6483:
--

Github user xslogic commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/289#discussion_r151767367
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
 ---
@@ -1160,6 +1160,11 @@ public void transition(RMNodeImpl rmNode, 
RMNodeEvent event) {
   // Update NM metrics during graceful decommissioning.
   rmNode.updateMetricsForGracefulDecommission(initState, finalState);
   rmNode.decommissioningTimeout = timeout;
+  // Notify NodesListManager to notify all RMApp so that each 
Application Master
+  // could take any required actions.
+  rmNode.context.getDispatcher().getEventHandler().handle(
+  new NodesListManagerEvent(
+  NodesListManagerEventType.NODE_USABLE, rmNode));
--- End diff --

Thanks - looking at the patch. Can you also attach a consolidated patch on 
the JIRA, so as to kick Jenkins?


> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned by the Resource Manager as a response to the Application Master 
> heartbeat
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Juan Rodríguez Hortalá
> Attachments: YARN-6483-v1.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched in those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned by the Resource Manager as a response to the Application Master heartbeat

2017-11-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257393#comment-16257393
 ] 

ASF GitHub Bot commented on YARN-6483:
--

Github user juanrh commented on a diff in the pull request:

https://github.com/apache/hadoop/pull/289#discussion_r151762680
  
--- Diff: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/rmnode/RMNodeImpl.java
 ---
@@ -1160,6 +1160,11 @@ public void transition(RMNodeImpl rmNode, 
RMNodeEvent event) {
   // Update NM metrics during graceful decommissioning.
   rmNode.updateMetricsForGracefulDecommission(initState, finalState);
   rmNode.decommissioningTimeout = timeout;
+  // Notify NodesListManager to notify all RMApp so that each 
Application Master
+  // could take any required actions.
+  rmNode.context.getDispatcher().getEventHandler().handle(
+  new NodesListManagerEvent(
+  NodesListManagerEventType.NODE_USABLE, rmNode));
--- End diff --

Makes sense. I have added a new commit that adds a new value 
`NODE_DECOMMISSIONING` to `NodesListManagerEventType`.
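For readers following along, a hedged sketch of the revised transition with that event type (this is the diff above with the event constant swapped per the discussion, not necessarily the merged code):
{code:java}
// notify NodesListManager so every RMApp, and therefore every AM,
// learns about the node entering DECOMMISSIONING
rmNode.context.getDispatcher().getEventHandler().handle(
    new NodesListManagerEvent(
        NodesListManagerEventType.NODE_DECOMMISSIONING, rmNode));
{code}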


> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned by the Resource Manager as a response to the Application Master 
> heartbeat
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Juan Rodríguez Hortalá
> Attachments: YARN-6483-v1.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched in those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5673) [Umbrella] Re-write container-executor to improve security, extensibility, and portability

2017-11-17 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257394#comment-16257394
 ] 

Miklos Szegedi commented on YARN-5673:
--

[~eyang], thank you for the explanation. Indeed, these might need root 
privileges.


> [Umbrella] Re-write container-executor to improve security, extensibility, 
> and portability
> --
>
> Key: YARN-5673
> URL: https://issues.apache.org/jira/browse/YARN-5673
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: container-executor Re-write Design Document.pdf
>
>
> As YARN adds support for new features that require administrator 
> privileges(such as support for network throttling and docker), we’ve had to 
> add new capabilities to the container-executor. This has led to a recognition 
> that the current container-executor security features as well as the code 
> could be improved. The current code is fragile and it’s hard to add new 
> features without causing regressions. Some of the improvements that need to 
> be made are -
> *Security*
> Currently the container-executor has limited security features. It relies 
> primarily on the permissions set on the binary but does little additional 
> security beyond that. There are a few outstanding issues today -
> - No audit log
> - No way to disable features - network throttling and docker support are 
> built in and there’s no way to turn them off at a container-executor level
> - Code can be improved - a lot of the code switches users back and forth in 
> an arbitrary manner
> - No input validation - the paths, and files provided at invocation are not 
> validated or required to be in some specific location
> - No signing functionality - there is no way to enforce that the binary was 
> invoked by the NM and not by any other process
> *Code Issues*
> The code layout and implementation themselves can be improved. Some issues 
> there are -
> - No support for log levels - everything is logged and this can’t be turned 
> on or off
> - Extremely long set of invocation parameters(specifically during container 
> launch) which makes turning features on or off complicated
> - Poor test coverage - it’s easy to introduce regressions today due to the 
> lack of a proper test setup
> - Duplicate functionality - there is some amount of code duplication
> - Hard to make improvements or add new features due to the issues raised above
> *Portability*
>  - The container-executor mixes platform dependent APIs with platform 
> independent APIs making it hard to run it on multiple platforms. Allowing it 
> to run on multiple platforms also improves the overall code structure .
> One option is to improve the existing container-executor, however it might be 
> easier to start from scratch. That allows existing functionality to be 
> supported until we are ready to switch to the new code.
> This umbrella JIRA is to capture all the work required for the new code. I'm 
> going to work on a design doc for the changes - any suggestions or 
> improvements are welcome.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7274) Ability to disable elasticity at leaf queue level

2017-11-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257386#comment-16257386
 ] 

Wangda Tan commented on YARN-7274:
--

Thanks [~jlowe], 

bq. The only thing we should care about is absolute capacity <= absolute max 
capacity otherwise the absolute max capacity is no longer an absolute maximum. 
This is exactly what we plan to do, thanks!

> Ability to disable elasticity at leaf queue level
> -
>
> Key: YARN-7274
> URL: https://issues.apache.org/jira/browse/YARN-7274
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Scott Brokaw
>Assignee: Zian Chen
>
> The 
> [documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html]
>  defines yarn.scheduler.capacity..maximum-capacity as "Maximum 
> queue capacity in percentage (%) as a float. This limits the elasticity for 
> applications in the queue. Defaults to -1 which disables it."
> However, setting this value to -1 sets maximum capacity to 100% but I thought 
> (perhaps incorrectly) that the intention of the -1 setting is that it would 
> disable elasticity.  This is confirmed looking at the code:
> {code:java}
> public static final float MAXIMUM_CAPACITY_VALUE = 100;
> public static final float DEFAULT_MAXIMUM_CAPACITY_VALUE = -1.0f;
> ..
> maxCapacity = (maxCapacity == DEFAULT_MAXIMUM_CAPACITY_VALUE) ? 
> MAXIMUM_CAPACITY_VALUE : maxCapacity;
> {code}
> The sum of yarn.scheduler.capacity..capacity for all queues, at 
> each level, must be equal to 100 but for 
> yarn.scheduler.capacity..maximum-capacity this value is actually 
> a percentage of the entire cluster, not just the parent queue.  Yet it cannot 
> be set lower than the leaf queue's capacity setting. This seems to make it 
> impossible to disable elasticity at the leaf queue level.
> This improvement is proposing that YARN have the ability to have elasticity 
> disabled at a leaf queue level even if a parent queue permits elasticity by 
> having a yarn.scheduler.capacity..maximum-capacity greater than 
> its yarn.scheduler.capacity..capacity
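To make the semantics described above concrete, a hedged capacity-scheduler.xml fragment (queue name and percentages invented): under the cluster-wide interpretation of maximum-capacity, pinning it to the queue's absolute guaranteed share is the closest available approximation of disabling elasticity, and it only lines up cleanly for queues directly under root, which is exactly the limitation this improvement targets.
{noformat}
<!-- a leaf queue guaranteed 30% of the cluster -->
<property>
  <name>yarn.scheduler.capacity.root.analytics.capacity</name>
  <value>30</value>
</property>
<!-- maximum-capacity is read against the entire cluster, not the parent,
     so 30 caps the queue at its guaranteed share (no elasticity) -->
<property>
  <name>yarn.scheduler.capacity.root.analytics.maximum-capacity</name>
  <value>30</value>
</property>
{noformat}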



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7496) CS Intra-queue preemption user-limit calculations are not in line with LeafQueue user-limit calculations

2017-11-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257384#comment-16257384
 ] 

Wangda Tan commented on YARN-7496:
--

[~eepayne], thanks for filing the JIRA and working on the fix, change makes 
sense to me. 

> CS Intra-queue preemption user-limit calculations are not in line with 
> LeafQueue user-limit calculations
> 
>
> Key: YARN-7496
> URL: https://issues.apache.org/jira/browse/YARN-7496
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.2
>Reporter: Eric Payne
>Assignee: Eric Payne
> Attachments: YARN-7496.001.branch-2.8.patch
>
>
> Only a problem in 2.8.
> Preemption could oscillate due to the difference in how user limit is 
> calculated between 2.8 and later releases.
> Basically (ignoring ULF, MULP, and maybe others), the calculation for user 
> limit on the Capacity Scheduler side in 2.8 is {{total used resources / 
> number of active users}} while the calculation in later releases is {{total 
> active resources / number of active users}}. When intra-queue preemption was 
> backported to 2.8, its calculations for user limit were more aligned with 
> the latter algorithm, which is in 2.9 and later releases.
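A hedged toy example of the divergence (all numbers invented; ULF, MULP, etc. ignored as above): suppose a queue has 30 GB of active resources, 20 GB currently used, and 2 active users.
{noformat}
2.8 LeafQueue style:            user limit = used resources   / active users = 20 GB / 2 = 10 GB
2.9+ / backported preemption:   user limit = active resources / active users = 30 GB / 2 = 15 GB
{noformat}
The scheduler and the intra-queue preemption monitor then disagree about each user's entitlement, which is the kind of mismatch that lets preemption oscillate.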



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7506) Overhaul the design of the Linux container-executor regarding Docker and future runtimes

2017-11-17 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7506:
-
Issue Type: Sub-task  (was: Wish)
Parent: YARN-5673

> Overhaul the design of the Linux container-executor regarding Docker and 
> future runtimes
> 
>
> Key: YARN-7506
> URL: https://issues.apache.org/jira/browse/YARN-7506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Miklos Szegedi
>  Labels: Docker, container-executor
> Attachments: YARN-Docker control options.pdf
>
>
> I raise this topic to discuss a potential improvement of the container 
> executor tool in node manager.
> container-executor has two main purposes. It executes Linux *system calls not 
> available from Java*, and it executes tasks *available to root that are not 
> available to the yarn user*. Historically container-executor did both by 
> doing impersonation. The yarn user is separated from root because it runs 
> network services, so *the yarn user should be restricted* by design. Because 
> of this it has its own config file container-executor.cfg writable by root 
> only that specifies what actions are allowed for the yarn user. However, the 
> requirements have changed with Docker and that raises the following questions:
> 1. The Docker feature of YARN requires root permissions to *access the Docker 
> socket* but it does not run any system calls, so could the Docker related 
> code in container-executor be *refactored into a separate Java process run as 
> root*? Java would make the development much faster and more secure. 
> 2. The Docker feature only needs the Docker unix socket. It is not a good 
> idea to let the yarn user directly access the socket, since that would 
> elevate its privileges to root. However, the Java tool running as root 
> mentioned in the previous question could act as a *proxy on the Docker 
> socket* operating directly on the Docker REST API *eliminating the need to 
> use the Docker CLI*. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7506) Overhaul the design of the Linux container-executor regarding Docker and future runtimes

2017-11-17 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7506:
-
Attachment: (was: YARN-Docker control options.pdf)

> Overhaul the design of the Linux container-executor regarding Docker and 
> future runtimes
> 
>
> Key: YARN-7506
> URL: https://issues.apache.org/jira/browse/YARN-7506
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: nodemanager
>Reporter: Miklos Szegedi
>  Labels: Docker, container-executor
> Attachments: YARN-Docker control options.pdf
>
>
> I raise this topic to discuss a potential improvement of the container 
> executor tool in node manager.
> container-executor has two main purposes. It executes Linux *system calls not 
> available from Java*, and it executes tasks *available to root that are not 
> available to the yarn user*. Historically container-executor did both by 
> doing impersonation. The yarn user is separated from root because it runs 
> network services, so *the yarn user should be restricted* by design. Because 
> of this it has its own config file container-executor.cfg writable by root 
> only that specifies what actions are allowed for the yarn user. However, the 
> requirements have changed with Docker and that raises the following questions:
> 1. The Docker feature of YARN requires root permissions to *access the Docker 
> socket* but it does not run any system calls, so could the Docker related 
> code in container-executor be *refactored into a separate Java process run as 
> root*? Java would make the development much faster and more secure. 
> 2. The Docker feature only needs the Docker unix socket. It is not a good 
> idea to let the yarn user directly access the socket, since that would 
> elevate its privileges to root. However, the Java tool running as root 
> mentioned in the previous question could act as a *proxy on the Docker 
> socket* operating directly on the Docker REST API *eliminating the need to 
> use the Docker CLI*. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6483) Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes returned by the Resource Manager as a response to the Application Master heartbeat

2017-11-17 Thread Brad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257379#comment-16257379
 ] 

Brad commented on YARN-6483:


I think this would be useful for another Spark change I am working on, 
SPARK-21097. Looking forward to seeing how this turns out.

> Add nodes transitioning to DECOMMISSIONING state to the list of updated nodes 
> returned by the Resource Manager as a response to the Application Master 
> heartbeat
> 
>
> Key: YARN-6483
> URL: https://issues.apache.org/jira/browse/YARN-6483
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: resourcemanager
>Affects Versions: 2.8.0
>Reporter: Juan Rodríguez Hortalá
> Attachments: YARN-6483-v1.patch
>
>
> The DECOMMISSIONING node state is currently used as part of the graceful 
> decommissioning mechanism to give time for tasks to complete in a node that 
> is scheduled for decommission, and for reducer tasks to read the shuffle 
> blocks in that node. Also, YARN effectively blacklists nodes in 
> DECOMMISSIONING state by assigning them a capacity of 0, to prevent 
> additional containers from being launched in those nodes, so no more shuffle 
> blocks are written to the node. This blacklisting is not effective for 
> applications like Spark, because a Spark executor running in a YARN container 
> will keep receiving more tasks after the corresponding node has been 
> blacklisted at the YARN level. We would like to propose a modification of the 
> YARN heartbeat mechanism so nodes transitioning to DECOMMISSIONING are added 
> to the list of updated nodes returned by the Resource Manager as a response 
> to the Application Master heartbeat. This way a Spark application master 
> would be able to blacklist a DECOMMISSIONING node at the Spark level.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7506) Overhaul the design of the Linux container-executor regarding Docker and future runtimes

2017-11-17 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7506:
-
Attachment: YARN-Docker control options.pdf

> Overhaul the design of the Linux container-executor regarding Docker and 
> future runtimes
> 
>
> Key: YARN-7506
> URL: https://issues.apache.org/jira/browse/YARN-7506
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: nodemanager
>Reporter: Miklos Szegedi
>  Labels: Docker, container-executor
> Attachments: YARN-Docker control options.pdf
>
>
> I raise this topic to discuss a potential improvement of the container 
> executor tool in node manager.
> container-executor has two main purposes. It executes Linux *system calls not 
> available from Java*, and it executes tasks *available to root that are not 
> available to the yarn user*. Historically container-executor did both by 
> doing impersonation. The yarn user is separated from root because it runs 
> network services, so *the yarn user should be restricted* by design. Because 
> of this it has its own config file container-executor.cfg writable by root 
> only that specifies what actions are allowed for the yarn user. However, the 
> requirements have changed with Docker and that raises the following questions:
> 1. The Docker feature of YARN requires root permissions to *access the Docker 
> socket* but it does not run any system calls, so could the Docker related 
> code in container-executor be *refactored into a separate Java process run as 
> root*? Java would make the development much faster and more secure. 
> 2. The Docker feature only needs the Docker unix socket. It is not a good 
> idea to let the yarn user directly access the socket, since that would 
> elevate its privileges to root. However, the Java tool running as root 
> mentioned in the previous question could act as a *proxy on the Docker 
> socket* operating directly on the Docker REST API *eliminating the need to 
> use the Docker CLI*. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7528) Need to document that resource types that use units need to be defined at RM level and NM level

2017-11-17 Thread Grant Sohn (JIRA)
Grant Sohn created YARN-7528:


 Summary: Need to document that resource types that use units need 
to be defined at RM level and NM level
 Key: YARN-7528
 URL: https://issues.apache.org/jira/browse/YARN-7528
 Project: Hadoop YARN
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Grant Sohn
Assignee: Daniel Templeton


When the unit is not defined in the RM, the LONG_MAX default will overflow in 
the conversion step.
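A hedged illustration of the paired configuration the documentation should call out (resource name, unit, and values are invented; property names follow the Hadoop 3.x resource-types documentation, so verify them against the release):
{noformat}
<!-- resource-types.xml on the ResourceManager -->
<property>
  <name>yarn.resource-types</name>
  <value>resource1</value>
</property>
<property>
  <name>yarn.resource-types.resource1.units</name>
  <value>Gi</value>
</property>

<!-- node-resources.xml on every NodeManager: declare the same resource with the same unit -->
<property>
  <name>yarn.nodemanager.resource-type.resource1</name>
  <value>4Gi</value>
</property>
{noformat}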



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7506) Overhaul the design of the Linux container-executor regarding Docker and future runtimes

2017-11-17 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257375#comment-16257375
 ] 

Miklos Szegedi commented on YARN-7506:
--

Thank you for the comments.
[~ebadger], the main reason a Java root process is more secure than 
container-executor is that it protects against exploitable buffer overflows, 
which is why I raised the suggestion. I was not sure why this approach was not 
followed before, which is why I filed this JIRA. It is also easier to use for 
most Hadoop developers, as you mentioned.
[~vinodkv], this JIRA already builds on the experience of YARN-6623; I would 
rather consider it a subtask of YARN-5673, if it is considered at all. Now that 
you mention it, a possible solution for YARN-5673 that takes this (YARN-7506) 
suggestion into account would be a root Java-based container-executor framework 
that loads Java or native C modules. However, Docker has its own unique design 
with the CLI and the socket and no native system call dependencies, so it could 
be handled separately.
bq. Side note: One more important consideration in the container-executor 
design was to not have long running root processes as it may increase the 
attack scope. Assuming that is still intact.
Suggestion 1 above does not require any long-running root process. Suggestion 2 
does; however, the only surface would be the proxied docker socket and a config 
file that is protected with file system permissions, just like the 
container-executor executable.
[~eyang]
bq. Both docker and hadoop use "trusted" users...
I have to point to the principle of defense in depth. Under defense in depth 
there is no trusted user: every input is evil and each component 
(container-executor in this case) has to do its own proper error checking.
bq. YARN user tap directly into docker.sock goes against our original 
philosophy of having both "trusted" user and root to perform validation.
Indeed. I agree.
bq. Root power may be used for validation logic when trusted user can not 
validate, such as symlink to local file system access that YARN-6623 solved.
Indeed, and I would mention volume whitelists and blacklists, which the yarn 
user cannot validate because of the defense-in-depth rule.
bq. We can consider to keep most of logic in Java as long as root privileges is 
not required.
I disagree here. Most of the functionality that YARN-6623 implemented requires 
that root does the validation, so if done in Java, it should be in a Java root 
process.
bq. The performance gain from tapping into docker socket is saving the cost of 
one fork but we would lose a lot of validations done by docker CLI.
The validations are important indeed, but validation is much more difficult on 
command-line options than on easily parseable JSON, as the recent issues showed.
bq. If it can be helped, calling root cli is preferred than calling root owned 
network socket.
There is a solution for that. We could still use the CLI from the Java node 
manager running as yarn, pointed at a unix socket writable by yarn that is 
proxied and security-filtered by a root Java process running in the background, 
which in turn talks to the original socket. (See the attached diagram.)
bq. I don't fully agree with YARN-5673 modules API design. The description is 
another plug-in architecture to enable more functionality with root power. I 
think this is a slippy slope to enable more risks in container-executor.
I agree, I also raised my concerns there.
bq. It is best to avoid running java as root. Java runtime includes a lot of 
third party code, which can be unpredictable with root power.
That is a risk. I would minimize the number of non-JDK dependencies if a Java 
root process is chosen. I still think it may be more favorable in this case.

I summarized the options in the attached diagram. It shows which one is the 
simplest.
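To illustrate the "REST API over the socket" idea discussed above, a minimal sketch that assumes a JDK 16+ runtime for the unix-domain socket API (the socket path and Docker API version are assumptions, and a real proxy would add authorization, request filtering, and proper response handling):
{code:java}
import java.net.StandardProtocolFamily;
import java.net.UnixDomainSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class DockerSocketSketch {
  public static void main(String[] args) throws Exception {
    // talk straight to the Docker unix socket instead of shelling out to the docker CLI
    UnixDomainSocketAddress dockerSock = UnixDomainSocketAddress.of("/var/run/docker.sock");
    try (SocketChannel ch = SocketChannel.open(StandardProtocolFamily.UNIX)) {
      ch.connect(dockerSock);
      // plain HTTP over the socket: list containers via the Docker REST API
      String request = "GET /v1.24/containers/json HTTP/1.1\r\n"
          + "Host: localhost\r\nConnection: close\r\n\r\n";
      ch.write(ByteBuffer.wrap(request.getBytes(StandardCharsets.US_ASCII)));
      ByteBuffer buf = ByteBuffer.allocate(64 * 1024);
      while (ch.read(buf) != -1 && buf.hasRemaining()) {
        // keep reading until the daemon closes the connection (Connection: close)
      }
      buf.flip();
      System.out.println(StandardCharsets.UTF_8.decode(buf)); // raw HTTP response with JSON body
    }
  }
}
{code}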

> Overhaul the design of the Linux container-executor regarding Docker and 
> future runtimes
> 
>
> Key: YARN-7506
> URL: https://issues.apache.org/jira/browse/YARN-7506
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: nodemanager
>Reporter: Miklos Szegedi
>  Labels: Docker, container-executor
> Attachments: YARN-Docker control options.pdf
>
>
> I raise this topic to discuss a potential improvement of the container 
> executor tool in node manager.
> container-executor has two main purposes. It executes Linux *system calls not 
> available from Java*, and it executes tasks *available to root that are not 
> available to the yarn user*. Historically container-executor did both by 
> doing impersonation. The yarn user is separated from root because it runs 
> network services, so *the yarn user should be restricted* by design. Because 
> of this it has its own config file container-executor.cfg 

[jira] [Updated] (YARN-7506) Overhaul the design of the Linux container-executor regarding Docker and future runtimes

2017-11-17 Thread Miklos Szegedi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Szegedi updated YARN-7506:
-
Attachment: YARN-Docker control options.pdf

> Overhaul the design of the Linux container-executor regarding Docker and 
> future runtimes
> 
>
> Key: YARN-7506
> URL: https://issues.apache.org/jira/browse/YARN-7506
> Project: Hadoop YARN
>  Issue Type: Wish
>  Components: nodemanager
>Reporter: Miklos Szegedi
>  Labels: Docker, container-executor
> Attachments: YARN-Docker control options.pdf
>
>
> I raise this topic to discuss a potential improvement of the container 
> executor tool in node manager.
> container-executor has two main purposes. It executes Linux *system calls not 
> available from Java*, and it executes tasks *available to root that are not 
> available to the yarn user*. Historically container-executor did both by 
> doing impersonation. The yarn user is separated from root because it runs 
> network services, so *the yarn user should be restricted* by design. Because 
> of this it has its own config file container-executor.cfg writable by root 
> only that specifies what actions are allowed for the yarn user. However, the 
> requirements have changed with Docker and that raises the following questions:
> 1. The Docker feature of YARN requires root permissions to *access the Docker 
> socket* but it does not run any system calls, so could the Docker related 
> code in container-executor be *refactored into a separate Java process run as 
> root*? Java would make the development much faster and more secure. 
> 2. The Docker feature only needs the Docker unix socket. It is not a good 
> idea to let the yarn user directly access the socket, since that would 
> elevate its privileges to root. However, the Java tool running as root 
> mentioned in the previous question could act as a *proxy on the Docker 
> socket* operating directly on the Docker REST API *eliminating the need to 
> use the Docker CLI*. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-17 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257366#comment-16257366
 ] 

Arun Suresh commented on YARN-7448:
---

Thanks for the reviews [~kkaranasos] and [~leftnoteasy] and for the patch 
[~pg1...@imperial.ac.uk]
Committing this shortly..

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch, YARN-7448-YARN-6592.007.patch, 
> YARN-7448-YARN-6592.008.patch, YARN-7448-YARN-6592.009.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6124) Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin -refreshQueues

2017-11-17 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257360#comment-16257360
 ] 

Wangda Tan commented on YARN-6124:
--

[~Zian Chen], thanks for investigating this. I personally prefer to reload 
yarn-site.xml when doing refreshQueues and pass it to the scheduler (just like 
refreshing max capacity, etc.). But we should not update the internal 
configuration of AdminService. 

> Make SchedulingEditPolicy can be enabled / disabled / updated with RMAdmin 
> -refreshQueues
> -
>
> Key: YARN-6124
> URL: https://issues.apache.org/jira/browse/YARN-6124
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Wangda Tan
>Assignee: Zian Chen
> Attachments: YARN-6124.wip.1.patch, YARN-6124.wip.2.patch
>
>
> Now enabling / disabling / updating the SchedulingEditPolicy config requires 
> restarting the RM. This is inconvenient when an admin wants to make changes to 
> SchedulingEditPolicies.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7430) Enable user re-mapping for Docker containers by default

2017-11-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257348#comment-16257348
 ] 

Eric Yang commented on YARN-7430:
-

Thank you [~vvasudev].

> Enable user re-mapping for Docker containers by default
> ---
>
> Key: YARN-7430
> URL: https://issues.apache.org/jira/browse/YARN-7430
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: security, yarn
>Affects Versions: 2.9.0, 3.0.0
>Reporter: Eric Yang
>Assignee: Eric Yang
>Priority: Blocker
> Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1
>
> Attachments: YARN-7430.001.patch, YARN-7430.png
>
>
> In YARN-4266, the recommendation was to use -u [uid]:[gid] numeric values to 
> enforce user and group for the running user.  In YARN-6623, this translated 
> to --user=test --group-add=group1.  The code no longer enforces the group 
> correctly for the launched process.  
> In addition, the implementation in YARN-6623 requires the user and group 
> information to exist in the container to translate the username and group to 
> uid/gid.  For users on LDAP, there is no good way to populate the container 
> with user and group information. 
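For context, a hedged side-by-side of the two launch styles discussed above (image name and ids are placeholders):
{noformat}
# numeric form recommended in YARN-4266: works even when the user only exists in LDAP
docker run --rm -u 1005:1006 centos:7 id

# name-based form from YARN-6623: needs the user and group to exist inside the image
docker run --rm --user=test --group-add=group1 centos:7 id
{noformat}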



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5673) [Umbrella] Re-write container-executor to improve security, extensibility, and portability

2017-11-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257315#comment-16257315
 ] 

Eric Yang edited comment on YARN-5673 at 11/17/17 6:11 PM:
---

Hi [~miklos.szeg...@cloudera.com] 
4) Cleaning up a container *might* require root privileges.  For example, a 
docker container may be stuck and need to be removed forcefully using {{docker 
rm -f}} as the root user, or a container may run multi-user code and generate 
local temp files that can only be cleaned up with privileged user access.  This 
might be the preferred method for clean up.

5) Some Linux distributions (e.g. SuSE) prevent a user from identifying which 
PID is attached to which port unless the check runs with root privileges.  It is 
also possible that a PID file is written and readable only by the root user.  
This is the reason we *might* want to give root access to check the container 
process tree and port allocations.

{quote}
I absolutely agree. However, the tool already does this, does not it?
{quote}

I am inclined to agree that a rewrite is not required.  Enhancements and bug 
fixes might be good enough to make this better.  The current messy part is that 
the modularization is based on container type instead of workflow.  There is 
some logic that could have stayed in Java land but was converted to C code.  For 
example, the construction of the docker command can be done in Java.  We can 
pass a list of items to validate and have container-executor report a summary, 
so that the rest of the logic is computed in Java, which then passes 
Java-constructed commands to the next workflow step.  This might be one way to 
make this better without a major rewrite.


was (Author: eyang):
Hi [~miklos.szeg...@cloudera.com] 
4) For clean up container, it *might* require root privileges.  For example, if 
a docker container is stuck, and we need to clean up forcefully using {{docker 
rm-f}} as root user or a container runs multi-user code and generate local temp 
files that can only be cleaned up using privileged user access.  This might be 
the preferred method for clean up.

5) Some Linux (i.e. SuSE) prevents user to identify which PID is attached to 
which port unless the checking is using root privileges.  It is also possible a 
PID file is written and readable by root user only.  This is the reason that we 
*might* want to give root access to check container process tree and ports 
allocation.

{quote}
I absolutely agree. However, the tool already does this, does not it?
{quote}

Agree, rewrite is not required.  Enhancement and bug fix might be good enough 
to make this better.

> [Umbrella] Re-write container-executor to improve security, extensibility, 
> and portability
> --
>
> Key: YARN-5673
> URL: https://issues.apache.org/jira/browse/YARN-5673
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: container-executor Re-write Design Document.pdf
>
>
> As YARN adds support for new features that require administrator 
> privileges(such as support for network throttling and docker), we’ve had to 
> add new capabilities to the container-executor. This has led to a recognition 
> that the current container-executor security features as well as the code 
> could be improved. The current code is fragile and it’s hard to add new 
> features without causing regressions. Some of the improvements that need to 
> be made are -
> *Security*
> Currently the container-executor has limited security features. It relies 
> primarily on the permissions set on the binary but does little additional 
> security beyond that. There are a few outstanding issues today -
> - No audit log
> - No way to disable features - network throttling and docker support are 
> built in and there’s no way to turn them off at a container-executor level
> - Code can be improved - a lot of the code switches users back and forth in 
> an arbitrary manner
> - No input validation - the paths, and files provided at invocation are not 
> validated or required to be in some specific location
> - No signing functionality - there is no way to enforce that the binary was 
> invoked by the NM and not by any other process
> *Code Issues*
> The code layout and implementation themselves can be improved. Some issues 
> there are -
> - No support for log levels - everything is logged and this can’t be turned 
> on or off
> - Extremely long set of invocation parameters(specifically during container 
> launch) which makes turning features on or off complicated
> - Poor test coverage - it’s easy to introduce regressions today due to the 
> lack of a proper test setup
> - Duplicate functionality - there is some amount of code duplication
> - Hard to make improvements or add 

[jira] [Commented] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257327#comment-16257327
 ] 

Hadoop QA commented on YARN-7448:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 3s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
2s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
8s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 11s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
40s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
51s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7448 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898231/YARN-7448-YARN-6592.009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux 012f43b9b5cc 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | YARN-6592 / 24494ae |
| maven | version: Apache Maven 

[jira] [Commented] (YARN-5673) [Umbrella] Re-write container-executor to improve security, extensibility, and portability

2017-11-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257315#comment-16257315
 ] 

Eric Yang commented on YARN-5673:
-

Hi [~miklos.szeg...@cloudera.com] 
4) Cleaning up a container *might* require root privileges.  For example, a 
docker container may be stuck and need to be removed forcefully using {{docker 
rm -f}} as the root user, or a container may run multi-user code and generate 
local temp files that can only be cleaned up with privileged user access.  This 
might be the preferred method for clean up.

5) Some Linux distributions (e.g. SuSE) prevent a user from identifying which 
PID is attached to which port unless the check runs with root privileges.  It is 
also possible that a PID file is written and readable only by the root user.  
This is the reason we *might* want to give root access to check the container 
process tree and port allocations.

{quote}
I absolutely agree. However, the tool already does this, does not it?
{quote}

Agree, a rewrite is not required.  Enhancements and bug fixes might be good 
enough to make this better.

> [Umbrella] Re-write container-executor to improve security, extensibility, 
> and portability
> --
>
> Key: YARN-5673
> URL: https://issues.apache.org/jira/browse/YARN-5673
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: container-executor Re-write Design Document.pdf
>
>
> As YARN adds support for new features that require administrator 
> privileges(such as support for network throttling and docker), we’ve had to 
> add new capabilities to the container-executor. This has led to a recognition 
> that the current container-executor security features as well as the code 
> could be improved. The current code is fragile and it’s hard to add new 
> features without causing regressions. Some of the improvements that need to 
> be made are -
> *Security*
> Currently the container-executor has limited security features. It relies 
> primarily on the permissions set on the binary but does little additional 
> security beyond that. There are a few outstanding issues today -
> - No audit log
> - No way to disable features - network throttling and docker support are 
> built in and there’s no way to turn them off at a container-executor level
> - Code can be improved - a lot of the code switches users back and forth in 
> an arbitrary manner
> - No input validation - the paths, and files provided at invocation are not 
> validated or required to be in some specific location
> - No signing functionality - there is no way to enforce that the binary was 
> invoked by the NM and not by any other process
> *Code Issues*
> The code layout and implementation themselves can be improved. Some issues 
> there are -
> - No support for log levels - everything is logged and this can’t be turned 
> on or off
> - Extremely long set of invocation parameters(specifically during container 
> launch) which makes turning features on or off complicated
> - Poor test coverage - it’s easy to introduce regressions today due to the 
> lack of a proper test setup
> - Duplicate functionality - there is some amount of code duplication
> - Hard to make improvements or add new features due to the issues raised above
> *Portability*
>  - The container-executor mixes platform dependent APIs with platform 
> independent APIs making it hard to run it on multiple platforms. Allowing it 
> to run on multiple platforms also improves the overall code structure .
> One option is to improve the existing container-executor, however it might be 
> easier to start from scratch. That allows existing functionality to be 
> supported until we are ready to switch to the new code.
> This umbrella JIRA is to capture all the work required for the new code. I'm 
> going to work on a design doc for the changes - any suggestions or 
> improvements are welcome.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7218) ApiServer REST API naming convention /ws/v1 is already used in Hadoop v2

2017-11-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257305#comment-16257305
 ] 

Hudson commented on YARN-7218:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13254 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13254/])
YARN-7218.  Decouple YARN Services REST API namespace from RM.  (eyang: rev 
0940e4f692441f16e742666ac925f71a083eab27)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/webapp/WebApps.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/Service.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services-api/src/test/java/org/apache/hadoop/yarn/service/TestApiServer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebApp.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/ReadinessCheck.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/conf/RestApiConstants.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/api/records/Component.java


> ApiServer REST API naming convention /ws/v1 is already used in Hadoop v2
> 
>
> Key: YARN-7218
> URL: https://issues.apache.org/jira/browse/YARN-7218
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7218.001.patch, YARN-7218.002.patch, 
> YARN-7218.003.patch, YARN-7218.004.patch
>
>
> In YARN-6626, there is a desire to have the ability to run the ApiServer REST 
> API in the Resource Manager; this can eliminate the requirement to deploy 
> another daemon service for submitting docker applications.  In YARN-5698, a new 
> UI has been implemented as a separate web application.  There are some problems 
> in the arrangement that can cause conflicts in how Java sessions are being 
> managed.  
> The root context of Resource Manager web application is /ws.  This is hard 
> coded in startWebapp method in ResourceManager.java.  This means all the 
> session management is applied to Web URL of /ws prefix.  /ui2 is independent 
> of /ws context, therefore session management code doesn't apply to /ui2.  
> This could be a session management problem, if servlet based code is going to 
> be introduced into /ui2 web application.
> ApiServer code base is designed as a separate web application.  There is no 
> easy way to inject a separate web application into the same /ws context 
> because ResourceManager is already set up to bind to RMWebServices.  Unless 
> the ApiServer code is moved into RMWebServices, they will not share the same 
> session management.
> The alternate solution is to keep ApiServer prefix URL independent of /ws 
> context.  However, this will be a departure from YARN web services naming 
> convention.  This can be loaded as a separate web application in Resource 
> Manager jetty server.  One possible proposal is /app/v1/services.  This can 
> keep ApiServer code modular and independent from Resource Manager.
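As a rough illustration of the resulting split (host and port are placeholders; the services prefix follows the proposal above, and the final prefix is whatever the committed patch actually registers):
{noformat}
curl http://<rm-host>:8088/app/v1/services        # YARN services API, its own web application
curl http://<rm-host>:8088/ws/v1/cluster/info     # existing RM web services under /ws
{noformat}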



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7119) yarn rmadmin -updateNodeResource should be updated for resource types

2017-11-17 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257286#comment-16257286
 ] 

Daniel Templeton commented on YARN-7119:


bq. Currently if there is any non-alpha characters in units, still we extract 
the value part safely with units as "".

Exactly.  If I ask for "1024 Mi." you're going to quietly give me "1024".  You 
should instead raise the misconfig as an error.

bq. Point no. #11 will ensure arg[2] contains only numeric. But this check is 
required to ensure that key value (with "=" as delimiter) pairs has been passed.

In the new patch (007) my original #12 concern is no longer valid.  I see your 
point now that it didn't apply with 006 either.  I was discounting the regex 
match in the _else if_.
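
To make the expected failure mode concrete, here is a minimal, self-contained 
sketch of strict "name=value[unit]" parsing; the class and method names are 
invented for illustration and this is not the actual rmadmin or 
UnitsConversionUtil code. The point is simply that an argument like 
"memory-mb=1024 Mi." is rejected outright instead of quietly yielding 1024.

{code:java}
// Minimal sketch (illustrative names, not the real YARN parsing code) of
// strict resource-argument validation: the whole argument must match
// <name>=<number>[unit], otherwise we fail fast instead of silently dropping
// an unrecognized unit suffix.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StrictResourceArgParser {

  private static final Pattern KEY_VALUE_WITH_UNIT =
      Pattern.compile("^([\\w.-]+)=(\\d+)([a-zA-Z]*)$");

  public static void parse(String arg) {
    Matcher m = KEY_VALUE_WITH_UNIT.matcher(arg.trim());
    if (!m.matches()) {
      // "memory-mb=1024 Mi." or "memory-mb 1024" never reach value extraction
      throw new IllegalArgumentException(
          "Invalid resource argument '" + arg + "': expected <name>=<value>[unit]");
    }
    System.out.println("resource=" + m.group(1)
        + " value=" + m.group(2) + " unit=" + m.group(3));
  }

  public static void main(String[] args) {
    parse("memory-mb=1024Mi");   // ok: value 1024, unit Mi
    parse("memory-mb=1024 Mi."); // throws instead of quietly returning 1024
  }
}
{code}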

> yarn rmadmin -updateNodeResource should be updated for resource types
> -
>
> Key: YARN-7119
> URL: https://issues.apache.org/jira/browse/YARN-7119
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
> Attachments: YARN-7119.001.patch, YARN-7119.002.patch, 
> YARN-7119.002.patch, YARN-7119.003.patch, YARN-7119.004.patch, 
> YARN-7119.004.patch, YARN-7119.005.patch, YARN-7119.006.patch, 
> YARN-7119.007.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7510) Merge work for YARN-5881

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257275#comment-16257275
 ] 

Hadoop QA commented on YARN-7510:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m  
0s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
9s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
4s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  6s{color} | {color:orange} root: The patch generated 98 new + 2037 
unchanged - 37 fixed = 2135 total (was 2074) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 5 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 24s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m  7s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
40s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 19s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}175m  9s{color} | 
{color:black} {color} |
\\
\\
|| 

[jira] [Commented] (YARN-7218) ApiServer REST API naming convention /ws/v1 is already used in Hadoop v2

2017-11-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257270#comment-16257270
 ] 

Eric Yang commented on YARN-7218:
-

Thank you [~billie.rinaldi].  I just committed this.

> ApiServer REST API naming convention /ws/v1 is already used in Hadoop v2
> 
>
> Key: YARN-7218
> URL: https://issues.apache.org/jira/browse/YARN-7218
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7218.001.patch, YARN-7218.002.patch, 
> YARN-7218.003.patch, YARN-7218.004.patch
>
>
> In YARN-6626, there is a desire to have the ability to run the ApiServer REST 
> API in the Resource Manager; this can eliminate the requirement to deploy 
> another daemon service for submitting docker applications.  In YARN-5698, a 
> new UI has been implemented as a separate web application.  There are some 
> problems in this arrangement that can cause conflicts in how Java sessions 
> are managed.  The root context of the Resource Manager web application is 
> /ws.  This is hard-coded in the startWebapp method in ResourceManager.java.  
> This means all session management is applied to web URLs under the /ws 
> prefix.  /ui2 is independent of the /ws context, therefore session management 
> code doesn't apply to /ui2.  This could become a session management problem 
> if servlet-based code is going to be introduced into the /ui2 web application.
> The ApiServer code base is designed as a separate web application.  There is 
> no easy way to inject a separate web application into the same /ws context 
> because the ResourceManager is already set up to bind to RMWebServices.  
> Unless the ApiServer code is moved into RMWebServices, they will not share 
> the same session management.
> The alternative solution is to keep the ApiServer prefix URL independent of 
> the /ws context.  However, this will be a departure from the YARN web 
> services naming convention.  It can be loaded as a separate web application 
> in the Resource Manager jetty server.  One possible proposal is 
> /app/v1/services.  This keeps the ApiServer code modular and independent of 
> the Resource Manager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5673) [Umbrella] Re-write container-executor to improve security, extensibility, and portability

2017-11-17 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5673?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257268#comment-16257268
 ] 

Miklos Szegedi commented on YARN-5673:
--

[~eyang], Thank you for your comment. I have a few questions.
Could you elaborate on why cleaning up a container (step 4) needs to be written 
in C? Similarly, I do not think step 5 mentioned above requires it either. I 
agree that only steps that need root privileges or system calls need to be 
here; everything else could go to the node manager or a Java process run as 
root. I even have some concerns about whether the docker code needs to reside 
in the container-executor at all. See YARN-7506 for reference and further 
discussion.
bq. We should try to contain root power by minimize the workflow and type that 
we support.
I absolutely agree. However, the tool already does this, doesn't it?

> [Umbrella] Re-write container-executor to improve security, extensibility, 
> and portability
> --
>
> Key: YARN-5673
> URL: https://issues.apache.org/jira/browse/YARN-5673
> Project: Hadoop YARN
>  Issue Type: New Feature
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: container-executor Re-write Design Document.pdf
>
>
> As YARN adds support for new features that require administrator 
> privileges(such as support for network throttling and docker), we’ve had to 
> add new capabilities to the container-executor. This has led to a recognition 
> that the current container-executor security features as well as the code 
> could be improved. The current code is fragile and it’s hard to add new 
> features without causing regressions. Some of the improvements that need to 
> be made are -
> *Security*
> Currently the container-executor has limited security features. It relies 
> primarily on the permissions set on the binary but does little additional 
> security beyond that. There are a few outstanding issues today -
> - No audit log
> - No way to disable features - network throttling and docker support are 
> built in and there’s no way to turn them off at a container-executor level
> - Code can be improved - a lot of the code switches users back and forth in 
> an arbitrary manner
> - No input validation - the paths, and files provided at invocation are not 
> validated or required to be in some specific location
> - No signing functionality - there is no way to enforce that the binary was 
> invoked by the NM and not by any other process
> *Code Issues*
> The code layout and implementation themselves can be improved. Some issues 
> there are -
> - No support for log levels - everything is logged and this can’t be turned 
> on or off
> - Extremely long set of invocation parameters(specifically during container 
> launch) which makes turning features on or off complicated
> - Poor test coverage - it’s easy to introduce regressions today due to the 
> lack of a proper test setup
> - Duplicate functionality - there is some amount of code duplication
> - Hard to make improvements or add new features due to the issues raised above
> *Portability*
>  - The container-executor mixes platform-dependent APIs with 
> platform-independent APIs, making it hard to run on multiple platforms. 
> Allowing it to run on multiple platforms also improves the overall code 
> structure.
> One option is to improve the existing container-executor, however it might be 
> easier to start from scratch. That allows existing functionality to be 
> supported until we are ready to switch to the new code.
> This umbrella JIRA is to capture all the work required for the new code. I'm 
> going to work on a design doc for the changes - any suggestions or 
> improvements are welcome.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7470) TestYarnNativeServices testRecoverComponentsAfterRMRestart fails due to BindingException

2017-11-17 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257254#comment-16257254
 ] 

Eric Yang commented on YARN-7470:
-

[~csingh] Are we at risk of introducing circular dependencies?  Service-core is 
used by the Resource Manager, but the current project setup is to build the 
Resource Manager first, then Service core; the loading of Service core is done 
through reflection.  If we import the resource manager test jar into service 
core, it becomes a circular dependency.  Is it possible to move HATestUtil to 
hadoop-yarn-server-common to avoid the circular dependency?

[~jianhe] [~billie.rinaldi] At some point, we will need to re-organize the 
service project setup to make sure circular dependencies don't get introduced.

> TestYarnNativeServices testRecoverComponentsAfterRMRestart fails due to 
> BindingException
> 
>
> Key: YARN-7470
> URL: https://issues.apache.org/jira/browse/YARN-7470
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chandni Singh
>Assignee: Chandni Singh
> Fix For: yarn-native-services
>
> Attachments: YARN-7470.001.patch
>
>
>  FAILURE! - in org.apache.hadoop.yarn.service.TestYarnNativeServices
> testRecoverComponentsAfterRMRestart(org.apache.hadoop.yarn.service.TestYarnNativeServices)
>   Time elapsed: 3.404 sec  <<< ERROR!
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.net.BindException: Problem binding to [10.22.8.166:52637] 
> java.net.BindException: Can't assign requested address; For more details see: 
>  http://wiki.apache.org/hadoop/BindException
>   at sun.nio.ch.Net.bind0(Native Method)
>   at sun.nio.ch.Net.bind(Net.java:433)
>   at sun.nio.ch.Net.bind(Net.java:425)
>   at 
> sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
>   at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:545)
>   at org.apache.hadoop.ipc.Server$Listener.(Server.java:1035)
>   at org.apache.hadoop.ipc.Server.(Server.java:2807)
>   at org.apache.hadoop.ipc.RPC$Server.(RPC.java:960)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6942) Add examples for placement constraints usage in applications

2017-11-17 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis reassigned YARN-6942:


Assignee: Panagiotis Garefalakis

> Add examples for placement constraints usage in applications
> 
>
> Key: YARN-6942
> URL: https://issues.apache.org/jira/browse/YARN-6942
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Konstantinos Karanasos
>Assignee: Panagiotis Garefalakis
>
> This JIRA will include examples of how the new {{PlacementConstraints}} API 
> can be used by various applications.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-17 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257245#comment-16257245
 ] 

Konstantinos Karanasos commented on YARN-7448:
--

[~asuresh], I see your points regarding the abstract methods and the equals. We 
can further discuss this offline, but if we have reached consensus that this 
should be the way to do it across the project, then it's fine with me (that's 
what I was trying to make sure of).

Re: the test, I saw the existing one -- I was talking about an extra one to 
better show the usage of the new methods through the AMRMClient, so that 
somebody who looks at those tests can see the new API. But it can wait.

Current patch looks good to me. Thanks for the work, [~asuresh] and [~pgaref]!

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch, YARN-7448-YARN-6592.007.patch, 
> YARN-7448-YARN-6592.008.patch, YARN-7448-YARN-6592.009.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7274) Ability to disable elasticity at leaf queue level

2017-11-17 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257228#comment-16257228
 ] 

Jason Lowe commented on YARN-7274:
--

bq. Basically we want to make sure queue.absolute_capacity <= 
queue.absolute_max_capacity and relax check of queue.capacity <= 
queue.maximum-capacity.

I haven't had time to fully digest this, but on the surface it seems OK to me.  
Even without the disabled elasticity request, the cap <= max-cap check can be 
problematic.  For example, a parent queue could have a small capacity and a 
very large max-capacity.  A child queue may want to fully use the parent's 
normal capacity but not all of its max capacity.  Therefore the leaf queue 
would want to set queue capacity to 100% but max capacity to something less 
than 100%.  Checking capacity <= max-capacity is wrong here.  The only thing we 
should care about is absolute capacity <= absolute max capacity; otherwise the 
absolute max capacity is no longer an absolute maximum.
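
A minimal sketch of that relaxed validation, using illustrative names rather 
than the real CapacitySchedulerConfiguration API: relative capacities are 
folded into absolute (cluster-wide) fractions down the queue path, and only 
the absolute values are compared.

{code:java}
// Illustrative sketch only: a queue is rejected only when its absolute
// capacity exceeds its absolute maximum capacity; the relative percentages
// are never compared directly.
public final class QueueCapacityCheckSketch {

  static float toAbsolute(float parentAbsolute, float relativePercent) {
    return parentAbsolute * (relativePercent / 100f);
  }

  static void validate(String queue, float parentAbsCap, float parentAbsMaxCap,
      float capacityPercent, float maxCapacityPercent) {
    float absCap = toAbsolute(parentAbsCap, capacityPercent);
    float absMaxCap = toAbsolute(parentAbsMaxCap, maxCapacityPercent);
    // capacityPercent may legitimately exceed maxCapacityPercent (e.g.
    // capacity=100% of a small parent, max-capacity well below 100%), so the
    // relative values are not compared.
    if (absCap > absMaxCap) {
      throw new IllegalArgumentException("Queue " + queue
          + ": absolute capacity " + absCap
          + " exceeds absolute maximum capacity " + absMaxCap);
    }
  }

  public static void main(String[] args) {
    // Parent holds 10% of the cluster normally but may grow to 80%.
    // The leaf asks for 100% of the parent's capacity while capping its own
    // elasticity at 20% of the parent's maximum: 0.10 <= 0.16, so this is
    // accepted even though 100 > 20 at the relative level.
    validate("root.parent.leaf", 0.10f, 0.80f, 100f, 20f);
  }
}
{code}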


> Ability to disable elasticity at leaf queue level
> -
>
> Key: YARN-7274
> URL: https://issues.apache.org/jira/browse/YARN-7274
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Scott Brokaw
>Assignee: Zian Chen
>
> The 
> [documentation|https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html]
>  defines yarn.scheduler.capacity.<queue-path>.maximum-capacity as "Maximum 
> queue capacity in percentage (%) as a float. This limits the elasticity for 
> applications in the queue. Defaults to -1 which disables it."
> However, setting this value to -1 sets maximum capacity to 100% but I thought 
> (perhaps incorrectly) that the intention of the -1 setting is that it would 
> disable elasticity.  This is confirmed looking at the code:
> {code:java}
> public static final float MAXIMUM_CAPACITY_VALUE = 100;
> public static final float DEFAULT_MAXIMUM_CAPACITY_VALUE = -1.0f;
> ..
> maxCapacity = (maxCapacity == DEFAULT_MAXIMUM_CAPACITY_VALUE) ? 
> MAXIMUM_CAPACITY_VALUE : maxCapacity;
> {code}
> The sum of yarn.scheduler.capacity.<queue-path>.capacity for all queues, at 
> each level, must be equal to 100, but for 
> yarn.scheduler.capacity.<queue-path>.maximum-capacity this value is actually 
> a percentage of the entire cluster, not just the parent queue.  Yet it cannot 
> be set lower than the leaf queue's capacity setting. This seems to make it 
> impossible to disable elasticity at a leaf queue level.
> This improvement is proposing that YARN have the ability to have elasticity 
> disabled at a leaf queue level even if a parent queue permits elasticity by 
> having a yarn.scheduler.capacity.<queue-path>.maximum-capacity greater than 
> its yarn.scheduler.capacity.<queue-path>.capacity



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7218) ApiServer REST API naming convention /ws/v1 is already used in Hadoop v2

2017-11-17 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257208#comment-16257208
 ] 

Billie Rinaldi commented on YARN-7218:
--

I have convinced myself that the unit test and findbugs issues are not related 
to this patch, so I am +1 for patch 004.

> ApiServer REST API naming convention /ws/v1 is already used in Hadoop v2
> 
>
> Key: YARN-7218
> URL: https://issues.apache.org/jira/browse/YARN-7218
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, applications
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: YARN-7218.001.patch, YARN-7218.002.patch, 
> YARN-7218.003.patch, YARN-7218.004.patch
>
>
> In YARN-6626, there is a desire to have the ability to run the ApiServer REST 
> API in the Resource Manager; this can eliminate the requirement to deploy 
> another daemon service for submitting docker applications.  In YARN-5698, a 
> new UI has been implemented as a separate web application.  There are some 
> problems in this arrangement that can cause conflicts in how Java sessions 
> are managed.  The root context of the Resource Manager web application is 
> /ws.  This is hard-coded in the startWebapp method in ResourceManager.java.  
> This means all session management is applied to web URLs under the /ws 
> prefix.  /ui2 is independent of the /ws context, therefore session management 
> code doesn't apply to /ui2.  This could become a session management problem 
> if servlet-based code is going to be introduced into the /ui2 web application.
> The ApiServer code base is designed as a separate web application.  There is 
> no easy way to inject a separate web application into the same /ws context 
> because the ResourceManager is already set up to bind to RMWebServices.  
> Unless the ApiServer code is moved into RMWebServices, they will not share 
> the same session management.
> The alternative solution is to keep the ApiServer prefix URL independent of 
> the /ws context.  However, this will be a departure from the YARN web 
> services naming convention.  It can be loaded as a separate web application 
> in the Resource Manager jetty server.  One possible proposal is 
> /app/v1/services.  This keeps the ApiServer code modular and independent of 
> the Resource Manager.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-17 Thread Panagiotis Garefalakis (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257195#comment-16257195
 ] 

Panagiotis Garefalakis edited comment on YARN-7448 at 11/17/17 4:49 PM:


[~kkaranasos] [~asuresh] thanks for the comments. I am attaching the current 
version of the patch, based on v007, where the schedulingRequests method is the 
only setter for the schedulingRequests list.



was (Author: pgaref):
[~kkaranasos] [~asuresh] thanks for the comments. I am attaching the current 
version of the patch, based on v7, where the schedulingRequests method is the 
only setter for the schedulingRequests list.


> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch, YARN-7448-YARN-6592.007.patch, 
> YARN-7448-YARN-6592.008.patch, YARN-7448-YARN-6592.009.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6750) Add a configuration to cap how much a NM can be overallocated

2017-11-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6750:
-
Attachment: YARN-6750-YARN-1011.00.patch

> Add a configuration to cap how much a NM can be overallocated
> -
>
> Key: YARN-6750
> URL: https://issues.apache.org/jira/browse/YARN-6750
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6750-YARN-1011.00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-17 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7448:
-
Attachment: YARN-7448-YARN-6592.009.patch

[~kkaranasos] [~asuresh] thanks for the comments. I am attaching the current 
version of the patch, based on v7, where the schedulingRequests method is the 
only setter for the schedulingRequests list.


> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch, YARN-7448-YARN-6592.007.patch, 
> YARN-7448-YARN-6592.008.patch, YARN-7448-YARN-6592.009.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257190#comment-16257190
 ] 

Hadoop QA commented on YARN-7448:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-6592 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
36s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
27s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} YARN-6592 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
4s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
YARN-6592 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} YARN-6592 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m  
5s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 9 new + 31 unchanged - 0 fixed = 40 total (was 31) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 26s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
46s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-7448 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898224/YARN-7448-YARN-6592.008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux a926d531a664 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-7515) HA related tests fails in trunk

2017-11-17 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257182#comment-16257182
 ] 

Jason Lowe commented on YARN-7515:
--

There aren't enough logs from the precommit build to prove this, but I believe 
this could be a duplicate of YARN-6647.  None of these tests failed in the 
usual sense; they were marked as timed out, but I think that is because they 
called System.exit, which was misinterpreted as a surefire timeout.  
From 
https://builds.apache.org/job/PreCommit-YARN-Build/18498/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt:
{noformat}

[...]
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-yarn-server-resourcemanager: ExecutionException: 
java.lang.RuntimeException: The forked VM terminated without properly saying 
goodbye. VM crash or System.exit called?
{noformat}

YARN-6647 describes how ZKRMStateStore can cause the async dispatcher to call 
System.exit because it receives an unhandled exception, due to a badly managed 
InterruptedException received while the RM is trying to shut down.
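
A minimal sketch of one defensive pattern for that case, with invented class 
names (this is not the actual ZKRMStateStore code): an interrupt that arrives 
while the store is stopping is absorbed instead of being surfaced to the 
dispatcher as a fatal error, which is what would otherwise lead to System.exit 
and the apparent surefire timeout.

{code:java}
// Illustrative only: swallow shutdown-time interrupts, keep everything else
// fatal.
import java.util.concurrent.Callable;

class ShutdownAwareStoreOpSketch {

  private volatile boolean stopping = false;

  void stop() {
    stopping = true;
  }

  void runStoreOp(Callable<Void> zkOperation) throws Exception {
    try {
      zkOperation.call();                 // stand-in for the blocking ZK call
    } catch (InterruptedException ie) {
      if (stopping) {
        // Interrupt was triggered by shutdown: restore the flag and return
        // quietly instead of surfacing a fatal error to the dispatcher.
        Thread.currentThread().interrupt();
        return;
      }
      throw ie;                           // unexpected interrupt: still fatal
    }
  }
}
{code}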


> HA related tests fails in trunk
> ---
>
> Key: YARN-7515
> URL: https://issues.apache.org/jira/browse/YARN-7515
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-beta1
>Reporter: Bibin A Chundatt
>
>   https://builds.apache.org/job/PreCommit-YARN-Build/18498/testReport/
> {noformat}
> org.apache.hadoop.yarn.server.resourcemanager.TestRMStoreCommands
>   
> org.apache.hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA
>   
> org.apache.hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
>   
> org.apache.hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-17 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257150#comment-16257150
 ] 

Arun Suresh edited comment on YARN-7448 at 11/17/17 3:54 PM:
-

[~kkaranasos]

bq. setSchedulingRequests seems to do nothing at the moment
bq. all other methods of this class are abstract; builders are then used to 
call the getters and setters. Is there a reason we chose to do it differently 
for the get/setSchedulingRequests?
This is intentional. New methods should not be abstract because AllocateRequest 
is a public API marked Public+Stable, so clients must be able to keep using the 
older version of the interface in their code without hitting exceptions. If the 
methods were abstract (without a default implementation), clients would have to 
modify their existing libraries.

bq. I think the equals and the hashcode should be implemented in the 
SchedulingRequest and not the SchedulingRequestPBImpl
I've been part of discussions about exactly this before in other JIRAs. In most 
cases, using the protobuf equals and hashCode is actually better - and more 
efficient. For objects such as PlacementConstraint etc., which are genuinely 
polymorphic with true inheritance and subclasses, we can push equals and 
hashCode up to avoid code duplication.

bq. Let's add a simple test that creates an allocate request with 
SchedulingRequests instead of ResourceRequests.
[This|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java#L434-L436]
 test is already doing that. It instantiates all sub-fields and verifies if 
they can individually be serialized or not.

To be honest, v007 of the patch is good (once the schedulingRequestList / 
schedulingRequests duplication is removed from the builder).
Can you revert the remaining changes from v008?



was (Author: asuresh):
[~kkaranasos]

bq. setSchedulingRequests seems to do nothing at the moment
bq. all other methods of this class are abstract; builders are then used to 
call the getters and setters. Is there a reason we chose to do it differently 
for the get/setSchedulingRequests?
This is intentional. New methods should not be abstract because AllocateRequest 
is a public API marked Public+Stable, so clients must be able to keep using the 
older version of the interface in their code without hitting exceptions. If the 
methods were abstract (without a default implementation), clients would have to 
modify their existing libraries.

bq. I think the equals and the hashcode should be implemented in the 
SchedulingRequest and not the SchedulingRequestPBImpl
I think we've had these discussions before in other JIRAs. In most cases, using 
the protobuf equals and hashCode is actually better - and more efficient. For 
objects such as PlacementConstraint etc., which are genuinely polymorphic with 
true inheritance and subclasses, we can push equals and hashCode up to avoid 
code duplication.

bq. Let's add a simple test that creates an allocate request with 
SchedulingRequests instead of ResourceRequests.
[This|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java#L434-L436]
 test is already doing that. It instantiates all sub-fields and verifies if 
they can individually be serialized or not.

To be honest, v007 of the patch is good (once the schedulingRequestList / 
schedulingRequests duplication is removed from the builder).
Can you revert the remaining changes from v008?
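
For illustration, a small sketch of the compatibility pattern described above, 
using placeholder names rather than the real AllocateRequest and 
SchedulingRequest classes: newly added accessors get concrete default bodies, 
so subclasses built against the older API keep compiling, and only the PB 
implementation overrides them.

{code:java}
// Illustrative sketch of a Public/Stable abstract record class.
import java.util.Collections;
import java.util.List;

public abstract class AllocateRequestSketch {

  /** Placeholder for org.apache.hadoop.yarn.api.records.SchedulingRequest. */
  public static final class SchedulingRequest { }

  // Pre-existing contract: implementations have always had to provide these.
  public abstract int getResponseId();

  public abstract void setResponseId(int id);

  // Newly added accessors: NOT abstract, so existing subclasses are unaffected.
  public List<SchedulingRequest> getSchedulingRequests() {
    return Collections.emptyList();
  }

  public void setSchedulingRequests(List<SchedulingRequest> requests) {
    // intentional no-op default; the PBImpl overrides this with real storage
  }
}
{code}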


> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch, YARN-7448-YARN-6592.007.patch, 
> YARN-7448-YARN-6592.008.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-17 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257150#comment-16257150
 ] 

Arun Suresh commented on YARN-7448:
---

[~kkaranasos]

bq. setSchedulingRequests seems to do nothing at the moment
bq. all other methods of this class are abstract; builders are then used to 
call the getters and setters. Is there a reason we chose to do it differently 
for the get/setSchedulingRequests?
This is intentional. New methods should not be abstract because AllocateRequest 
is a public API marked Public+Stable, so clients must be able to keep using the 
older version of the interface in their code without hitting exceptions. If the 
methods were abstract (without a default implementation), clients would have to 
modify their existing libraries.

bq. I think the equals and the hashcode should be implemented in the 
SchedulingRequest and not the SchedulingRequestPBImpl
I think we've had these discussions before in other JIRAs. In most cases, using 
the protobuf equals and hashCode is actually better - and more efficient. For 
objects such as PlacementConstraint etc., which are genuinely polymorphic with 
true inheritance and subclasses, we can push equals and hashCode up to avoid 
code duplication.

bq. Let's add a simple test that creates an allocate request with 
SchedulingRequests instead of ResourceRequests.
[This|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/api/TestPBImplRecords.java#L434-L436]
 test is already doing that. It instantiates all sub-fields and verifies if 
they can individually be serialized or not.

To be honest, v007 of the patch is good (once the schedulingRequestList / 
schedulingRequests duplication is removed from the builder).
Can you revert the remaining changes from v008?
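
A small self-contained sketch of the PBImpl-style delegation referred to above; 
GeneratedMessageSketch stands in for the protobuf-generated message (e.g. a 
SchedulingRequestProto), which already provides structural equality, so the 
wrapper simply delegates to it. Names are illustrative, not the actual YARN 
classes.

{code:java}
import java.util.Objects;

public class SchedulingRequestPBImplSketch {

  /** Stand-in for the generated protobuf message backing the record. */
  public static final class GeneratedMessageSketch {
    private final String payload;

    public GeneratedMessageSketch(String payload) {
      this.payload = payload;
    }

    @Override
    public boolean equals(Object o) {
      return o instanceof GeneratedMessageSketch
          && Objects.equals(payload, ((GeneratedMessageSketch) o).payload);
    }

    @Override
    public int hashCode() {
      return Objects.hashCode(payload);
    }
  }

  private final GeneratedMessageSketch proto;

  public SchedulingRequestPBImplSketch(GeneratedMessageSketch proto) {
    this.proto = proto;
  }

  @Override
  public boolean equals(Object other) {
    if (other == null || getClass() != other.getClass()) {
      return false;
    }
    // Structural equality comes from the underlying protobuf message.
    return proto.equals(((SchedulingRequestPBImplSketch) other).proto);
  }

  @Override
  public int hashCode() {
    return proto.hashCode();
  }
}
{code}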


> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch, YARN-7448-YARN-6592.007.patch, 
> YARN-7448-YARN-6592.008.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7213) [Umbrella] Test and validate HBase-2.0.x with Atsv2

2017-11-17 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257127#comment-16257127
 ] 

Zheng Hu edited comment on YARN-7213 at 11/17/17 3:41 PM:
--

[~haibo.chen],  I wrote a UT based on your discussion with the HBase folks.  It 
actually fails on both 2.0.0-beta-1 and branch-1.1 (the Filter logic in 
branch-1.1 has not changed), and logically it should fail.  

For the testDirectScan UT you provided, what is the data in the table?  I want 
to test it independently of the YARN test environment, just in an HBase 
MiniCluster. 

{code}
  @Test
  public void testFilterListWithSingleColumnValueFilterAndFamilyFilter() throws 
IOException {
TableName tn = TableName.valueOf(name.getMethodName());
try (Table table = TEST_UTIL.createTable(tn, new String[] { "cf1", "cf2" 
})) {
  String[][] cells = new String[][] {
  new String[] { "row1", "cf1", "cq1", "value1" },
  new String[] { "row1", "cf1", "cq2", "value1" },
  new String[] { "row1", "cf2", "cq1", "value2" },
  };
  for (int i = 0; i < cells.length; i++) {
    Put put = new Put(Bytes.toBytes(cells[i][0])).addColumn(
        Bytes.toBytes(cells[i][1]), Bytes.toBytes(cells[i][2]),
        Bytes.toBytes(cells[i][3]));
table.put(put);
  }

  SingleColumnValueFilter svf1 =
      new SingleColumnValueFilter(Bytes.toBytes("cf1"), Bytes.toBytes("cq2"),
          CompareOperator.EQUAL, new BinaryComparator(Bytes.toBytes("value1")));

  FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
  filterList.addFilter(svf1);
  filterList.addFilter(new FamilyFilter(CompareOperator.EQUAL,
      new BinaryComparator("cf2".getBytes())));

  Scan scan = new Scan().setFilter(filterList); 

  int i = 0;
  for (Result result : table.getScanner(scan)) {
i++;
  }
  Assert.assertEquals("first scan", 1, i);

  svf1.setFilterIfMissing(true);
  i = 0;
  for (Result result : table.getScanner(scan)) {
i++;
  }
  Assert.assertEquals("second scan", 1, i);
}
  }
{code}


was (Author: openinx):
[~haibo.chen],  I wrote a UT based on your discussion with the HBase folks.  It 
actually fails on both 2.0.0-beta-1 and branch-1.1 (the Filter logic in 
branch-1.1 has not changed), and logically it should fail.  

For the testDirectScan UT you provided, what is the data in the table?  I want 
to test it independently of the YARN test environment, just in an HBase 
MiniCluster. 

{code}
  @Test
  public void testFilterListWithSingleColumnValueFilterAndFamilyFilter() throws 
IOException {
TableName tn = TableName.valueOf(name.getMethodName());
try (Table table = TEST_UTIL.createTable(tn, new String[] { "cf1", "cf2" 
})) {
  String[][] cells = new String[][] {
  new String[] { "row1", "cf1", "cq1", "value1" },
  new String[] { "row1", "cf1", "cq2", "value1" },
  new String[] { "row1", "cf2", "cq1", "value2" },
  };
  for (int i = 0; i < cells.length; i++) {
    Put put = new Put(Bytes.toBytes(cells[i][0])).addColumn(
        Bytes.toBytes(cells[i][1]), Bytes.toBytes(cells[i][2]),
        Bytes.toBytes(cells[i][3]));
table.put(put);
  }

  SingleColumnValueFilter svf1 =
      new SingleColumnValueFilter(Bytes.toBytes("cf1"), Bytes.toBytes("cq2"),
          CompareOperator.EQUAL, new BinaryComparator(Bytes.toBytes("value1")));

  FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
  filterList.addFilter(svf1);
  filterList.addFilter(new FamilyFilter(CompareOperator.EQUAL,
      new BinaryComparator("cf2".getBytes())));

  Scan scan = new Scan().setFilter(filterList); // copy the scan

  int i = 0;
  for (Result result : table.getScanner(scan)) {
i++;
  }
  Assert.assertEquals("first scan", 1, i);

  svf1.setFilterIfMissing(true);
  i = 0;
  for (Result result : table.getScanner(scan)) {
i++;
  }
  Assert.assertEquals("second scan", 1, i);
}
  }
{code}

> [Umbrella] Test and validate HBase-2.0.x with Atsv2
> ---
>
> Key: YARN-7213
> URL: https://issues.apache.org/jira/browse/YARN-7213
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7213.prelim.patch, YARN-7213.wip.patch
>
>
> HBase-2.0.x officially supports hadoop-alpha compilations. They are also 
> getting ready for the Hadoop beta release so that HBase can release versions 
> compatible with Hadoop-beta. So, this JIRA is to keep track of HBase-2.0 
> integration issues. 



--

[jira] [Commented] (YARN-7213) [Umbrella] Test and validate HBase-2.0.x with Atsv2

2017-11-17 Thread Zheng Hu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257127#comment-16257127
 ] 

Zheng Hu commented on YARN-7213:


[~haibo.chen],  I wrote a UT based on your discussion with the HBase folks.  It 
actually fails on both 2.0.0-beta-1 and branch-1.1 (the Filter logic in 
branch-1.1 has not changed), and logically it should fail.  

For the testDirectScan UT you provided, what is the data in the table?  I want 
to test it independently of the YARN test environment, just in an HBase 
MiniCluster. 

{code}
  @Test
  public void testFilterListWithSingleColumnValueFilterAndFamilyFilter() throws 
IOException {
TableName tn = TableName.valueOf(name.getMethodName());
try (Table table = TEST_UTIL.createTable(tn, new String[] { "cf1", "cf2" 
})) {
  String[][] cells = new String[][] {
  new String[] { "row1", "cf1", "cq1", "value1" },
  new String[] { "row1", "cf1", "cq2", "value1" },
  new String[] { "row1", "cf2", "cq1", "value2" },
  };
  for (int i = 0; i < cells.length; i++) {
    Put put = new Put(Bytes.toBytes(cells[i][0])).addColumn(
        Bytes.toBytes(cells[i][1]), Bytes.toBytes(cells[i][2]),
        Bytes.toBytes(cells[i][3]));
table.put(put);
  }

  SingleColumnValueFilter svf1 =
      new SingleColumnValueFilter(Bytes.toBytes("cf1"), Bytes.toBytes("cq2"),
          CompareOperator.EQUAL, new BinaryComparator(Bytes.toBytes("value1")));

  FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
  filterList.addFilter(svf1);
  filterList.addFilter(new FamilyFilter(CompareOperator.EQUAL,
      new BinaryComparator("cf2".getBytes())));

  Scan scan = new Scan().setFilter(filterList); // copy the scan

  int i = 0;
  for (Result result : table.getScanner(scan)) {
i++;
  }
  Assert.assertEquals("first scan", 1, i);

  svf1.setFilterIfMissing(true);
  i = 0;
  for (Result result : table.getScanner(scan)) {
i++;
  }
  Assert.assertEquals("second scan", 1, i);
}
  }
{code}

> [Umbrella] Test and validate HBase-2.0.x with Atsv2
> ---
>
> Key: YARN-7213
> URL: https://issues.apache.org/jira/browse/YARN-7213
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: YARN-7213.prelim.patch, YARN-7213.wip.patch
>
>
> HBase-2.0.x officially supports hadoop-alpha compilations. They are also 
> getting ready for the Hadoop beta release so that HBase can release versions 
> compatible with Hadoop-beta. So, this JIRA is to keep track of HBase-2.0 
> integration issues. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7448) [API] Add SchedulingRequest to the AllocateRequest

2017-11-17 Thread Panagiotis Garefalakis (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated YARN-7448:
-
Attachment: YARN-7448-YARN-6592.008.patch

> [API] Add SchedulingRequest to the AllocateRequest
> --
>
> Key: YARN-7448
> URL: https://issues.apache.org/jira/browse/YARN-7448
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Panagiotis Garefalakis
> Attachments: YARN-7448-YARN-6592.001.patch, 
> YARN-7448-YARN-6592.002.patch, YARN-7448-YARN-6592.003.patch, 
> YARN-7448-YARN-6592.004.patch, YARN-7448-YARN-6592.005.patch, 
> YARN-7448-YARN-6592.006.patch, YARN-7448-YARN-6592.007.patch, 
> YARN-7448-YARN-6592.008.patch
>
>
> YARN-6594 introduces the {{SchedulingRequest}}. This JIRA tracks the 
> inclusion of the SchedulingRequest into the AllocateRequest.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1015) FS should watch node resource utilization and allocate opportunistic containers if appropriate

2017-11-17 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257112#comment-16257112
 ] 

Haibo Chen commented on YARN-1015:
--

Thanks [~asuresh] and [~miklos.szeg...@cloudera.com] for the reviews! Will 
commit it to branch YARN-1011 shortly.

> FS should watch node resource utilization and allocate opportunistic 
> containers if appropriate
> --
>
> Key: YARN-1015
> URL: https://issues.apache.org/jira/browse/YARN-1015
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun C Murthy
>Assignee: Haibo Chen
> Attachments: YARN-1015-YARN-1011.00.patch, 
> YARN-1015-YARN-1011.01.patch, YARN-1015-YARN-1011.02.patch, 
> YARN-1015-YARN-1011.prelim.patch
>
>
> FS should look at resource utilization of nodes (provided by NM in 
> heartbeat) and allocate opportunistic containers if the resource utilization 
> of the node is below its allocation threshold.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7510) Merge work for YARN-5881

2017-11-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7510:
--
Attachment: YARN-7510.002.patch

Branch is rebased; running jenkins again on the combined branch patch.

> Merge work for YARN-5881
> 
>
> Key: YARN-7510
> URL: https://issues.apache.org/jira/browse/YARN-7510
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7510.001.patch, YARN-7510.002.patch
>
>
> Merge YARN-5881 work



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7469) Capacity Scheduler Intra-queue preemption: User can starve if newest app is exactly at user limit

2017-11-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257011#comment-16257011
 ] 

Sunil G commented on YARN-7469:
---

Pushed to 2.9 as well.

> Capacity Scheduler Intra-queue preemption: User can starve if newest app is 
> exactly at user limit
> -
>
> Key: YARN-7469
> URL: https://issues.apache.org/jira/browse/YARN-7469
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 2.8.3, 3.0.0, 3.1.0, 2.10.0, 2.9.1
>
> Attachments: UnitTestToShowStarvedUser.patch, YARN-7469.001.patch
>
>
> Queue Configuration:
> - Total Memory: 20GB
> - 2 Queues
> -- Queue1
> --- Memory: 10GB
> --- MULP: 10%
> --- ULF: 2.0
> - Minimum Container Size: 0.5GB
> Use Case:
> - User1 submits app1 to Queue1 and consumes 20GB
> - User2 submits app2 to Queue1 and requests 7.5GB
> - Preemption monitor preempts 7.5GB from app1. Capacity Scheduler gives those 
> resources to User2
> - User 3 submits app3 to Queue1. To begin with, app3 is requesting 1 
> container for the AM.
> - Preemption monitor never preempts a container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7469) Capacity Scheduler Intra-queue preemption: User can starve if newest app is exactly at user limit

2017-11-17 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7469:
--
Fix Version/s: 2.9.1

> Capacity Scheduler Intra-queue preemption: User can starve if newest app is 
> exactly at user limit
> -
>
> Key: YARN-7469
> URL: https://issues.apache.org/jira/browse/YARN-7469
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, yarn
>Affects Versions: 2.9.0, 3.0.0-beta1, 2.8.2
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 2.8.3, 3.0.0, 3.1.0, 2.10.0, 2.9.1
>
> Attachments: UnitTestToShowStarvedUser.patch, YARN-7469.001.patch
>
>
> Queue Configuration:
> - Total Memory: 20GB
> - 2 Queues
> -- Queue1
> --- Memory: 10GB
> --- MULP: 10%
> --- ULF: 2.0
> - Minimum Container Size: 0.5GB
> Use Case:
> - User1 submits app1 to Queue1 and consumes 20GB
> - User2 submits app2 to Queue1 and requests 7.5GB
> - Preemption monitor preempts 7.5GB from app1. Capacity Scheduler gives those 
> resources to User2
> - User 3 submits app3 to Queue1. To begin with, app3 is requesting 1 
> container for the AM.
> - Preemption monitor never preempts a container.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5150) [YARN-3368] Add a sunburst chart view for queues/applications resource usage to new YARN UI

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256872#comment-16256872
 ] 

Hadoop QA commented on YARN-5150:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
26m 11s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m  2s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | YARN-5150 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898187/YARN-5150.002.patch |
| Optional Tests |  asflicense  shadedclient  |
| uname | Linux fcfbe59f7826 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5f0b238 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 336 (vs. ulimit of 5000) |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/18553/console |
| Powered by | Apache Yetus 0.7.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Add a sunburst chart view for queues/applications resource usage 
> to new YARN UI
> ---
>
> Key: YARN-5150
> URL: https://issues.apache.org/jira/browse/YARN-5150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
> Attachments: Screen Shot 2017-11-17 at 12.05.28.png, 
> YARN-5150.001.patch, YARN-5150.002.patch
>
>
> An example of sunburst chart: https://bl.ocks.org/kerryrodden/7090426.
> If we can introduce it to YARN UI, admins can easily get an understanding of 
> relative resource usages and configured capacities for queues/applications.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6523) Newly retrieved security Tokens are sent as part of each heartbeat to each node from RM which is not desirable in large cluster

2017-11-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16256843#comment-16256843
 ] 

Hadoop QA commented on YARN-6523:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
1s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 36s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 9 new + 
224 unchanged - 0 fixed = 233 total (was 224) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
56s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 24m 17s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m  3s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}125m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.TestLocalDirsHandlerService |
|   | hadoop.yarn.server.nodemanager.TestLinuxContainerExecutorWithMocks |
|   | hadoop.yarn.server.nodemanager.TestNodeManagerReboot |
|   | 
hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainersMonitor |
|   | 

[jira] [Updated] (YARN-5150) [YARN-3368] Add a sunburst chart view for queues/applications resource usage to new YARN UI

2017-11-17 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/YARN-5150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gergely Novák updated YARN-5150:

Attachment: Screen Shot 2017-11-17 at 12.05.28.png

> [YARN-3368] Add a sunburst chart view for queues/applications resource usage 
> to new YARN UI
> ---
>
> Key: YARN-5150
> URL: https://issues.apache.org/jira/browse/YARN-5150
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Gergely Novák
> Attachments: Screen Shot 2017-11-17 at 12.05.28.png, 
> YARN-5150.001.patch, YARN-5150.002.patch
>
>
> An example of sunburst chart: https://bl.ocks.org/kerryrodden/7090426.
> If we can introduce it to YARN UI, admins can easily get an understanding of 
> relative resource usages and configured capacities for queues/applications.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


