[jira] [Commented] (YARN-7159) Normalize unit of resource objects in RM and avoid to do unit conversion in critical path

2017-10-12 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203070#comment-16203070
 ] 

Manikandan R commented on YARN-7159:


Fixed the checkstyle and findbugs warnings and attached an updated patch.

> Normalize unit of resource objects in RM and avoid to do unit conversion in 
> critical path
> -
>
> Key: YARN-7159
> URL: https://issues.apache.org/jira/browse/YARN-7159
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7159.001.patch, YARN-7159.002.patch
>
>
> Currently, resource unit conversion can happen in the critical code path when 
> a client specifies a different unit. This can significantly impact the 
> performance and throughput of the RM. We should normalize units when a 
> resource is passed to the RM and avoid the expensive conversion every time.
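
A minimal sketch of the proposed idea: convert each resource to its declared standard unit once, when it enters the RM, so the scheduler's hot path can compare raw values. The helper below is illustrative (it assumes the stock {{ResourceUtils}} and {{UnitsConversionUtil}} utilities), not the actual patch:

{code}
// Hedged sketch: normalize every ResourceInformation entry to the unit
// registered for its resource type. Illustrative only.
public static void normalizeUnits(Resource resource) {
  for (ResourceInformation ri : resource.getResources()) {
    String standardUnit =
        ResourceUtils.getResourceTypes().get(ri.getName()).getUnits();
    if (!standardUnit.equals(ri.getUnits())) {
      ri.setValue(UnitsConversionUtil.convert(
          ri.getUnits(), standardUnit, ri.getValue()));
      ri.setUnits(standardUnit);
    }
  }
}
{code}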



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7159) Normalize unit of resource objects in RM and avoid to do unit conversion in critical path

2017-10-12 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-7159:
---
Attachment: YARN-7159.002.patch

> Normalize unit of resource objects in RM and avoid to do unit conversion in 
> critical path
> -
>
> Key: YARN-7159
> URL: https://issues.apache.org/jira/browse/YARN-7159
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7159.001.patch, YARN-7159.002.patch
>
>
> Currently, resource unit conversion can happen in the critical code path when 
> a client specifies a different unit. This can significantly impact the 
> performance and throughput of the RM. We should normalize units when a 
> resource is passed to the RM and avoid the expensive conversion every time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7323) Some changes in service REST API

2017-10-12 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7323:
--
Attachment: YARN-7323.yarn-native-services.01.patch

> Some changes in service REST API
> 
>
> Key: YARN-7323
> URL: https://issues.apache.org/jira/browse/YARN-7323
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Jian He
> Attachments: YARN-7323.yarn-native-services.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4943) Add support to collect actual resource usage from cgroups

2017-10-12 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16203066#comment-16203066
 ] 

Varun Vasudev commented on YARN-4943:
-

[~miklos.szeg...@cloudera.com] - please feel free to take it over. Thanks!

> Add support to collect actual resource usage from cgroups
> -
>
> Key: YARN-4943
> URL: https://issues.apache.org/jira/browse/YARN-4943
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>
> We should add support to collect actual resource usage from cgroups (if 
> they're enabled) - it's more accurate and can give more detailed 
> information.
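
For context, a minimal sketch of what reading usage from cgroups could look like; the cgroups v1 path layout and the class/method names below are assumptions for illustration, not the NodeManager's actual implementation:

{code}
// Hedged sketch: read a container's current memory usage from the
// cgroups v1 filesystem. The hierarchy root is an assumption; a real
// implementation would resolve it from the NM's cgroups configuration.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public final class CgroupsUsageReader {
  private static final String CGROUP_ROOT = "/sys/fs/cgroup/memory/hadoop-yarn";

  public static long readMemoryUsageBytes(String containerId) throws IOException {
    // memory.usage_in_bytes holds the current usage as a single decimal number.
    return Long.parseLong(Files.readAllLines(
        Paths.get(CGROUP_ROOT, containerId, "memory.usage_in_bytes")).get(0).trim());
  }
}
{code}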



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-4943) Add support to collect actual resource usage from cgroups

2017-10-12 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev reassigned YARN-4943:
---

Assignee: (was: Varun Vasudev)

> Add support to collect actual resource usage from cgroups
> -
>
> Key: YARN-4943
> URL: https://issues.apache.org/jira/browse/YARN-4943
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Varun Vasudev
>
> We should add support to collect actual resource usage from cgroups (if 
> they're enabled) - it's more accurate and can give more detailed 
> information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7270) Resource#getVirtualCores() does unsafe casting from long to int.

2017-10-12 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7270:
---
Attachment: YARN-7270.004.patch

Uploaded v4 to fix the style issue.

> Resource#getVirtualCores() does unsafe casting from long to int.
> 
>
> Key: YARN-7270
> URL: https://issues.apache.org/jira/browse/YARN-7270
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7270.001.patch, YARN-7270.002.patch, 
> YARN-7270.003.patch, YARN-7270.004.patch
>
>
> Class {{Resource}} has three subclasses (FixedValueResource, 
> LightWeightResource, and ResourcePBImpl). Only FixedValueResource handles 
> long-to-int casting safely; the other two don't. This bug was introduced by 
> the resource type feature and causes several unit test failures. For example:
> {code}
> Error Message
> expected:<> but was:<>
> Stacktrace
> java.lang.AssertionError: expected:<> but 
> was:<>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt.testHeadroomWithBlackListedNodes(TestFSAppAttempt.java:325)
> {code}
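
The usual guard is to clamp the value before casting rather than truncate; a minimal sketch of that pattern (the helper name is illustrative, not necessarily what the patch adds):

{code}
// Hedged sketch: clamp a long resource value into the int range instead
// of silently truncating, mirroring what FixedValueResource already does.
private static int castToIntSafely(long value) {
  if (value > Integer.MAX_VALUE) {
    return Integer.MAX_VALUE;
  }
  return (int) value;
}
{code}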



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7270) Resource#getVirtualCores() does unsafe casting from long to int.

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202937#comment-16202937
 ] 

Hadoop QA commented on YARN-7270:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  3m 
46s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  5m 55s{color} 
| {color:red} hadoop-yarn-project_hadoop-yarn generated 1 new + 79 unchanged - 
0 fixed = 80 total (was 79) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 56s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 1 new + 12 unchanged - 0 fixed = 13 total (was 12) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
5s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-7270 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891861/YARN-7270.003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 14986bd52084 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e46d5bb |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| javac | 

[jira] [Commented] (YARN-7311) TestRMWebServicesReservation doesn't really test fair scheduler

2017-10-12 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202936#comment-16202936
 ] 

Subru Krishnan commented on YARN-7311:
--

[~yufeigu], thanks for working on this. The patch looks reasonable, but there 
still seem to be quite a few failed unit tests (per Yetus)?

Can you try creating a reservation and submitting a sample MR job to it with FS 
on a single node?

cc-ing [~seanpo03] as he's the author of {{TestRMWebServicesReservation}}.

> TestRMWebServicesReservation doesn't really test fair scheduler
> ---
>
> Key: YARN-7311
> URL: https://issues.apache.org/jira/browse/YARN-7311
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler, reservation system
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7311.001.patch, YARN-7311.WIP.patch
>
>
> YARN-4248 introduced the REST API for submitting/updating/deleting 
> reservations. Class {{TestRMWebServicesReservation}} intends to test both FS 
> and CS, but the test cases designed for the fair scheduler actually test the 
> capacity scheduler. The following code in method {{configureServlets}} shows 
> that it sets the scheduler to CS even when the test cases are for the fair 
> scheduler.
> {code}
>   conf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
>   ResourceScheduler.class);
>   CapacitySchedulerConfiguration csconf =
>   new CapacitySchedulerConfiguration(conf);
>   String[] queues = { "default", "dedicated" };
>   csconf.setQueues("root", queues);
>   csconf.setCapacity("root.default", 50.0f);
>   csconf.setCapacity("root.dedicated", 50.0f);
>   csconf.setReservable("root.dedicated", true);
> {code}
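
For comparison, a sketch of what the fair-scheduler branch of {{configureServlets}} would need to set instead; the {{FS_ALLOC_FILE}} constant and the allocation-file contents (a reservable "dedicated" queue) are assumptions, not the actual patch:

{code}
// Hedged sketch: point the RM at the FairScheduler and an allocation
// file that declares a reservable "dedicated" queue. FS_ALLOC_FILE is a
// hypothetical path to the test's allocation XML.
conf.setClass(YarnConfiguration.RM_SCHEDULER, FairScheduler.class,
    ResourceScheduler.class);
conf.set(FairSchedulerConfiguration.ALLOCATION_FILE, FS_ALLOC_FILE);
{code}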



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7224) Support GPU isolation for docker container

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202927#comment-16202927
 ] 

Hadoop QA commented on YARN-7224:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 55s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  1s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 32 new + 379 unchanged - 11 fixed = 411 total (was 390) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  3s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
10s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 25s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
31s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 45s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| 

[jira] [Commented] (YARN-7311) TestRMWebServicesReservation doesn't really test fair scheduler

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202922#comment-16202922
 ] 

Hadoop QA commented on YARN-7311:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 37s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 19 unchanged - 1 fixed = 19 total (was 20) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 17s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 29s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 84m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt |
|   | hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore |
|   | hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesReservation |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler |
|   | hadoop.yarn.server.resourcemanager.reservation.TestReservationSystem |
|   | hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-7311 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891854/YARN-7311.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1595123d429d 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Commented] (YARN-7320) Duplicate LiteralByteStrings in SystemCredentialsForAppsProto.credentialsForApp_

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202911#comment-16202911
 ] 

Hadoop QA commented on YARN-7320:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 1 new + 27 unchanged - 1 fixed = 28 total (was 28) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
3s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-7320 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891862/YARN-7320.02.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 67bf0dbfeeff 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e46d5bb |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17904/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17904/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 

[jira] [Commented] (YARN-4943) Add support to collect actual resource usage from cgroups

2017-10-12 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202875#comment-16202875
 ] 

Miklos Szegedi commented on YARN-4943:
--

[~vvasudev], YARN-6668 is very similar to this JIRA. Would you like to work on 
this feature, or should I continue with YARN-6668/YARN-7064?

> Add support to collect actual resource usage from cgroups
> -
>
> Key: YARN-4943
> URL: https://issues.apache.org/jira/browse/YARN-4943
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>
> We should add support to collect actual resource usage from cgroups (if 
> they're enabled) - it's more accurate and can give more detailed 
> information.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7246) Fix the default docker binary path

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202867#comment-16202867
 ] 

Hadoop QA commented on YARN-7246:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
12s{color} | {color:green} branch-2.8.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} branch-2.8.2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.8.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
51s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:d0a0f24 |
| JIRA Issue | YARN-7246 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891852/YARN-7246-branch-2.8.2.007.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux 644c51af022b 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2.8.2 / e52ced3 |
| Default Java | 1.7.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17902/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17902/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix the default docker binary path
> --
>
> Key: YARN-7246
> URL: https://issues.apache.org/jira/browse/YARN-7246
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Blocker
> Attachments: YARN-7246-branch-2.8.2.001.patch, 
> YARN-7246-branch-2.8.2.002.patch, YARN-7246-branch-2.8.2.003.patch, 
> YARN-7246-branch-2.8.2.004.patch, YARN-7246-branch-2.8.2.005.patch, 
> YARN-7246-branch-2.8.2.006.patch, YARN-7246-branch-2.8.2.007.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7270) Resource#getVirtualCores() does unsafe casting from long to int.

2017-10-12 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202858#comment-16202858
 ] 

Yufei Gu commented on YARN-7270:


Uploaded patch v3 since class {{TestResource}} was missing from patch v2. 
[~wangda], [~sunilg] and [~templedf], could you review it? Several test cases 
are failing because of this bug.

> Resource#getVirtualCores() does unsafe casting from long to int.
> 
>
> Key: YARN-7270
> URL: https://issues.apache.org/jira/browse/YARN-7270
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7270.001.patch, YARN-7270.002.patch, 
> YARN-7270.003.patch
>
>
> Class {{Resource}} has three subclasses (FixedValueResource, 
> LightWeightResource, and ResourcePBImpl). Only FixedValueResource handles 
> long-to-int casting safely; the other two don't. This bug was introduced by 
> the resource type feature and causes several unit test failures. For example:
> {code}
> Error Message
> expected:<> but was:<>
> Stacktrace
> java.lang.AssertionError: expected:<> but 
> was:<>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt.testHeadroomWithBlackListedNodes(TestFSAppAttempt.java:325)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7320) Duplicate LiteralByteStrings in SystemCredentialsForAppsProto.credentialsForApp_

2017-10-12 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated YARN-7320:
-
Attachment: YARN-7320.02.patch

Addressed checkstyle comments.

> Duplicate LiteralByteStrings in 
> SystemCredentialsForAppsProto.credentialsForApp_
> 
>
> Key: YARN-7320
> URL: https://issues.apache.org/jira/browse/YARN-7320
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: YARN-7320.01.patch, YARN-7320.02.patch
>
>
> Using jxray (www.jxray.com) I've analyzed several heap dumps from a YARN 
> ResourceManager running in a big cluster. The tool uncovered several sources 
> of memory waste. One problem, which results in wasting more than a quarter of 
> all memory, is a large number of duplicate {{LiteralByteString}} objects 
> coming from the following reference chain:
> {code}
> 1,011,810K (26.9%): byte[]: 5416705 / 100% dup arrays (22108 unique)
> ↖com.google.protobuf.LiteralByteString.bytes
> ↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$.credentialsForApp_
> ↖{j.u.ArrayList}
> ↖j.u.Collections$UnmodifiableRandomAccessList.c
> ↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$NodeHeartbeatResponseProto.systemCredentialsForApps_
> ↖org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb.NodeHeartbeatResponsePBImpl.proto
> ↖org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl.latestNodeHeartBeatResponse
> ↖org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode.rmNode
> ...
> {code}
> That is, collectively the reference chains that look like the above hold 5.4 
> million {{LiteralByteString}} objects in memory, but only ~22 thousand of 
> these objects are unique. Deduplicating them, e.g. using a Google Guava 
> {{Interner}} instance, would save ~1GB of memory.
> It looks like the main place where the above {{LiteralByteString}}s are 
> created and attached to the {{SystemCredentialsForAppsProto}} objects is in 
> {{NodeHeartbeatResponsePBImpl.java}}, method 
> {{addSystemCredentialsToProto()}}. Probably adding a call to an interner 
> there will fix the problem.
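
A minimal sketch of the interning idea using Guava's {{Interners}}; the class and field names and the placement are illustrative, not the actual patch:

{code}
// Hedged sketch: deduplicate per-app credential ByteStrings through a
// weak interner so identical payloads share a single object.
import com.google.common.collect.Interner;
import com.google.common.collect.Interners;
import com.google.protobuf.ByteString;

class CredentialsInterner {
  private static final Interner<ByteString> CREDENTIALS_INTERNER =
      Interners.newWeakInterner();

  static ByteString intern(ByteString credentialsForApp) {
    return CREDENTIALS_INTERNER.intern(credentialsForApp);
  }
}
{code}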



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7270) Resource#getVirtualCores() does unsafe casting from long to int.

2017-10-12 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7270:
---
Attachment: YARN-7270.003.patch

> Resource#getVirtualCores() does unsafe casting from long to int.
> 
>
> Key: YARN-7270
> URL: https://issues.apache.org/jira/browse/YARN-7270
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.1.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7270.001.patch, YARN-7270.002.patch, 
> YARN-7270.003.patch
>
>
> Class {{Resource}} has three subclasses (FixedValueResource, 
> LightWeightResource, and ResourcePBImpl). Only FixedValueResource handles 
> long-to-int casting safely; the other two don't. This bug was introduced by 
> the resource type feature and causes several unit test failures. For example:
> {code}
> Error Message
> expected:<> but was:<>
> Stacktrace
> java.lang.AssertionError: expected:<> but 
> was:<>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:144)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt.testHeadroomWithBlackListedNodes(TestFSAppAttempt.java:325)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-12 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202839#comment-16202839
 ] 

Vrushali C edited comment on YARN-7169 at 10/13/17 12:08 AM:
-

Also verified that data is being retrieved fine from timeline service v2 and 
being displayed fine by the UI. 

Attaching "FlowRunDetails_Sleepjob" screenshot. I ran the sleep job a few times 
and here you can see the different runs. 

Attaching "Metrics_Yarn_UI". Here you can see the metrics at the Flow Run level 
for the pi job I ran. 


was (Author: vrushalic):
Also verified that data is being retrieved fine from timeline service v2 and 
being displayed fine by the UI. 

Attaching "FlowRunDetails_Sleepjob" screenshot. I ran the sleep job a few times 
and here you can see the different runs. 

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: FlowRunDetails_Sleepjob.png, Metrics_Yarn_UI.png, 
> YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, 
> YARN-7169-YARN-5355_branch2.0003.patch, 
> YARN-7169-YARN-5355_branch2.0004.patch, ui_commits(1), yarn-ui-screenshot.png
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now it is 
> being added into Timeline Service v2's branch2, which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-12 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7169:
-
Attachment: Metrics_Yarn_UI.png

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: FlowRunDetails_Sleepjob.png, Metrics_Yarn_UI.png, 
> YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, 
> YARN-7169-YARN-5355_branch2.0003.patch, 
> YARN-7169-YARN-5355_branch2.0004.patch, ui_commits(1), yarn-ui-screenshot.png
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now it is 
> being added into Timeline Service v2's branch2, which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7311) TestRMWebServicesReservation doesn't really test fair scheduler

2017-10-12 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7311:
---
Issue Type: Sub-task  (was: Improvement)
Parent: YARN-2572

> TestRMWebServicesReservation doesn't really test fair scheduler
> ---
>
> Key: YARN-7311
> URL: https://issues.apache.org/jira/browse/YARN-7311
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler, reservation system
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7311.001.patch, YARN-7311.WIP.patch
>
>
> YARN-4248 introduced the REST API for submit/update/delete Reservations. 
> Class {{TestRMWebServicesReservation}} intends to test both FS and CS. The 
> test cases designed for fair scheduler actually test capacity scheduler. The 
> following code in method {{configureServlets}} shows it sets the scheduler to 
> CS even test cases are for fair scheduler.
> {code}
>   conf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
>   ResourceScheduler.class);
>   CapacitySchedulerConfiguration csconf =
>   new CapacitySchedulerConfiguration(conf);
>   String[] queues = { "default", "dedicated" };
>   csconf.setQueues("root", queues);
>   csconf.setCapacity("root.default", 50.0f);
>   csconf.setCapacity("root.dedicated", 50.0f);
>   csconf.setReservable("root.dedicated", true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-12 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7169:
-
Attachment: FlowRunDetails_Sleepjob.png

Also verified that data is being retrieved fine from timeline service v2 and 
being displayed fine by the UI. 

Attaching "FlowRunDetaisl_Sleepjob" screenshot. I ran the sleep job a few times 
and here you can see the different runs. 

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: FlowRunDetails_Sleepjob.png, 
> YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, 
> YARN-7169-YARN-5355_branch2.0003.patch, 
> YARN-7169-YARN-5355_branch2.0004.patch, ui_commits(1), yarn-ui-screenshot.png
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now it is 
> being added into Timeline Service v2's branch2, which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-12 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202839#comment-16202839
 ] 

Vrushali C edited comment on YARN-7169 at 10/13/17 12:04 AM:
-

Also verified that data is being retrieved fine from timeline service v2 and 
being displayed fine by the UI. 

Attaching "FlowRunDetails_Sleepjob" screenshot. I ran the sleep job a few times 
and here you can see the different runs. 


was (Author: vrushalic):
Also verified that data is being retrieved fine from timeline service v2 and 
being displayed fine by the UI. 

Attaching "FlowRunDetaisl_Sleepjob" screenshot. I ran the sleep job a few times 
and here you can see the different runs. 

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: FlowRunDetails_Sleepjob.png, 
> YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, 
> YARN-7169-YARN-5355_branch2.0003.patch, 
> YARN-7169-YARN-5355_branch2.0004.patch, ui_commits(1), yarn-ui-screenshot.png
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now it is 
> being added into Timeline Service v2's branch2, which is YARN-5355_branch2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4859) [Bug] Unable to submit a job to a reservation when using FairScheduler

2017-10-12 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202837#comment-16202837
 ] 

Yufei Gu edited comment on YARN-4859 at 10/13/17 12:03 AM:
---

[~subru], I found a bug in the reservation system for FS and posted a patch in 
YARN-7311. The bug may prevent users from submitting/updating/listing 
reservations for FS whenever they use a short queue name. It is a small change. 
Could you please have a look at it? Is it the same issue you hit?


was (Author: yufeigu):
[~subru], I found a bug in the reservation system for FS and posted a patch in 
YARN-7311. It is a small change. Could you please have a look at it? Is it the 
same issue you hit?

> [Bug] Unable to submit a job to a reservation when using FairScheduler
> --
>
> Key: YARN-4859
> URL: https://issues.apache.org/jira/browse/YARN-4859
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Subru Krishnan
>Assignee: Yufei Gu
>Priority: Blocker
>
> Jobs submitted to a reservation get stuck in the scheduled stage when using 
> the FairScheduler. I came across this while working on YARN-4827 
> (documentation for configuring the ReservationSystem for the FairScheduler).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4859) [Bug] Unable to submit a job to a reservation when using FairScheduler

2017-10-12 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202837#comment-16202837
 ] 

Yufei Gu commented on YARN-4859:


[~subru], I found a bug in the reservation system for FS and posted a patch in 
YARN-7311. It is a small change. Could you please have a look at it? Is it the 
same issue you hit?

> [Bug] Unable to submit a job to a reservation when using FairScheduler
> --
>
> Key: YARN-4859
> URL: https://issues.apache.org/jira/browse/YARN-4859
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Subru Krishnan
>Assignee: Yufei Gu
>Priority: Blocker
>
> Jobs submitted to a reservation get stuck in the scheduled stage when using 
> the FairScheduler. I came across this while working on YARN-4827 
> (documentation for configuring the ReservationSystem for the FairScheduler).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7311) TestRMWebServicesReservation doesn't really test fair scheduler

2017-10-12 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202833#comment-16202833
 ] 

Yufei Gu commented on YARN-7311:


Uploaded patch v1. What it does:
- Makes the tests really test the fair scheduler.
- Fixes the queue name bug in the reservation system for the fair scheduler.
- Fixes the fair scheduler configuration issue in the test.

> TestRMWebServicesReservation doesn't really test fair scheduler
> ---
>
> Key: YARN-7311
> URL: https://issues.apache.org/jira/browse/YARN-7311
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, reservation system
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7311.001.patch, YARN-7311.WIP.patch
>
>
> YARN-4248 introduced the REST API for submitting/updating/deleting 
> reservations. Class {{TestRMWebServicesReservation}} intends to test both FS 
> and CS, but the test cases designed for the fair scheduler actually test the 
> capacity scheduler. The following code in method {{configureServlets}} shows 
> that it sets the scheduler to CS even when the test cases are for the fair 
> scheduler.
> {code}
>   conf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
>   ResourceScheduler.class);
>   CapacitySchedulerConfiguration csconf =
>   new CapacitySchedulerConfiguration(conf);
>   String[] queues = { "default", "dedicated" };
>   csconf.setQueues("root", queues);
>   csconf.setCapacity("root.default", 50.0f);
>   csconf.setCapacity("root.dedicated", 50.0f);
>   csconf.setReservable("root.dedicated", true);
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7198) Add jsvc support for RegistryDNS

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16202823#comment-16202823
 ] 

Hadoop QA commented on YARN-7198:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
28s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
5s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
40s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 30s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
24s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 32s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 29s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
3s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| 

[jira] [Updated] (YARN-7311) TestRMWebServicesReservation doesn't really test fair scheduler

2017-10-12 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-7311:
---
Attachment: YARN-7311.001.patch

> TestRMWebServicesReservation doesn't really test fair scheduler
> ---
>
> Key: YARN-7311
> URL: https://issues.apache.org/jira/browse/YARN-7311
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, reservation system
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7311.001.patch, YARN-7311.WIP.patch
>
>
> YARN-4248 introduced the REST API for submit/update/delete Reservations. 
> Class {{TestRMWebServicesReservation}} intends to test both FS and CS. The 
> test cases designed for fair scheduler actually test capacity scheduler. The 
> following code in method {{configureServlets}} shows it sets the scheduler to 
> CS even when the test cases are for fair scheduler.
> {code}
>   conf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
>   ResourceScheduler.class);
>   CapacitySchedulerConfiguration csconf =
>   new CapacitySchedulerConfiguration(conf);
>   String[] queues = { "default", "dedicated" };
>   csconf.setQueues("root", queues);
>   csconf.setCapacity("root.default", 50.0f);
>   csconf.setCapacity("root.dedicated", 50.0f);
>   csconf.setReservable("root.dedicated", true);
> {code}
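
For reference, a minimal sketch of what the fair scheduler variant of that setup could look like. This is an illustration, not the attached patch; {{allocFile}} is a hypothetical temp file written by the test setup, since FS declares reservable queues in the allocation file rather than through a scheduler configuration object:

{code}
// Sketch only: point the servlet config at the fair scheduler.
conf.setClass(YarnConfiguration.RM_SCHEDULER, FairScheduler.class,
    ResourceScheduler.class);
// allocFile is a hypothetical temp file containing, e.g.:
// <allocations>
//   <queue name="dedicated">
//     <reservation/>
//   </queue>
// </allocations>
conf.set(FairSchedulerConfiguration.ALLOCATION_FILE,
    allocFile.getAbsolutePath());
{code}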






[jira] [Commented] (YARN-7246) Fix the default docker binary path

2017-10-12 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202815#comment-16202815
 ] 

Shane Kumpf commented on YARN-7246:
---

Thanks for the review [~jlowe]. Attaching a patch where the conf lookup happens 
in the function. The way conf files were handled in the tests was somewhat 
error-prone, so I tried to clean that up so that each test function handles 
generating the config file it needs. Let me know your thoughts.

> Fix the default docker binary path
> --
>
> Key: YARN-7246
> URL: https://issues.apache.org/jira/browse/YARN-7246
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Blocker
> Attachments: YARN-7246-branch-2.8.2.001.patch, 
> YARN-7246-branch-2.8.2.002.patch, YARN-7246-branch-2.8.2.003.patch, 
> YARN-7246-branch-2.8.2.004.patch, YARN-7246-branch-2.8.2.005.patch, 
> YARN-7246-branch-2.8.2.006.patch, YARN-7246-branch-2.8.2.007.patch
>
>







[jira] [Assigned] (YARN-7311) TestRMWebServicesReservation doesn't really test fair scheduler

2017-10-12 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu reassigned YARN-7311:
--

Assignee: Yufei Gu

> TestRMWebServicesReservation doesn't really test fair scheduler
> ---
>
> Key: YARN-7311
> URL: https://issues.apache.org/jira/browse/YARN-7311
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: fairscheduler, reservation system
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-7311.WIP.patch
>
>
> YARN-4248 introduced the REST API for submit/update/delete Reservations. 
> Class {{TestRMWebServicesReservation}} intends to test both FS and CS. The 
> test cases designed for fair scheduler actually test capacity scheduler. The 
> following code in method {{configureServlets}} shows it sets the scheduler to 
> CS even when the test cases are for fair scheduler.
> {code}
>   conf.setClass(YarnConfiguration.RM_SCHEDULER, CapacityScheduler.class,
>   ResourceScheduler.class);
>   CapacitySchedulerConfiguration csconf =
>   new CapacitySchedulerConfiguration(conf);
>   String[] queues = { "default", "dedicated" };
>   csconf.setQueues("root", queues);
>   csconf.setCapacity("root.default", 50.0f);
>   csconf.setCapacity("root.dedicated", 50.0f);
>   csconf.setReservable("root.dedicated", true);
> {code}






[jira] [Updated] (YARN-7246) Fix the default docker binary path

2017-10-12 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-7246:
--
Attachment: YARN-7246-branch-2.8.2.007.patch

> Fix the default docker binary path
> --
>
> Key: YARN-7246
> URL: https://issues.apache.org/jira/browse/YARN-7246
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Blocker
> Attachments: YARN-7246-branch-2.8.2.001.patch, 
> YARN-7246-branch-2.8.2.002.patch, YARN-7246-branch-2.8.2.003.patch, 
> YARN-7246-branch-2.8.2.004.patch, YARN-7246-branch-2.8.2.005.patch, 
> YARN-7246-branch-2.8.2.006.patch, YARN-7246-branch-2.8.2.007.patch
>
>







[jira] [Commented] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202807#comment-16202807
 ] 

Hadoop QA commented on YARN-6608:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 21 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 3s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
48s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
56s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
12s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
54s{color} | {color:green} root generated 0 new + 1440 unchanged - 5 fixed = 
1440 total (was 1445) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  1s{color} | {color:orange} root: The patch generated 33 new + 161 unchanged 
- 235 fixed = 194 total (was 396) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
28s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 21s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 30s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-rumen in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
59s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
35s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestIncreaseAllocationExpirer
 |
|   | 

[jira] [Comment Edited] (YARN-7198) Add jsvc support for RegistryDNS

2017-10-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202787#comment-16202787
 ] 

Allen Wittenauer edited comment on YARN-7198 at 10/12/17 11:22 PM:
---

Anyway, rebuilt and it started up. Also it didn't switch to yarn this time 
without the env var set, so I'm not sure what was going on there.

In any case, I'm +1 on this particular patch, pending Jenkins. 

Now some general bad news, not related to this patch:

Ran a few queries, but this one is a bit concerning:

{code}
root@ubuntu:/hadoop/logs# dig @localhost -p 54 .
;; Warning: query response not set

; <<>> DiG 9.10.3-P4-Ubuntu <<>> @localhost -p 54 .
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOTAUTH, id: 47794
;; flags: rd ad; QUERY: 0, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; Query time: 0 msec
;; SERVER: 127.0.0.1#54(127.0.0.1)
;; WHEN: Thu Oct 12 16:04:54 PDT 2017
;; MSG SIZE  rcvd: 12

root@ubuntu:/hadoop/logs# dig @localhost -p 54 axfr .
;; Connection to ::1#54(::1) for . failed: connection refused.
;; communications error to 127.0.0.1#54: end of file
root@ubuntu:/hadoop/logs# 
{code}

It looks like it effectively fails when asked about a root zone, which is bad.

It's also kind of interesting what it does and doesn't log. It should probably 
be configured to rotate logs based on size, not date.

The real showstopper though: RegistryDNS basically eats a core.  It is running 
with 100% CPU utilization with and without jsvc. On my laptop, this is 
triggering my fan.


was (Author: aw):
Anyway, rebuilt and it started up. Also it didn't switch to yarn this time 
without the env var set, so I'm not sure what was going on there.

In any case, I'm +1 on this particular patch. 

Now some general bad news, not related to this patch:

Ran a few queries, but this one is a bit concerning:

{code}
root@ubuntu:/hadoop/logs# dig @localhost -p 54 .
;; Warning: query response not set

; <<>> DiG 9.10.3-P4-Ubuntu <<>> @localhost -p 54 .
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOTAUTH, id: 47794
;; flags: rd ad; QUERY: 0, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; Query time: 0 msec
;; SERVER: 127.0.0.1#54(127.0.0.1)
;; WHEN: Thu Oct 12 16:04:54 PDT 2017
;; MSG SIZE  rcvd: 12

root@ubuntu:/hadoop/logs# dig @localhost -p 54 axfr .
;; Connection to ::1#54(::1) for . failed: connection refused.
;; communications error to 127.0.0.1#54: end of file
root@ubuntu:/hadoop/logs# 
{code}

It looks like it effectively fails when asked about a root zone, which is bad.

It's also kind of interesting what it does and doesn't log. It should probably 
be configured to rotate logs based on size, not date.

The real showstopper though: RegistryDNS basically eats a core.  It is running 
with 100% CPU utilization with and without jsvc. On my laptop, this is 
triggering my fan.

> Add jsvc support for RegistryDNS
> 
>
> Key: YARN-7198
> URL: https://issues.apache.org/jira/browse/YARN-7198
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7198-yarn-native-services.01.patch, 
> YARN-7198-yarn-native-services.02.patch, 
> YARN-7198-yarn-native-services.03.patch, 
> YARN-7198-yarn-native-services.04.patch, 
> YARN-7198-yarn-native-services.05.patch, 
> YARN-7198-yarn-native-services.06.patch
>
>
> RegistryDNS should have jsvc support and be managed through the shell 
> scripts, rather than being started manually. See original comments on 
> YARN-7191.






[jira] [Commented] (YARN-7198) Add jsvc support for RegistryDNS

2017-10-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202787#comment-16202787
 ] 

Allen Wittenauer commented on YARN-7198:


Anyway, rebuilt and it started up. Also it didn't switch to yarn this time 
without the env var set, so I'm not sure what was going on there.

In any case, I'm +1 on this particular patch. 

Now some general bad news, not related to this patch:

Ran a few queries, but this one is a bit concerning:

{code}
root@ubuntu:/hadoop/logs# dig @localhost -p 54 .
;; Warning: query response not set

; <<>> DiG 9.10.3-P4-Ubuntu <<>> @localhost -p 54 .
; (2 servers found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOTAUTH, id: 47794
;; flags: rd ad; QUERY: 0, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; Query time: 0 msec
;; SERVER: 127.0.0.1#54(127.0.0.1)
;; WHEN: Thu Oct 12 16:04:54 PDT 2017
;; MSG SIZE  rcvd: 12

root@ubuntu:/hadoop/logs# dig @localhost -p 54 axfr .
;; Connection to ::1#54(::1) for . failed: connection refused.
;; communications error to 127.0.0.1#54: end of file
root@ubuntu:/hadoop/logs# 
{code}

It looks like it effectively fails when asked about a root zone, which is bad.

It's also kind of interesting what it does and doesn't log. It should probably 
be configured to rotate logs based on size, not date.

The real showstopper though: RegistryDNS basically eats a core.  It is running 
with 100% CPU utilization with and without jsvc. On my laptop, this is 
triggering my fan.
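
On the rotation point, a hedged sketch of size-based rotation in log4j.properties; the appender name and file here are assumptions, since the actual RegistryDNS logger wiring may differ:

{code}
# Hypothetical appender name and log file; adjust to the real wiring.
log4j.appender.registrydns=org.apache.log4j.RollingFileAppender
log4j.appender.registrydns.File=${hadoop.log.dir}/registrydns.log
# Rotate on size instead of date:
log4j.appender.registrydns.MaxFileSize=256MB
log4j.appender.registrydns.MaxBackupIndex=20
log4j.appender.registrydns.layout=org.apache.log4j.PatternLayout
log4j.appender.registrydns.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
{code}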

> Add jsvc support for RegistryDNS
> 
>
> Key: YARN-7198
> URL: https://issues.apache.org/jira/browse/YARN-7198
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7198-yarn-native-services.01.patch, 
> YARN-7198-yarn-native-services.02.patch, 
> YARN-7198-yarn-native-services.03.patch, 
> YARN-7198-yarn-native-services.04.patch, 
> YARN-7198-yarn-native-services.05.patch, 
> YARN-7198-yarn-native-services.06.patch
>
>
> RegistryDNS should have jsvc support and be managed through the shell 
> scripts, rather than being started manually. See original comments on 
> YARN-7191.






[jira] [Commented] (YARN-7320) Duplicate LiteralByteStrings in SystemCredentialsForAppsProto.credentialsForApp_

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202780#comment-16202780
 ] 

Hadoop QA commented on YARN-7320:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m  3s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 4 new + 28 unchanged - 1 fixed = 32 total (was 29) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 23s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
55s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-7320 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891831/YARN-7320.01.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 119f6833806a 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / e46d5bb |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/17899/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17899/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 

[jira] [Created] (YARN-7323) Some changes in service REST API

2017-10-12 Thread Jian He (JIRA)
Jian He created YARN-7323:
-

 Summary: Some changes in service REST API
 Key: YARN-7323
 URL: https://issues.apache.org/jira/browse/YARN-7323
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Jian He
Assignee: Jian He









[jira] [Commented] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202742#comment-16202742
 ] 

Hadoop QA commented on YARN-7169:
-

(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-YARN-Build/17900/console in case of 
problems.


> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, 
> YARN-7169-YARN-5355_branch2.0003.patch, 
> YARN-7169-YARN-5355_branch2.0004.patch, ui_commits(1), yarn-ui-screenshot.png
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now adding 
> into Timeline Service v2's branch2 which is YARN-5355_branch2.






[jira] [Updated] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-12 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7169:
-
Attachment: YARN-7169-YARN-5355_branch2.0004.patch

Updating a rebased patch 004. 

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, 
> YARN-7169-YARN-5355_branch2.0003.patch, 
> YARN-7169-YARN-5355_branch2.0004.patch, ui_commits(1), yarn-ui-screenshot.png
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now adding 
> into Timeline Service v2's branch2 which is YARN-5355_branch2.






[jira] [Updated] (YARN-7320) Duplicate LiteralByteStrings in SystemCredentialsForAppsProto.credentialsForApp_

2017-10-12 Thread Misha Dmitriev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Misha Dmitriev updated YARN-7320:
-
Attachment: YARN-7320.01.patch

> Duplicate LiteralByteStrings in 
> SystemCredentialsForAppsProto.credentialsForApp_
> 
>
> Key: YARN-7320
> URL: https://issues.apache.org/jira/browse/YARN-7320
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
> Attachments: YARN-7320.01.patch
>
>
> Using jxray (www.jxray.com) I've analyzed several heap dumps from YARN 
> Resource Manager running in a big cluster. The tool uncovered several sources 
> of memory waste. One problem, which results in wasting more than a quarter of 
> all memory, is a large number of duplicate {{LiteralByteString}} objects 
> coming from the following reference chain:
> {code}
> 1,011,810K (26.9%): byte[]: 5416705 / 100% dup arrays (22108 unique)
> ↖com.google.protobuf.LiteralByteString.bytes
> ↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$.credentialsForApp_
> ↖{j.u.ArrayList}
> ↖j.u.Collections$UnmodifiableRandomAccessList.c
> ↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$NodeHeartbeatResponseProto.systemCredentialsForApps_
> ↖org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb.NodeHeartbeatResponsePBImpl.proto
> ↖org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl.latestNodeHeartBeatResponse
> ↖org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode.rmNode
> ...
> {code}
> That is, collectively reference chains that look as above hold in memory 5.4 
> million {{LiteralByteString}} objects, but only ~22 thousand of these objects 
> are unique. Deduplicating these objects, e.g. using a Google Object Interner 
> instance, would save ~1GB of memory.
> It looks like the main place where the above {{LiteralByteString}}s are 
> created and attached to the {{SystemCredentialsForAppsProto}} objects is in 
> {{NodeHeartbeatResponsePBImpl.java}}, method 
> {{addSystemCredentialsToProto()}}. Probably adding a call to an interner 
> there will fix the problem.
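
To illustrate the suggestion, a minimal sketch of the interner idea (not the attached patch); it assumes Guava's {{Interners}} is on the classpath and that the duplicated values are the per-app credentials {{ByteString}}s:

{code}
import com.google.common.collect.Interner;
import com.google.common.collect.Interners;
import com.google.protobuf.ByteString;

// Sketch only: a weak interner canonicalizes equal ByteStrings, so the
// ~5.4M duplicate instances collapse onto the ~22K unique values and the
// duplicates become collectible garbage.
public final class CredentialsInterner {
  private static final Interner<ByteString> INTERNER =
      Interners.newWeakInterner();

  public static ByteString intern(ByteString creds) {
    return INTERNER.intern(creds);
  }
}
{code}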






[jira] [Commented] (YARN-7198) Add jsvc support for RegistryDNS

2017-10-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202732#comment-16202732
 ] 

Allen Wittenauer commented on YARN-7198:


I'm still playing with the last patch, but I'm very perplexed.

If I set

{code}
export YARN_REGISTRYDNS_SECURE_USER=yarn
{code}

in hadoop-env.sh/yarn-env.sh and then run:

{code}
yarn --daemon start registrydns
{code}

 the process breaks with 

{code}
java.lang.ClassNotFoundException: 
org.apache.hadoop.registry.server.dns.PrivilegedRegistryDNSStarter
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at 
org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:151)
Cannot load daemon
Service exit with a return value of 3
{code}

That's indicative of the classname being wrong, the jar files being wrong, or 
the class not being in the jar files.  A quick pass through the jars I'm using 
shows it isn't in there.  I'll double-check my build to make sure it's the 
correct one.  It's likely a local build problem, so whatever.

But if I don't set that (and therefore don't get the jsvc behavior), it comes 
up as yarn on port 54... which shouldn't work, since 54 is a reserved port and 
the yarn user shouldn't have access to that port.  Very, very curious.
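
For reference, a quick way to check whether the class actually made it into a build; the path below is illustrative and assumes the usual share/hadoop/yarn layout:

{code}
# Illustrative path; adjust to your install layout.
for j in "$HADOOP_HOME"/share/hadoop/yarn/*.jar; do
  jar tf "$j" | grep -q 'PrivilegedRegistryDNSStarter' && echo "$j"
done
{code}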

> Add jsvc support for RegistryDNS
> 
>
> Key: YARN-7198
> URL: https://issues.apache.org/jira/browse/YARN-7198
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7198-yarn-native-services.01.patch, 
> YARN-7198-yarn-native-services.02.patch, 
> YARN-7198-yarn-native-services.03.patch, 
> YARN-7198-yarn-native-services.04.patch, 
> YARN-7198-yarn-native-services.05.patch, 
> YARN-7198-yarn-native-services.06.patch
>
>
> RegistryDNS should have jsvc support and be managed through the shell 
> scripts, rather than being started manually. See original comments on 
> YARN-7191.






[jira] [Updated] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-12 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7169:
-
Attachment: yarn-ui-screenshot.png

Attaching a screenshot of the new YARN UI that I took with a 
pseudo-distributed cluster setup on my local Mac.

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, 
> YARN-7169-YARN-5355_branch2.0003.patch, ui_commits(1), yarn-ui-screenshot.png
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now adding 
> into Timeline Service v2's branch2 which is YARN-5355_branch2.






[jira] [Commented] (YARN-7319) java.net.UnknownHostException when trying contact node by hostname

2017-10-12 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202708#comment-16202708
 ] 

Daryn Sharp commented on YARN-7319:
---

bq. java.lang.IllegalArgumentException: java.net.UnknownHostException: 
hadoop-slave-743067341-hqrbk

I'm a bit confused.  Why is the node resolving itself as 
"hadoop-slave-743067341-hqrbk"?  I believe that's the hostname self-reported 
during registration.  If this is truly an IP-only environment, presumably that 
means the junk hostname is only in that node's /etc/hosts, but not in 
/etc/hosts of the other nodes?  I understand not having reverse DNS.  However, 
not having forward DNS but assigning a private hostname is a bit obtuse; might 
as well not let the host resolve itself if nobody else can resolve it...

Did you try setting {{hadoop.security.token.service.use_ip=false}} per the 
javadocs on buildTokenService?  That will get you past the exception while 
generating the container token.  It's likely the client won't be able to locate 
the token though; i.e. the token will have a hostname, but if the env is 
IP-only, the client must use an IP to connect and won't be able to match the IP 
with the hostname in the token.
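
For anyone trying this, a sketch of that client-side setting in core-site.xml (the property is real; whether it fully unblocks an IP-only Kubernetes setup is exactly the open question above):

{code}
<!-- core-site.xml: build token services from hostnames instead of IPs. -->
<property>
  <name>hadoop.security.token.service.use_ip</name>
  <value>false</value>
</property>
{code}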



> java.net.UnknownHostException when trying contact node by hostname
> --
>
> Key: YARN-7319
> URL: https://issues.apache.org/jira/browse/YARN-7319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Evgeny Makarov
>
> I'm trying to set up Hadoop on a Kubernetes cluster with the following setup:
> Hadoop master is a k8s pod
> Each hadoop slave is an additional k8s pod
> All communication is handled in an IP-based manner. In HDFS I have the 
> setting dfs.namenode.datanode.registration.ip-hostname-check set to false 
> and all works fine; however, the same option is missing for YARN. 
> Here is part of the hadoop-master log from submitting a simple word-count job:
> 2017-10-12 09:00:25,005 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
>  Error trying to assign container token and NM token to an allocated 
> container container_1507798393049_0001_01_01
> java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> hadoop-slave-743067341-hqrbk
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
> at 
> org.apache.hadoop.yarn.server.utils.BuilderUtils.newContainerToken(BuilderUtils.java:258)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager.createContainerToken(RMContainerTokenSecretManager.java:220)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.pullNewlyAllocatedContainersAndNMTokens(SchedulerApplicationAttempt.java:454)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.getAllocation(FiCaSchedulerApp.java:269)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocate(CapacityScheduler.java:988)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:971)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:964)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:789)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:105)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:795)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:776)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.UnknownHostException: hadoop-slave-743067341-hqrbk
> ... 19 more
> As can be seen, host hadoop-slave-743067341-hqrbk is unreachable. Adding 

[jira] [Commented] (YARN-7224) Support GPU isolation for docker container

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202698#comment-16202698
 ] 

Hadoop QA commented on YARN-7224:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
26s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  2s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 32 new + 379 unchanged - 11 fixed = 411 total (was 390) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 14s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
2s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m  1s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 31s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| 

[jira] [Assigned] (YARN-1014) Configure OOM Killer to kill OPPORTUNISTIC containers first

2017-10-12 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-1014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen reassigned YARN-1014:


Assignee: Miklos Szegedi  (was: Haibo Chen)

> Configure OOM Killer to kill OPPORTUNISTIC containers first
> ---
>
> Key: YARN-1014
> URL: https://issues.apache.org/jira/browse/YARN-1014
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha1
>Reporter: Arun C Murthy
>Assignee: Miklos Szegedi
> Attachments: YARN-1014.00.patch, YARN-1014.01.patch, 
> YARN-1014.02.patch
>
>
> YARN-2882 introduces the notion of OPPORTUNISTIC containers. These containers 
> should be killed first should the system run out of memory. 
> -
> Previous description:
> Once RM allocates 'speculative containers' we need to get LCE to schedule 
> them at lower priorities via cgroups.






[jira] [Commented] (YARN-7170) Investigate bower dependencies for YARN UI v2

2017-10-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202671#comment-16202671
 ] 

Allen Wittenauer commented on YARN-7170:


Using HADOOP-14945 (which fixes docker -i mode when GPG signing isn't 
required), I ran two builds on ASF Jenkins, each using this command line:

{code}
dev-support/bin/create-release --docker --native --dockercache
{code}

Once with plain trunk+14945 and once with the -02 patch.  My understanding is 
that bower and friends cache in the home directory. By running each build in 
separate Docker containers with their own home dirs and their own maven repo 
caches, nothing should get cached between the two builds.

As a result, the -02 patch cuts build time by  ~3 minutes.  Of course, the ASF 
also has a significantly faster network pipe than if you were building at home. 
Additionally the node I was running on wasn't doing much during the first run 
but got another job scheduled during the second run.  As a result, times here 
should be viewed as conservative.

It'd be great if someone else can confirm that upgrading the frontend plugin 
has a significant impact on the build time.

> Investigate bower dependencies for YARN UI v2
> -
>
> Key: YARN-7170
> URL: https://issues.apache.org/jira/browse/YARN-7170
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sunil G
>Priority: Critical
> Attachments: YARN-7170.001.patch, YARN-7170.002.patch
>
>
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  50% (38449/75444), 722.46 MiB | 3.30 MiB/s
> ...
> [INFO] bower ember#2.2.0   progress Receiving
> objects:  99% (75017/75444), 1.56 GiB | 3.31 MiB/s
> Investigate the dependencies and reduce the download size and speed of 
> compilation.
> cc/ [~Sreenath] and [~akhilpb]






[jira] [Commented] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202670#comment-16202670
 ] 

Hadoop QA commented on YARN-7169:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} YARN-7169 does not apply to YARN-5355_branch2. Rebase required? 
Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-7169 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891821/YARN-7169-YARN-5355_branch2.0003.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17896/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, 
> YARN-7169-YARN-5355_branch2.0003.patch, ui_commits(1)
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now adding 
> into Timeline Service v2's branch2 which is YARN-5355_branch2.






[jira] [Updated] (YARN-7169) Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)

2017-10-12 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-7169:
-
Attachment: YARN-7169-YARN-5355_branch2.0003.patch

Uploading v003 with maven changes

> Backport new yarn-ui to branch2 code (starting with YARN-5355_branch2)
> --
>
> Key: YARN-7169
> URL: https://issues.apache.org/jira/browse/YARN-7169
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>Priority: Critical
> Attachments: YARN-7169-YARN-5355_branch2.0001.patch, 
> YARN-7169-YARN-5355_branch2.0002.patch, 
> YARN-7169-YARN-5355_branch2.0003.patch, ui_commits(1)
>
>
> Jira to track the backport of the new yarn-ui onto branch2. Right now adding 
> into Timeline Service v2's branch2 which is YARN-5355_branch2.






[jira] [Commented] (YARN-7321) Backport container-executor changes from YARN-6852 to branch-2

2017-10-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202660#comment-16202660
 ] 

Wangda Tan commented on YARN-7321:
--

[~vvasudev], patch looks good to me. Could you verify if this works fine in a 
branch-2 deploy?

> Backport container-executor changes from YARN-6852 to branch-2
> --
>
> Key: YARN-7321
> URL: https://issues.apache.org/jira/browse/YARN-7321
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-7321-branch-2.001.patch
>
>
> YARN-6852 added support for GPUs to container-executor. It also re-factored 
> the container-executor code to add support for modules. The non-GPU changes 
> need to be backported to branch-2.






[jira] [Commented] (YARN-7319) java.net.UnknownHostException when trying contact node by hostname

2017-10-12 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202642#comment-16202642
 ] 

Elek, Marton commented on YARN-7319:


I think it's a valid report even if it's not a bug but rather an 
improvement/feature request. I understand that the current YARN can't be 
started without a DNS system, but it's a valid request to make it usable (at 
least without Kerberos) in an IP-only environment (such as Kubernetes without 
StatefulSets). It's not just about Kubernetes; there are cloud providers where 
DNS (or at least reverse DNS) is not guaranteed. Therefore, a setting to run 
YARN with IPs only, without DNS/reverse DNS, would help to make Hadoop usable 
in these environments (even if there are limitations to this approach).

> java.net.UnknownHostException when trying contact node by hostname
> --
>
> Key: YARN-7319
> URL: https://issues.apache.org/jira/browse/YARN-7319
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Evgeny Makarov
>
> I'm trying to set up Hadoop on a Kubernetes cluster with the following setup:
> Hadoop master is a k8s pod
> Each hadoop slave is an additional k8s pod
> All communication is handled in an IP-based manner. In HDFS I have the 
> setting dfs.namenode.datanode.registration.ip-hostname-check set to false 
> and all works fine; however, the same option is missing for YARN. 
> Here is part of the hadoop-master log from submitting a simple word-count job:
> 2017-10-12 09:00:25,005 ERROR 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt:
>  Error trying to assign container token and NM token to an allocated 
> container container_1507798393049_0001_01_01
> java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> hadoop-slave-743067341-hqrbk
> at 
> org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:377)
> at 
> org.apache.hadoop.yarn.server.utils.BuilderUtils.newContainerToken(BuilderUtils.java:258)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.security.RMContainerTokenSecretManager.createContainerToken(RMContainerTokenSecretManager.java:220)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerApplicationAttempt.pullNewlyAllocatedContainersAndNMTokens(SchedulerApplicationAttempt.java:454)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.common.fica.FiCaSchedulerApp.getAllocation(FiCaSchedulerApp.java:269)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.allocate(CapacityScheduler.java:988)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:971)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl$AMContainerAllocatedTransition.transition(RMAppAttemptImpl.java:964)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$MultipleInternalArc.doTransition(StateMachineFactory.java:385)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:302)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
> at 
> org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:789)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:105)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:795)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:776)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:183)
> at 
> org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:109)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.net.UnknownHostException: hadoop-slave-743067341-hqrbk
> ... 19 more
> As can be seen, host hadoop-slave-743067341-hqrbk is unreachable. Adding a 
> record to /etc/hosts of the master will solve the problem; however, it's not 
> an option in a Kubernetes environment. There should be a way to resolve nodes 
> by IP address.
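
For comparison, the existing HDFS-side knob the reporter refers to; the request is effectively a YARN analogue of this hdfs-site.xml setting:

{code}
<!-- hdfs-site.xml: skip hostname resolution checks on datanode registration. -->
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
{code}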




[jira] [Comment Edited] (YARN-7198) Add jsvc support for RegistryDNS

2017-10-12 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202614#comment-16202614
 ] 

Jian He edited comment on YARN-7198 at 10/12/17 9:01 PM:
-

bq.  please link "YARN Registry" in the beginning of the document to the YARN 
registry documentation.
Link added in the beginning of RegistryDNS.md
bq. let's fix the YARN registry documentation to explicitly say that a separate 
zookeeper instance is required.
Clarified in the index.md of yarn registry documentation 
bq. the zk quorum info in the registrydns docs contradict what is in the YARN 
registry documentation. this clearly needs to get rectified.
It should be "A comma separated list of hostname:port pairs defining the 
zookeeper quorum". I changed the wording to be the same. 




was (Author: jianhe):
bq.  please link "YARN Registry" in the beginning of the document to the YARN 
registry documentation.
Link added in the the begging of RegistryDNS.md
bq. let's fix the YARN registry documentation to explicitly say that a separate 
zookeeper instance is required.
Clarified in the index.md of yarn registry documentation 
bq. the zk quorum info in the registrydns docs contradict what is in the YARN 
registry documentation. this clearly needs to get rectified.
It should be "A comma separated list of hostname:port pairs defining the 
zookeeper quorum". I changed the wording to be the same. 



> Add jsvc support for RegistryDNS
> 
>
> Key: YARN-7198
> URL: https://issues.apache.org/jira/browse/YARN-7198
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7198-yarn-native-services.01.patch, 
> YARN-7198-yarn-native-services.02.patch, 
> YARN-7198-yarn-native-services.03.patch, 
> YARN-7198-yarn-native-services.04.patch, 
> YARN-7198-yarn-native-services.05.patch, 
> YARN-7198-yarn-native-services.06.patch
>
>
> RegistryDNS should have jsvc support and be managed through the shell 
> scripts, rather than being started manually. See original comments on 
> YARN-7191.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7198) Add jsvc support for RegistryDNS

2017-10-12 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202614#comment-16202614
 ] 

Jian He commented on YARN-7198:
---

bq.  please link "YARN Registry" in the beginning of the document to the YARN 
registry documentation.
Link added in the beginning of RegistryDNS.md
bq. let's fix the YARN registry documentation to explicitly say that a separate 
zookeeper instance is required.
Clarified in the index.md of the yarn registry documentation.
bq. the zk quorum info in the registrydns docs contradict what is in the YARN 
registry documentation. this clearly needs to get rectified.
It should be "A comma separated list of hostname:port pairs defining the 
zookeeper quorum". I changed the wording to be the same. 



> Add jsvc support for RegistryDNS
> 
>
> Key: YARN-7198
> URL: https://issues.apache.org/jira/browse/YARN-7198
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7198-yarn-native-services.01.patch, 
> YARN-7198-yarn-native-services.02.patch, 
> YARN-7198-yarn-native-services.03.patch, 
> YARN-7198-yarn-native-services.04.patch, 
> YARN-7198-yarn-native-services.05.patch, 
> YARN-7198-yarn-native-services.06.patch
>
>
> RegistryDNS should have jsvc support and be managed through the shell 
> scripts, rather than being started manually. See original comments on 
> YARN-7191.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7198) Add jsvc support for RegistryDNS

2017-10-12 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-7198:
--
Attachment: YARN-7198-yarn-native-services.06.patch

> Add jsvc support for RegistryDNS
> 
>
> Key: YARN-7198
> URL: https://issues.apache.org/jira/browse/YARN-7198
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn-native-services
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Critical
> Attachments: YARN-7198-yarn-native-services.01.patch, 
> YARN-7198-yarn-native-services.02.patch, 
> YARN-7198-yarn-native-services.03.patch, 
> YARN-7198-yarn-native-services.04.patch, 
> YARN-7198-yarn-native-services.05.patch, 
> YARN-7198-yarn-native-services.06.patch
>
>
> RegistryDNS should have jsvc support and be managed through the shell 
> scripts, rather than being started manually. See original comments on 
> YARN-7191.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath

2017-10-12 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202601#comment-16202601
 ] 

Jason Lowe commented on YARN-7190:
--

Patch looks good overall, works as advertised.

It would be good to fix the shellcheck warnings.  $\{YARN_DIR\} can just be 
$YARN_DIR.

Has anyone run TestTimelineReaderWebServices with JDK7?  That's the only JDK 
that is timing out the test.  Given it has happened twice now, I don't think it 
was a fluke.  Have there been other YARN-5355_branch2 precommit builds that 
have complained about this test?  That could help pinpoint whether its failure 
is tied to these changes.  Also the ASF warnings were a result of hs_err files 
being dropped, possibly by some unit test.  Not sure if people are seeing that 
in other precommits as well.


> Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user 
> classpath
> 
>
> Key: YARN-7190
> URL: https://issues.apache.org/jira/browse/YARN-7190
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
> Attachments: YARN-7190-YARN-5355_branch2.01.patch
>
>
> [~jlowe] had a good observation about the user classpath getting extra jars 
> in hadoop 2.x brought in with TSv2.  If users start picking up Hadoop 2.x's 
> version of HBase jars instead of the ones they shipped with their job, it 
> could be a problem.
> So when TSv2 is to be used in 2.x, the hbase-related jars should come into 
> only the NM classpath, not the user classpath.
> Here is a list of some jars
> {code}
> commons-csv-1.0.jar
> commons-el-1.0.jar
> commons-httpclient-3.1.jar
> disruptor-3.3.0.jar
> findbugs-annotations-1.3.9-1.jar
> hbase-annotations-1.2.6.jar
> hbase-client-1.2.6.jar
> hbase-common-1.2.6.jar
> hbase-hadoop2-compat-1.2.6.jar
> hbase-hadoop-compat-1.2.6.jar
> hbase-prefix-tree-1.2.6.jar
> hbase-procedure-1.2.6.jar
> hbase-protocol-1.2.6.jar
> hbase-server-1.2.6.jar
> htrace-core-3.1.0-incubating.jar
> jamon-runtime-2.4.1.jar
> jasper-compiler-5.5.23.jar
> jasper-runtime-5.5.23.jar
> jcodings-1.0.8.jar
> joni-2.1.2.jar
> jsp-2.1-6.1.14.jar
> jsp-api-2.1-6.1.14.jar
> jsr311-api-1.1.1.jar
> metrics-core-2.2.0.jar
> servlet-api-2.5-6.1.14.jar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-12 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6608:
-
Attachment: YARN-6608-branch-2.v8.patch

[~aw] you're right, they don't work.

I just reverted this part of the change, manually added the changes to the old 
scripts, and tried to run them locally; now they work fine. [~curino], could 
you help verify the same?

> Backport all SLS improvements from trunk to branch-2
> 
>
> Key: YARN-6608
> URL: https://issues.apache.org/jira/browse/YARN-6608
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6608-branch-2.v0.patch, 
> YARN-6608-branch-2.v1.patch, YARN-6608-branch-2.v2.patch, 
> YARN-6608-branch-2.v3.patch, YARN-6608-branch-2.v4.patch, 
> YARN-6608-branch-2.v5.patch, YARN-6608-branch-2.v6.patch, 
> YARN-6608-branch-2.v7.patch, YARN-6608-branch-2.v8.patch
>
>
> The SLS has received lots of attention in trunk, but only some of it made it 
> back to branch-2. This patch is a "raw" fork-lift of the trunk development 
> from hadoop-tools/hadoop-sls.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7244) ShuffleHandler is not aware of disks that are added

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202595#comment-16202595
 ] 

Hadoop QA commented on YARN-7244:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 35s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 58s{color} | {color:orange} root: The patch generated 43 new + 348 unchanged 
- 2 fixed = 391 total (was 350) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 48s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
9s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 41s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  2s{color} 
| {color:red} hadoop-mapreduce-client-shuffle in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 2s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}160m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler |
|   | hadoop.mapred.TestShuffleHandler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-7244 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891761/YARN-7244.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8bd46ea15b5d 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-12 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202591#comment-16202591
 ] 

Carlo Curino commented on YARN-6608:


I agree with Allen, the scripts are unlikely to work. I think we need to at the 
very least pull in the {{hadoop-functions.sh}} support file that defines 
hadoop_add_param etc. Looking now...

> Backport all SLS improvements from trunk to branch-2
> 
>
> Key: YARN-6608
> URL: https://issues.apache.org/jira/browse/YARN-6608
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6608-branch-2.v0.patch, 
> YARN-6608-branch-2.v1.patch, YARN-6608-branch-2.v2.patch, 
> YARN-6608-branch-2.v3.patch, YARN-6608-branch-2.v4.patch, 
> YARN-6608-branch-2.v5.patch, YARN-6608-branch-2.v6.patch, 
> YARN-6608-branch-2.v7.patch
>
>
> The SLS has received lots of attention in trunk, but only some of it made it 
> back to branch-2. This patch is a "raw" fork-lift of the trunk development 
> from hadoop-tools/hadoop-sls.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202588#comment-16202588
 ] 

Hadoop QA commented on YARN-7190:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} YARN-5355_branch2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
42s{color} | {color:green} YARN-5355_branch2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m  
9s{color} | {color:green} YARN-5355_branch2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
2s{color} | {color:green} YARN-5355_branch2 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
36s{color} | {color:green} YARN-5355_branch2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
30s{color} | {color:green} YARN-5355_branch2 passed with JDK v1.8.0_144 {color} 
|
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
13s{color} | {color:green} YARN-5355_branch2 passed with JDK v1.7.0_151 {color} 
|
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
37s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
0s{color} | {color:red} The patch generated 6 new + 16 unchanged - 0 fixed = 22 
total (was 16) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
8s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
34s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
13s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
13s{color} | {color:green} hadoop-assemblies in the patch passed with JDK 
v1.7.0_151. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 22s{color} 
| {color:red} hadoop-yarn in the patch failed with JDK v1.7.0_151. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 37s{color} 
| {color:red} hadoop-yarn-server-timelineservice in the patch failed with JDK 
v1.7.0_151. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m  3s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_151. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed with JDK 

[jira] [Commented] (YARN-7159) Normalize unit of resource objects in RM and avoid to do unit conversion in critical path

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202506#comment-16202506
 ] 

Hadoop QA commented on YARN-7159:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 25s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 54s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 13 unchanged - 0 fixed = 17 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  1s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
14s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
0s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 71m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
|  |  Comparison of String objects using == or != in 
org.apache.hadoop.yarn.api.records.ResourceInformation.copy(ResourceInformation,
 ResourceInformation)   At ResourceInformation.java:== or != in 
org.apache.hadoop.yarn.api.records.ResourceInformation.copy(ResourceInformation,
 ResourceInformation)   At ResourceInformation.java:[line 239] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:3d04c00 |
| JIRA Issue | YARN-7159 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891770/YARN-7159.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  
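
The findbugs hit above flags a classic Java pitfall: {{==}} on String compares 
object references, not contents, so two equal-valued unit strings can still 
compare unequal. A minimal illustration of the bug pattern and the usual fix 
(illustrative only, not the actual ResourceInformation.copy() code):

{code}
import java.util.Objects;

public class StringCompareDemo {
  public static void main(String[] args) {
    String a = "Mi";
    String b = new String("Mi"); // same contents, different object

    // The pattern findbugs flags: reference comparison.
    System.out.println(a == b);               // false, despite equal contents

    // The usual fix: compare contents; Objects.equals() is also null-safe.
    System.out.println(a.equals(b));          // true
    System.out.println(Objects.equals(a, b)); // true, and safe if a is null
  }
}
{code}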

[jira] [Commented] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-12 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202485#comment-16202485
 ] 

Allen Wittenauer commented on YARN-6608:


Umm, have you folks actually tried using those shell scripts? 

> Backport all SLS improvements from trunk to branch-2
> 
>
> Key: YARN-6608
> URL: https://issues.apache.org/jira/browse/YARN-6608
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6608-branch-2.v0.patch, 
> YARN-6608-branch-2.v1.patch, YARN-6608-branch-2.v2.patch, 
> YARN-6608-branch-2.v3.patch, YARN-6608-branch-2.v4.patch, 
> YARN-6608-branch-2.v5.patch, YARN-6608-branch-2.v6.patch, 
> YARN-6608-branch-2.v7.patch
>
>
> The SLS has received lots of attention in trunk, but only some of it made it 
> back to branch-2. This patch is a "raw" fork-lift of the trunk development 
> from hadoop-tools/hadoop-sls.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202480#comment-16202480
 ] 

Wangda Tan commented on YARN-6608:
--

[~curino], 

The latest Jenkins result looks good; the shellcheck issue is tracked by 
YARN-7318. Could you help commit the patch?

> Backport all SLS improvements from trunk to branch-2
> 
>
> Key: YARN-6608
> URL: https://issues.apache.org/jira/browse/YARN-6608
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6608-branch-2.v0.patch, 
> YARN-6608-branch-2.v1.patch, YARN-6608-branch-2.v2.patch, 
> YARN-6608-branch-2.v3.patch, YARN-6608-branch-2.v4.patch, 
> YARN-6608-branch-2.v5.patch, YARN-6608-branch-2.v6.patch, 
> YARN-6608-branch-2.v7.patch
>
>
> The SLS has received lots of attention in trunk, but only some of it made it 
> back to branch-2. This patch is a "raw" fork-lift of the trunk development 
> from hadoop-tools/hadoop-sls.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202465#comment-16202465
 ] 

Hadoop QA commented on YARN-6608:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 21 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
22s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
46s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
45s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
35s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m  
7s{color} | {color:green} root generated 0 new + 1443 unchanged - 5 fixed = 
1443 total (was 1448) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 44s{color} | {color:orange} root: The patch generated 33 new + 161 unchanged 
- 235 fixed = 194 total (was 396) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
0s{color} | {color:red} The patch generated 2 new + 2 unchanged - 22 fixed = 4 
total (was 24) {color} |
| {color:orange}-0{color} | {color:orange} shelldocs {color} | {color:orange}  
0m  8s{color} | {color:orange} The patch generated 16 new + 47 unchanged - 0 
fixed = 63 total (was 47) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
24s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 52m  
6s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-rumen in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
2s{color} | {color:green} hadoop-sls in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:eaf5c66 

[jira] [Updated] (YARN-7224) Support GPU isolation for docker container

2017-10-12 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-7224:
-
Attachment: YARN-7224.004.patch

Attached ver.4 patch, fixed warnings / test failures and added more preventive 
tests.

> Support GPU isolation for docker container
> --
>
> Key: YARN-7224
> URL: https://issues.apache.org/jira/browse/YARN-7224
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Attachments: YARN-7224.001.patch, YARN-7224.002-wip.patch, 
> YARN-7224.003.patch, YARN-7224.004.patch
>
>
> YARN-6620 added support for GPU isolation on the NM side, which only supports 
> non-docker containers. We need to add support so that docker containers 
> launched by YARN can utilize GPUs.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7310) TestAMRMProxy#testAMRMProxyE2E fails with FairScheduler

2017-10-12 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202459#comment-16202459
 ] 

Robert Kanter commented on YARN-7310:
-

This is blocked on YARN-7270.  It's causing an overflow, making the max vcores 
in the queue -1.
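
For context, a minimal sketch of the wrap-around involved (assuming, per 
YARN-7270, that the overflow comes from int arithmetic or long-to-int narrowing 
on resource values; this is not the FairScheduler code itself):

{code}
public class OverflowDemo {
  public static void main(String[] args) {
    // An "unbounded" maximum stored as a long...
    long unboundedVcores = Long.MAX_VALUE;

    // ...narrowed to int keeps only the low 32 bits, which are all ones:
    int maxVcores = (int) unboundedVcores;
    System.out.println(maxVcores); // prints -1

    // Plain int arithmetic wraps the same way:
    System.out.println(Integer.MAX_VALUE + 1); // prints -2147483648
  }
}
{code}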

> TestAMRMProxy#testAMRMProxyE2E fails with FairScheduler
> ---
>
> Key: YARN-7310
> URL: https://issues.apache.org/jira/browse/YARN-7310
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-7310.001.patch
>
>
> {{TestAMRMProxy#testAMRMProxyE2E}} fails with FairScheduler:
> {noformat}
> [ERROR] 
> testAMRMProxyE2E(org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy)  Time 
> elapsed: 29.047 s  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy.testAMRMProxyE2E(TestAMRMProxy.java:124)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-7310) TestAMRMProxy#testAMRMProxyE2E fails with FairScheduler

2017-10-12 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202459#comment-16202459
 ] 

Robert Kanter edited comment on YARN-7310 at 10/12/17 6:52 PM:
---

This is blocked on YARN-7270.  It's causing an overflow, making the max vcores 
in the queue -1.


was (Author: rkanter):
This is blocked on YARN-7270.  It's causing an overflow, making the max vcores 
in the queue be -1.

> TestAMRMProxy#testAMRMProxyE2E fails with FairScheduler
> ---
>
> Key: YARN-7310
> URL: https://issues.apache.org/jira/browse/YARN-7310
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: test
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: YARN-7310.001.patch
>
>
> {{TestAMRMProxy#testAMRMProxyE2E}} fails with FairScheduler:
> {noformat}
> [ERROR] 
> testAMRMProxyE2E(org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy)  Time 
> elapsed: 29.047 s  <<< FAILURE!
> java.lang.AssertionError: expected:<2> but was:<1>
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMProxy.testAMRMProxyE2E(TestAMRMProxy.java:124)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7254) UI and metrics changes related to absolute resource configuration

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202438#comment-16202438
 ] 

Hadoop QA commented on YARN-7254:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5881 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
27s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
38s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 5s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 36s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} YARN-5881 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} YARN-5881 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  4s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 7 new + 646 unchanged - 6 fixed = 653 total (was 652) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 16s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m 31s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.ahs.TestRMApplicationHistoryWriter |
|   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue |
|   | hadoop.yarn.server.resourcemanager.TestApplicationMasterService |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerWithMultiResourceTypes
 |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppAttempt |
|   | 

[jira] [Updated] (YARN-7322) Remove annotations from org.apache.hadoop.yarn.server classes

2017-10-12 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-7322:
-
Labels: newbie  (was: )

> Remove annotations from org.apache.hadoop.yarn.server classes
> -
>
> Key: YARN-7322
> URL: https://issues.apache.org/jira/browse/YARN-7322
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>  Labels: newbie
>
> The main hadoop pom.xml has this section in the javadoc plugin:
> {noformat}
> org.apache.hadoop.authentication*,org.apache.hadoop.mapreduce.v2.proto,org.apache.hadoop.yarn.proto,org.apache.hadoop.yarn.server*,org.apache.hadoop.yarn.webapp*
> {noformat}
> Since the package org.apache.hadoop.yarn.server is ignored, the various @ 
> annotations should be removed from those classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6953) Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and setMaximumAllocationForMandatoryResources()

2017-10-12 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202415#comment-16202415
 ] 

Manikandan R commented on YARN-6953:


Can we move ahead if the patch is good?

> Clean up ResourceUtils.setMinimumAllocationForMandatoryResources() and 
> setMaximumAllocationForMandatoryResources()
> --
>
> Key: YARN-6953
> URL: https://issues.apache.org/jira/browse/YARN-6953
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Manikandan R
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-6953-YARN-3926-WIP.patch, 
> YARN-6953-YARN-3926.001.patch, YARN-6953-YARN-3926.002.patch, 
> YARN-6953-YARN-3926.003.patch, YARN-6953-YARN-3926.004.patch, 
> YARN-6953-YARN-3926.005.patch, YARN-6953-YARN-3926.006.patch, 
> YARN-6953.007.patch, YARN-6953.008.patch
>
>
> The {{setMinimumAllocationForMandatoryResources()}} and 
> {{setMaximumAllocationForMandatoryResources()}} methods are quite convoluted. 
>  They'd be much simpler if they just handled CPU and memory manually instead 
> of trying to be clever about doing it in a loop.  There are also issues, such 
> as the log warning always talking about memory or the last element of the 
> inner array being a copy of the first element.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7138) Fix incompatible API change for YarnScheduler involved by YARN-5221

2017-10-12 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202407#comment-16202407
 ] 

Ray Chiang commented on YARN-7138:
--

Thanks.  Filed YARN-7322.

> Fix incompatible API change for YarnScheduler involved by YARN-5221
> ---
>
> Key: YARN-7138
> URL: https://issues.apache.org/jira/browse/YARN-7138
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: Junping Du
>Priority: Critical
>
> The JACC report for 2.8.2 against 2.7.4 indicates that we have 
> incompatible changes in YarnScheduler:
> {noformat}
> hadoop-yarn-server-resourcemanager-2.7.4.jar, YarnScheduler.class
> package org.apache.hadoop.yarn.server.resourcemanager.scheduler
> YarnScheduler.allocate ( ApplicationAttemptId p1, List p2, 
> List p3, List p4, List p5 ) [abstract]  :  
> Allocation 
> {noformat}
> The root cause is YARN-5221. We should change it back or work around this by 
> adding back the original API (marked as deprecated if no longer used).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7321) Backport container-executor changes from YARN-6852 to branch-2

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202405#comment-16202405
 ] 

Hadoop QA commented on YARN-7321:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 14m 21s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 33s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:eaf5c66 |
| JIRA Issue | YARN-7321 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12891759/YARN-7321-branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  |
| uname | Linux f43d200fdaa6 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / b9426c0 |
| Default Java | 1.7.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/17891/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/17891/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/17891/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Backport container-executor changes from YARN-6852 to branch-2
> --
>
> Key: YARN-7321
> URL: https://issues.apache.org/jira/browse/YARN-7321
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-7321-branch-2.001.patch
>
>
> YARN-6852 added support for GPUs to container-executor. It also re-factored 
> the container-executor code to add support for modules. The non-GPU changes 
> need to be backported to branch-2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7322) Remove annotations from org.apache.hadoop.yarn.server classes

2017-10-12 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202406#comment-16202406
 ] 

Ray Chiang commented on YARN-7322:
--

Linked to YARN-7138.  This JIRA came out of a discussion there.

> Remove annotations from org.apache.hadoop.yarn.server classes
> -
>
> Key: YARN-7322
> URL: https://issues.apache.org/jira/browse/YARN-7322
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Minor
>
> The main hadoop pom.xml has this section in the javadoc plugin:
> {noformat}
> org.apache.hadoop.authentication*,org.apache.hadoop.mapreduce.v2.proto,org.apache.hadoop.yarn.proto,org.apache.hadoop.yarn.server*,org.apache.hadoop.yarn.webapp*
> {noformat}
> Since the package org.apache.hadoop.yarn.server is ignored, the various @ 
> annotations should be removed from those classes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7322) Remove annotations from org.apache.hadoop.yarn.server classes

2017-10-12 Thread Ray Chiang (JIRA)
Ray Chiang created YARN-7322:


 Summary: Remove annotations from org.apache.hadoop.yarn.server 
classes
 Key: YARN-7322
 URL: https://issues.apache.org/jira/browse/YARN-7322
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Affects Versions: 3.0.0-beta1
Reporter: Ray Chiang
Assignee: Ray Chiang
Priority: Minor


The main hadoop pom.xml has this section in the javadoc plugin:

{noformat}
org.apache.hadoop.authentication*,org.apache.hadoop.mapreduce.v2.proto,org.apache.hadoop.yarn.proto,org.apache.hadoop.yarn.server*,org.apache.hadoop.yarn.webapp*
{noformat}

Since the package org.apache.hadoop.yarn.server is ignored, the various @ 
annotations should be removed from those classes.
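
Purely as illustration, and assuming the annotations meant are Hadoop's 
audience/stability markers (the JIRA does not enumerate them), the change would 
look like this for a class in one of the excluded packages:

{code}
import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

// Before: a server-side class carries audience/stability markers even
// though org.apache.hadoop.yarn.server.* is excluded from javadoc
// generation by the pom.xml section above.
@InterfaceAudience.Private
@InterfaceStability.Unstable
class AnnotatedServerExample { }

// After: the markers are simply dropped; the javadoc exclusion already
// keeps the class out of the published API docs.
class PlainServerExample { }
{code}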




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7159) Normalize unit of resource objects in RM and avoid to do unit conversion in critical path

2017-10-12 Thread Manikandan R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202394#comment-16202394
 ] 

Manikandan R commented on YARN-7159:


[~wangda] [~sunilg] Thanks for giving me an opportunity to contribute to this 
JIRA.

Had an offline discussion with [~sunilg] on the approaches. Attached a patch 
that converts a resource value whose unit differs from the corresponding 
resource unit on the RM side as part of resource object creation, so that the 
unit conversions happening at several other places can be removed to improve 
performance.
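
A minimal sketch of the idea, with hypothetical names and a simplified unit 
table (the actual patch works against the resource-type classes in 
hadoop-yarn-api): convert an incoming value into the unit the RM uses for that 
resource type once, when the resource object is created, so the scheduler's 
critical path never converts.

{code}
import java.util.HashMap;
import java.util.Map;

public class UnitNormalizationSketch {
  // Simplified table of binary-prefix units; hypothetical, for illustration.
  private static final Map<String, Long> FACTOR = new HashMap<>();
  static {
    FACTOR.put("", 1L);
    FACTOR.put("Ki", 1024L);
    FACTOR.put("Mi", 1024L * 1024);
    FACTOR.put("Gi", 1024L * 1024 * 1024);
  }

  // Called once at resource-object creation time, not in the hot path.
  static long normalize(long value, String fromUnit, String rmUnit) {
    if (fromUnit.equals(rmUnit)) {
      return value; // common case: client already uses the RM's unit
    }
    long from = FACTOR.get(fromUnit);
    long to = FACTOR.get(rmUnit);
    // Scale into the RM's unit; after this, comparisons and arithmetic on
    // the stored value need no further conversion.
    return from >= to ? value * (from / to) : value / (to / from);
  }

  public static void main(String[] args) {
    // A client asks for 2 Gi of a resource the RM tracks in Mi:
    System.out.println(normalize(2, "Gi", "Mi")); // prints 2048
  }
}
{code}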

> Normalize unit of resource objects in RM and avoid to do unit conversion in 
> critical path
> -
>
> Key: YARN-7159
> URL: https://issues.apache.org/jira/browse/YARN-7159
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7159.001.patch
>
>
> Currently resource conversion could happen in critical code path when 
> different unit is specified by client. This could impact performance and 
> throughput of RM a lot. We should do unit normalization when resource passed 
> to RM and avoid expensive unit conversion every time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7159) Normalize unit of resource objects in RM and avoid to do unit conversion in critical path

2017-10-12 Thread Manikandan R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikandan R updated YARN-7159:
---
Attachment: YARN-7159.001.patch

> Normalize unit of resource objects in RM and avoid to do unit conversion in 
> critical path
> -
>
> Key: YARN-7159
> URL: https://issues.apache.org/jira/browse/YARN-7159
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Wangda Tan
>Assignee: Wangda Tan
>Priority: Critical
> Attachments: YARN-7159.001.patch
>
>
> Currently resource conversion could happen in critical code path when 
> different unit is specified by client. This could impact performance and 
> throughput of RM a lot. We should do unit normalization when resource passed 
> to RM and avoid expensive unit conversion every time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7202) Add UT for api-server

2017-10-12 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202372#comment-16202372
 ] 

Jian He commented on YARN-7202:
---

Btw, before committing, I reverted the pom.xml changes in the hadoop-project 
module and removed the dummy yarn-site.xml, as I think they are also not 
required for this patch.


> Add UT for api-server
> -
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Eric Yang
> Fix For: yarn-native-services
>
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch, 
> YARN-7202.yarn-native-services.003.patch, 
> YARN-7202.yarn-native-services.004.patch, 
> YARN-7202.yarn-native-services.005.patch, 
> YARN-7202.yarn-native-services.006.patch, 
> YARN-7202.yarn-native-services.007.patch, 
> YARN-7202.yarn-native-services.008.patch, 
> YARN-7202.yarn-native-services.011.patch, 
> YARN-7202.yarn-native-services.012.patch, 
> YARN-7202.yarn-native-services.013.patch, 
> YARN-7202.yarn-native-services.014.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6690) Consolidate NM overallocation thresholds with ResourceTypes

2017-10-12 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6690:
-
Description: 
YARN-3926 (ResourceTypes) introduces a new class, ResourceInformation, to 
encapsulate all information about a given resource type (e.g., type, value, 
unit). We could add the overallocation thresholds to it as well.

Another thing to look at, as suggested by Wangda in YARN-4511, is whether we 
could just use ResourceThresholds to replace OverallocationInfo.

  was:YARN-3926 (ResourceTypes) introduces a new class  ResourceInformation to 
encapsulate all information about a given resource type (e.g. type, value, 
unit). We could add the overallocation thresholds to it as well.


> Consolidate  NM overallocation thresholds with ResourceTypes
> 
>
> Key: YARN-6690
> URL: https://issues.apache.org/jira/browse/YARN-6690
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> YARN-3926 (ResourceTypes) introduces a new class, ResourceInformation, to 
> encapsulate all information about a given resource type (e.g., type, value, 
> unit). We could add the overallocation thresholds to it as well.
> Another thing to look at, as suggested by Wangda in YARN-4511, is whether we 
> could just use ResourceThresholds to replace OverallocationInfo.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7138) Fix incompatible API change for YarnScheduler involved by YARN-5221

2017-10-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202358#comment-16202358
 ] 

Wangda Tan commented on YARN-7138:
--

[~rchiang], I'm fine with doing that.

> Fix incompatible API change for YarnScheduler involved by YARN-5221
> ---
>
> Key: YARN-7138
> URL: https://issues.apache.org/jira/browse/YARN-7138
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: Junping Du
>Priority: Critical
>
> The JACC report for 2.8.2 against 2.7.4 indicates that we have an 
> incompatible change in YarnScheduler:
> {noformat}
> hadoop-yarn-server-resourcemanager-2.7.4.jar, YarnScheduler.class
> package org.apache.hadoop.yarn.server.resourcemanager.scheduler
> YarnScheduler.allocate ( ApplicationAttemptId p1, List p2, 
> List p3, List p4, List p5 ) [abstract]  :  
> Allocation 
> {noformat}
> The root cause is YARN-5221. We should change it back or work around this by 
> adding back the original API (marked as deprecated if no longer used).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7190) Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user classpath

2017-10-12 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202354#comment-16202354
 ] 

Vrushali C commented on YARN-7190:
--

Hi [~jlowe] 

We were wondering if you have any thoughts/suggestions on this proposed patch. 
Would appreciate your review. 

thanks
Vrushali


> Ensure only NM classpath in 2.x gets TSv2 related hbase jars, not the user 
> classpath
> 
>
> Key: YARN-7190
> URL: https://issues.apache.org/jira/browse/YARN-7190
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineclient, timelinereader, timelineserver
>Reporter: Vrushali C
>Assignee: Varun Saxena
> Attachments: YARN-7190-YARN-5355_branch2.01.patch
>
>
> [~jlowe] had a good observation about the user classpath getting extra jars 
> in Hadoop 2.x brought in with TSv2. If users start picking up Hadoop 2.x's 
> version of HBase jars instead of the ones they shipped with their job, it 
> could be a problem.
> So when TSv2 is used in 2.x, the HBase-related jars should go onto only the 
> NM classpath, not the user classpath.
> Here is a list of some jars
> {code}
> commons-csv-1.0.jar
> commons-el-1.0.jar
> commons-httpclient-3.1.jar
> disruptor-3.3.0.jar
> findbugs-annotations-1.3.9-1.jar
> hbase-annotations-1.2.6.jar
> hbase-client-1.2.6.jar
> hbase-common-1.2.6.jar
> hbase-hadoop2-compat-1.2.6.jar
> hbase-hadoop-compat-1.2.6.jar
> hbase-prefix-tree-1.2.6.jar
> hbase-procedure-1.2.6.jar
> hbase-protocol-1.2.6.jar
> hbase-server-1.2.6.jar
> htrace-core-3.1.0-incubating.jar
> jamon-runtime-2.4.1.jar
> jasper-compiler-5.5.23.jar
> jasper-runtime-5.5.23.jar
> jcodings-1.0.8.jar
> joni-2.1.2.jar
> jsp-2.1-6.1.14.jar
> jsp-api-2.1-6.1.14.jar
> jsr311-api-1.1.1.jar
> metrics-core-2.2.0.jar
> servlet-api-2.5-6.1.14.jar
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7138) Fix incompatible API change for YarnScheduler involved by YARN-5221

2017-10-12 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202352#comment-16202352
 ] 

Ray Chiang commented on YARN-7138:
--

Given that, does it actually make sense to have such annotations on classes 
like YarnScheduler? Would it be better to remove all such annotations?

> Fix incompatible API change for YarnScheduler involved by YARN-5221
> ---
>
> Key: YARN-7138
> URL: https://issues.apache.org/jira/browse/YARN-7138
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: scheduler
>Reporter: Junping Du
>Priority: Critical
>
> The JACC report for 2.8.2 against 2.7.4 indicates that we have an 
> incompatible change in YarnScheduler:
> {noformat}
> hadoop-yarn-server-resourcemanager-2.7.4.jar, YarnScheduler.class
> package org.apache.hadoop.yarn.server.resourcemanager.scheduler
> YarnScheduler.allocate ( ApplicationAttemptId p1, List p2, 
> List p3, List p4, List p5 ) [abstract]  :  
> Allocation 
> {noformat}
> The root cause is YARN-5221. We should change it back or work around this by 
> adding back the original API (marked as deprecated if no longer used).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-12 Thread Botong Huang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202349#comment-16202349
 ] 

Botong Huang commented on YARN-7317:


Great, thanks [~curino]!

> Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7317
> URL: https://issues.apache.org/jira/browse/YARN-7317
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN-7317.v1.patch, YARN-7317.v2.patch, 
> YARN-7317.v3.patch, YARN-7317.v4.patch
>
>
> When LocalityMulticastAMRMProxyPolicy splits up the ANY requests across 
> different subclusters, we do Ceil(N * weight), leading to overallocation 
> overall. It is better to do Floor(N * weight) for each subcluster and then 
> assign the residue randomly according to the weights, so that the total 
> number of containers we ask for across all subclusters sums to N.
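
A toy sketch of the floor-plus-weighted-residue scheme described above (a 
standalone illustration, assuming the weights sum to 1; this is not the actual 
policy code):

{code:java}
import java.util.Random;

public final class WeightedSplit {

  private WeightedSplit() {
  }

  /** Splits n containers across weights so the parts sum to exactly n. */
  public static int[] split(int n, double[] weights, Random rng) {
    int[] parts = new int[weights.length];
    int assigned = 0;
    for (int i = 0; i < weights.length; i++) {
      parts[i] = (int) Math.floor(n * weights[i]); // floor: never over-ask
      assigned += parts[i];
    }
    // Hand out the residue one container at a time, picking a subcluster
    // at random in proportion to its weight.
    while (assigned < n) {
      double pick = rng.nextDouble();
      double cumulative = 0;
      int idx = weights.length - 1; // fallback guards against rounding error
      for (int i = 0; i < weights.length; i++) {
        cumulative += weights[i];
        if (pick <= cumulative) {
          idx = i;
          break;
        }
      }
      parts[idx]++;
      assigned++;
    }
    return parts;
  }
}
{code}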



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7244) ShuffleHandler is not aware of disks that are added

2017-10-12 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-7244:
--
Attachment: YARN-7244.005.patch

Fixing TestShuffleHandler failures. The TestDistributedScheduler failure is 
documented in YARN-7299.

> ShuffleHandler is not aware of disks that are added
> ---
>
> Key: YARN-7244
> URL: https://issues.apache.org/jira/browse/YARN-7244
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-7244.001.patch, YARN-7244.002.patch, 
> YARN-7244.003.patch, YARN-7244.004.patch, YARN-7244.005.patch
>
>
> The ShuffleHandler permanently remembers the list of "good" disks on NM 
> startup. If disks are later added to the node, map tasks will start using 
> them, but the ShuffleHandler will not be aware of them. The end result is 
> that data cannot be shuffled from the node, leading to fetch failures and 
> re-runs of the map tasks.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7320) Duplicate LiteralByteStrings in SystemCredentialsForAppsProto.credentialsForApp_

2017-10-12 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter reassigned YARN-7320:
---

Assignee: Misha Dmitriev

> Duplicate LiteralByteStrings in 
> SystemCredentialsForAppsProto.credentialsForApp_
> 
>
> Key: YARN-7320
> URL: https://issues.apache.org/jira/browse/YARN-7320
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Misha Dmitriev
>Assignee: Misha Dmitriev
>
> Using jxray (www.jxray.com) I've analyzed several heap dumps from YARN 
> Resource Manager running in a big cluster. The tool uncovered several sources 
> of memory waste. One problem, which results in wasting more than a quarter of 
> all memory, is a large number of duplicate {{LiteralByteString}} objects 
> coming from the following reference chain:
> {code}
> 1,011,810K (26.9%): byte[]: 5416705 / 100% dup arrays (22108 unique)
> ↖com.google.protobuf.LiteralByteString.bytes
> ↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$.credentialsForApp_
> ↖{j.u.ArrayList}
> ↖j.u.Collections$UnmodifiableRandomAccessList.c
> ↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$NodeHeartbeatResponseProto.systemCredentialsForApps_
> ↖org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb.NodeHeartbeatResponsePBImpl.proto
> ↖org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl.latestNodeHeartBeatResponse
> ↖org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode.rmNode
> ...
> {code}
> That is, collectively, reference chains like the above hold in memory 5.4 
> million {{LiteralByteString}} objects, but only ~22 thousand of these objects 
> are unique. Deduplicating these objects, e.g. using a Google Object Interner 
> instance, would save ~1GB of memory.
> It looks like the main place where the above {{LiteralByteString}}s are 
> created and attached to the {{SystemCredentialsForAppsProto}} objects is in 
> {{NodeHeartbeatResponsePBImpl.java}}, method 
> {{addSystemCredentialsToProto()}}. Probably adding a call to an interner 
> there will fix the problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7321) Backport container-executor changes from YARN-6852 to branch-2

2017-10-12 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202339#comment-16202339
 ] 

Varun Vasudev commented on YARN-7321:
-

[~wangda] - can you please take a look? Thanks!

> Backport container-executor changes from YARN-6852 to branch-2
> --
>
> Key: YARN-7321
> URL: https://issues.apache.org/jira/browse/YARN-7321
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-7321-branch-2.001.patch
>
>
> YARN-6852 added support for GPUs to container-executor. It also re-factored 
> the container-executor code to add support for modules. The non-GPU changes 
> need to be backported to branch-2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7321) Backport container-executor changes from YARN-6852 to branch-2

2017-10-12 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated YARN-7321:

Attachment: YARN-7321-branch-2.001.patch

Attached a branch-2 version of the patch. It just drops all the GPU and cgroups 
module files from YARN-6852.

> Backport container-executor changes from YARN-6852 to branch-2
> --
>
> Key: YARN-7321
> URL: https://issues.apache.org/jira/browse/YARN-7321
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Attachments: YARN-7321-branch-2.001.patch
>
>
> YARN-6852 added support for GPUs to container-executor. It also re-factored 
> the container-executor code to add support for modules. The non-GPU changes 
> need to be backported to branch-2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-12 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-7317:
---
Fix Version/s: 3.0.0
   2.9.0

> Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7317
> URL: https://issues.apache.org/jira/browse/YARN-7317
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Fix For: 2.9.0, 3.0.0
>
> Attachments: YARN-7317.v1.patch, YARN-7317.v2.patch, 
> YARN-7317.v3.patch, YARN-7317.v4.patch
>
>
> When LocalityMulticastAMRMProxyPolicy splits up the ANY requests across 
> different subclusters, we do Ceil(N * weight), leading to overallocation 
> overall. It is better to do Floor(N * weight) for each subcluster and then 
> assign the residue randomly according to the weights, so that the total 
> number of containers we ask for across all subclusters sums to N.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4511) Common scheduler changes supporting scheduler-specific implementations

2017-10-12 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202335#comment-16202335
 ] 

Haibo Chen edited comment on YARN-4511 at 10/12/17 5:45 PM:


Thanks for the background on YARN-5139, [~leftnoteasy]. 

My understanding of SchedulerNode, from the scheduler's perspective, is that it 
keeps track of the set of allocated containers on a given node and how much of 
the node's resources are in use or left for allocation. The SchedulerNode is 
notified whenever a container is allocated, launched or released on that node 
so it can update its bookkeeping. The major change to SchedulerNode in this 
patch is to account for Opportunistic containers differently than we do for 
Guaranteed containers. Specifically, we don't include the resources of 
Opportunistic containers in SchedulerNode.allocatedResource. A quick look at 
the Capacity Scheduler shows me that SchedulerNode is notified of a container 
allocation only when the allocation proposal is accepted, so I believe this 
patch won't change how YARN-5139 behaves.

allocationInThisHeartbeat, however, does need to be changed, given that 
scheduling is not driven by node heartbeats in YARN-5139.
The purpose of this variable is to track how much resource the allocated but 
not-yet-launched containers are going to use (based on their resource requests, 
since in the worst case they can use everything they requested once they run 
on the node). To illustrate the workflow of this patch and what 
allocationInThisHeartbeat is for, let's say that on a node with 10GB of memory 
there are already 10 containers running (having requested 10GB in aggregate), 
the resource utilization reported in the node heartbeat is 5GB, and 2 
containers have just been allocated but not yet launched, together requesting 
2GB. In the case of oversubscription, the scheduler will try to allocate 
Opportunistic containers based on node resource utilization.
5GB is what the running containers are using and another 2GB is probably soon 
to be utilized, so the scheduler assumes the effective utilization is 7GB, 
leaving only 3GB, and then decides whether to continue allocating OPPORTUNISTIC 
containers given the node's overallocation threshold. The 3GB is calculated by 
allowedResourceForOverAllocation() using allocationInThisHeartbeat.
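
A sketch of the arithmetic in that example (a toy illustration; the variable 
names are not the patch's API):

{code:java}
public class OverAllocationHeadroom {
  public static void main(String[] args) {
    long nodeCapacityMb = 10 * 1024;        // node memory capacity
    long reportedUtilizationMb = 5 * 1024;  // from the node heartbeat
    long pendingLaunchMb = 2 * 1024;        // allocated but not yet launched
    double overAllocationThreshold = 0.9;   // per-node config, assumed value

    // Assume the pending containers will soon use what they requested.
    long assumedUtilizationMb = reportedUtilizationMb + pendingLaunchMb; // 7GB
    long headroomMb = nodeCapacityMb - assumedUtilizationMb;             // 3GB

    // Allocate OPPORTUNISTIC containers only while the assumed utilization
    // stays under the node's overallocation threshold.
    boolean mayOverAllocate =
        assumedUtilizationMb < nodeCapacityMb * overAllocationThreshold;
    System.out.println(headroomMb + " MB headroom, overallocate: "
        + mayOverAllocate);
  }
}
{code}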

I am thinking of decoupling allocationInThisHeartbeat from the node heartbeat 
by renaming it to resourcesOfContainersPendingLaunch and updating it in the 
containerStarted() method instead of resetting it on every node heartbeat. Let 
me know what you think.

bq. I'm not sure why we need a separate launchedOnNode flag because we already 
have a launchedContainer map.
This is indeed confusing. The launchedContainer map should probably be renamed 
to allocatedContainer, and launchedOnNode tracks whether the allocated 
container has actually been launched on the node. This piece of code already 
exists. I can do the renaming if you are fine with it.

bq.  otherwise it gonna be very hard to modify defined protos in a future 
release.
For much the same reason you raise here, I am more inclined to keep 
OverAllocationInfo for now. I am not sure how, if we just have 
ResourceThresholds, we could keep backward compatibility in a clean way if we 
ever want to add more NM overallocation configs. I agree we should do the 
consolidation with resource profiles before the release; we can revisit this 
topic then.




[jira] [Updated] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-12 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-7317:
---
Affects Version/s: (was: 3.0.0)
   (was: 2.9.0)

> Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7317
> URL: https://issues.apache.org/jira/browse/YARN-7317
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7317.v1.patch, YARN-7317.v2.patch, 
> YARN-7317.v3.patch, YARN-7317.v4.patch
>
>
> When LocalityMulticastAMRMProxyPolicy splits up the ANY requests across 
> different subclusters, we do Ceil(N * weight), leading to overallocation 
> overall. It is better to do Floor(N * weight) for each subcluster and then 
> assign the residue randomly according to the weights, so that the total 
> number of containers we ask for across all subclusters sums to N.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7317) Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy

2017-10-12 Thread Carlo Curino (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carlo Curino updated YARN-7317:
---
Affects Version/s: 3.0.0
   2.9.0

> Fix overallocation resulted from ceiling in LocalityMulticastAMRMProxyPolicy
> 
>
> Key: YARN-7317
> URL: https://issues.apache.org/jira/browse/YARN-7317
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Attachments: YARN-7317.v1.patch, YARN-7317.v2.patch, 
> YARN-7317.v3.patch, YARN-7317.v4.patch
>
>
> When LocalityMulticastAMRMProxyPolicy splits up the ANY requests across 
> different subclusters, we do Ceil(N * weight), leading to overallocation 
> overall. It is better to do Floor(N * weight) for each subcluster and then 
> assign the residue randomly according to the weights, so that the total 
> number of containers we ask for across all subclusters sums to N.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4511) Common scheduler changes supporting scheduler-specific implementations

2017-10-12 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202335#comment-16202335
 ] 

Haibo Chen commented on YARN-4511:
--

Thanks for the background on YARN-5139, [~leftnoteasy]. 

My understanding of SchedulerNode, from the scheduler's perspective, is that it 
keeps track of the set of allocated containers on a given node and how much of 
the node's resources are in use or left for allocation. The SchedulerNode is 
notified whenever a container is allocated, launched or released on that node 
so it can update its bookkeeping. The major change to SchedulerNode in this 
patch is to account for Opportunistic containers differently than we do for 
Guaranteed containers. Specifically, we don't include the resources of 
Opportunistic containers in SchedulerNode.allocatedResource. A quick look at 
the Capacity Scheduler shows me that SchedulerNode is notified of a container 
allocation only when the allocation proposal is accepted, so I believe this 
patch won't change how YARN-5139 behaves.

allocationInThisHeartbeat, however, does need to be changed, given that 
scheduling is not driven by node heartbeats in YARN-5139.
The purpose of this variable is to track how much resource the allocated but 
not-yet-launched containers are going to use (based on their resource requests, 
since in the worst case they can use everything they requested once they run 
on the node). To illustrate the workflow of this patch and what 
allocationInThisHeartbeat is for, let's say that on a node with 10GB of memory 
there are already 10 containers running (having requested 10GB in aggregate), 
the resource utilization reported in the node heartbeat is 5GB, and 2 
containers have just been allocated but not yet launched, together requesting 
2GB. In the case of oversubscription, the scheduler will try to allocate 
Opportunistic containers based on node resource utilization.
5GB is what the running containers are using and another 2GB is probably soon 
to be utilized, so the scheduler assumes the effective utilization is 7GB, 
leaving only 3GB, and then decides whether to continue allocating OPPORTUNISTIC 
containers given the node's overallocation threshold. The 3GB is calculated by 
allowedResourceForOverAllocation() using allocationInThisHeartbeat.

I am thinking of decoupling allocationInThisHeartbeat from the node heartbeat 
by renaming it to resourcesOfContainersPendingLaunch and updating it in the 
containerStarted() method instead of resetting it on every node heartbeat. Let 
me know what you think.

bq. I'm not sure why we need a separate launchedOnNode flag because we already 
have a launchedContainer map.
This is indeed confusing. The launchedContainer map should probably be renamed 
to allocatedContainer, and launchedOnNode tracks whether the allocated 
container has actually been launched on the node. This piece of code already 
exists. I can do the renaming if you are fine with it.

bq.  otherwise it gonna be very hard to modify defined protos in a future 
release.
For much the same reason you raise here, I am more inclined to keep 
OverAllocationInfo for now. I am not sure how, if we just have 
ResourceThresholds, we could keep backward compatibility in a clean way if we 
ever want to add more NM overallocation configs. I agree we should do the 
consolidation with resource profiles before the release; we can revisit this 
topic then.


> Common scheduler changes supporting scheduler-specific implementations
> --
>
> Key: YARN-4511
> URL: https://issues.apache.org/jira/browse/YARN-4511
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Haibo Chen
> Attachments: YARN-4511-YARN-1011.00.patch, 
> YARN-4511-YARN-1011.01.patch, YARN-4511-YARN-1011.02.patch, 
> YARN-4511-YARN-1011.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-7321) Backport container-executor changes from YARN-6852 to branch-2

2017-10-12 Thread Varun Vasudev (JIRA)
Varun Vasudev created YARN-7321:
---

 Summary: Backport container-executor changes from YARN-6852 to 
branch-2
 Key: YARN-7321
 URL: https://issues.apache.org/jira/browse/YARN-7321
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Varun Vasudev
Assignee: Varun Vasudev


YARN-6852 added support for GPUs to container-executor. It also re-factored the 
container-executor code to add support for modules. The non-GPU changes need to 
be backported to branch-2.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7202) Add UT for api-server

2017-10-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202329#comment-16202329
 ] 

Hadoop QA commented on YARN-7202:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} yarn-native-services Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
39s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
2s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
46s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 21s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} yarn-native-services passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} yarn-native-services passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
50s{color} | {color:green} root: The patch generated 0 new + 9 unchanged - 4 
fixed = 9 total (was 13) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 39s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
14s{color} | {color:green} hadoop-yarn-services-core in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-services-api in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Created] (YARN-7320) Duplicate LiteralByteStrings in SystemCredentialsForAppsProto.credentialsForApp_

2017-10-12 Thread Misha Dmitriev (JIRA)
Misha Dmitriev created YARN-7320:


 Summary: Duplicate LiteralByteStrings in 
SystemCredentialsForAppsProto.credentialsForApp_
 Key: YARN-7320
 URL: https://issues.apache.org/jira/browse/YARN-7320
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Misha Dmitriev


Using jxray (www.jxray.com) I've analyzed several heap dumps from YARN Resource 
Manager running in a big cluster. The tool uncovered several sources of memory 
waste. One problem, which results in wasting more than a quarter of all memory, 
is a large number of duplicate {{LiteralByteString}} objects coming from the 
following reference chain:

{code}
1,011,810K (26.9%): byte[]: 5416705 / 100% dup arrays (22108 unique)
↖com.google.protobuf.LiteralByteString.bytes
↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$.credentialsForApp_
↖{j.u.ArrayList}
↖j.u.Collections$UnmodifiableRandomAccessList.c
↖org.apache.hadoop.yarn.proto.YarnServerCommonServiceProtos$NodeHeartbeatResponseProto.systemCredentialsForApps_
↖org.apache.hadoop.yarn.server.api.protocolrecords.impl.pb.NodeHeartbeatResponsePBImpl.proto
↖org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeImpl.latestNodeHeartBeatResponse
↖org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FSSchedulerNode.rmNode
...
{code}

That is, collectively, reference chains like the above hold in memory 5.4 
million {{LiteralByteString}} objects, but only ~22 thousand of these objects 
are unique. Deduplicating these objects, e.g. using a Google Object Interner 
instance, would save ~1GB of memory.

It looks like the main place where the above {{LiteralByteString}}s are created 
and attached to the {{SystemCredentialsForAppsProto}} objects is in 
{{NodeHeartbeatResponsePBImpl.java}}, method {{addSystemCredentialsToProto()}}. 
Probably adding a call to an interner there will fix the problem.
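
A minimal sketch of the suggested deduplication using a Guava weak interner 
(names are illustrative; the actual fix would wire this into 
{{NodeHeartbeatResponsePBImpl}}):

{code:java}
import com.google.common.collect.Interner;
import com.google.common.collect.Interners;
import com.google.protobuf.ByteString;

public final class CredentialsInterner {

  // Weak interner: canonical instances are reclaimed once unreferenced,
  // so interning does not itself leak memory.
  private static final Interner<ByteString> INTERNER =
      Interners.newWeakInterner();

  private CredentialsInterner() {
  }

  /**
   * Returns a canonical ByteString instance for equal byte content;
   * ByteString compares equal by content, so duplicates collapse to one.
   */
  public static ByteString intern(ByteString credentialsForApp) {
    return INTERNER.intern(credentialsForApp);
  }
}
{code}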



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-10-12 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev resolved YARN-6852.
-
Resolution: Fixed

> [YARN-6223] Native code changes to support isolate GPU devices by using 
> CGroups
> ---
>
> Key: YARN-6852
> URL: https://issues.apache.org/jira/browse/YARN-6852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6852.001.patch, YARN-6852.002.patch, 
> YARN-6852.003.patch, YARN-6852.004.patch, YARN-6852.005.patch, 
> YARN-6852.006.patch, YARN-6852.007.patch, YARN-6852.008.patch, 
> YARN-6852.009.patch
>
>
> This JIRA plans to add support for:
> 1) Isolation in CGroups (native side).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-10-12 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202313#comment-16202313
 ] 

Varun Vasudev commented on YARN-6852:
-

Sounds good, I'll create a new ticket for that then and close this.

> [YARN-6223] Native code changes to support isolate GPU devices by using 
> CGroups
> ---
>
> Key: YARN-6852
> URL: https://issues.apache.org/jira/browse/YARN-6852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6852.001.patch, YARN-6852.002.patch, 
> YARN-6852.003.patch, YARN-6852.004.patch, YARN-6852.005.patch, 
> YARN-6852.006.patch, YARN-6852.007.patch, YARN-6852.008.patch, 
> YARN-6852.009.patch
>
>
> This JIRA plans to add support for:
> 1) Isolation in CGroups (native side).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-12 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202279#comment-16202279
 ] 

Carlo Curino commented on YARN-6608:


[~wangda] 

# {{TestDebugOverflowUserLimit}} should be removed from the patch, as it is a 
WIP test I was working on to capture the {{Resources.divideAndCeil}} overflow 
for large clusters. I will add it as part of a separate patch.

ACK on everything else.

> Backport all SLS improvements from trunk to branch-2
> 
>
> Key: YARN-6608
> URL: https://issues.apache.org/jira/browse/YARN-6608
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6608-branch-2.v0.patch, 
> YARN-6608-branch-2.v1.patch, YARN-6608-branch-2.v2.patch, 
> YARN-6608-branch-2.v3.patch, YARN-6608-branch-2.v4.patch, 
> YARN-6608-branch-2.v5.patch, YARN-6608-branch-2.v6.patch, 
> YARN-6608-branch-2.v7.patch
>
>
> The SLS has received lots of attention in trunk, but only some of it made it 
> back to branch-2. This patch is a "raw" fork-lift of the trunk development 
> from hadoop-tools/hadoop-sls.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-10-12 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202236#comment-16202236
 ] 

Wangda Tan commented on YARN-6852:
--

[~vvasudev], I would prefer to pull in only the common changes, since the rest 
of the GPU logic cannot land in branch-2 (we don't have resource profiles, 
etc.). A few changes need to be made; please let me know if you need any help 
from my side.

> [YARN-6223] Native code changes to support isolate GPU devices by using 
> CGroups
> ---
>
> Key: YARN-6852
> URL: https://issues.apache.org/jira/browse/YARN-6852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6852.001.patch, YARN-6852.002.patch, 
> YARN-6852.003.patch, YARN-6852.004.patch, YARN-6852.005.patch, 
> YARN-6852.006.patch, YARN-6852.007.patch, YARN-6852.008.patch, 
> YARN-6852.009.patch
>
>
> This JIRA plans to add support for:
> 1) Isolation in CGroups (native side).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-6852) [YARN-6223] Native code changes to support isolate GPU devices by using CGroups

2017-10-12 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev reopened YARN-6852:
-

[~leftnoteasy] - Any objections to backporting this to branch-2? It has changes 
to container-executor that I would like to pull to branch-2. Thanks!

> [YARN-6223] Native code changes to support isolate GPU devices by using 
> CGroups
> ---
>
> Key: YARN-6852
> URL: https://issues.apache.org/jira/browse/YARN-6852
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Wangda Tan
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6852.001.patch, YARN-6852.002.patch, 
> YARN-6852.003.patch, YARN-6852.004.patch, YARN-6852.005.patch, 
> YARN-6852.006.patch, YARN-6852.007.patch, YARN-6852.008.patch, 
> YARN-6852.009.patch
>
>
> This JIRA plans to add support for:
> 1) Isolation in CGroups (native side).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7254) UI and metrics changes related to absolute resource configuration

2017-10-12 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated YARN-7254:
--
Attachment: YARN-7254.YARN-5881.006.patch

Updating the v6 patch after fixing javadoc and findbugs issues.

Some test case failures are known; I will wait for one more Jenkins run to 
verify them.

> UI and metrics changes related to absolute resource configuration
> -
>
> Key: YARN-7254
> URL: https://issues.apache.org/jira/browse/YARN-7254
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler
>Reporter: Sunil G
>Assignee: Sunil G
> Attachments: YARN-7254.001.patch, YARN-7254.002.patch, 
> YARN-7254.YARN-5881.002.patch, YARN-7254.YARN-5881.003.patch, 
> YARN-7254.YARN-5881.004.patch, YARN-7254.YARN-5881.005.patch, 
> YARN-7254.YARN-5881.006.patch
>
>
> Impact on UI and metrics related to absolute resource configuration on CS



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6668) Use cgroup to get container resource utilization

2017-10-12 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202192#comment-16202192
 ] 

Miklos Szegedi commented on YARN-6668:
--

Indeed this is a duplicate. How should we coordinate?

> Use cgroup to get container resource utilization
> 
>
> Key: YARN-6668
> URL: https://issues.apache.org/jira/browse/YARN-6668
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Miklos Szegedi
> Attachments: YARN-6668.000.patch, YARN-6668.001.patch, 
> YARN-6668.002.patch, YARN-6668.003.patch, YARN-6668.004.patch, 
> YARN-6668.005.patch, YARN-6668.006.patch, YARN-6668.007.patch, 
> YARN-6668.008.patch, YARN-6668.009.patch
>
>
> Container Monitor relies on the proc file system to get container resource 
> utilization, which is not as efficient as reading cgroup accounting. When 
> cgroups are enabled, the NM should read cgroup stats instead.
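
A rough sketch of what reading cgroup accounting could look like, assuming a 
cgroup v1 hierarchy mounted under /sys/fs/cgroup (the mount point and the 
per-container path are assumptions, not the patch):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public final class CgroupStats {

  private CgroupStats() {
  }

  /** Memory in use by the container's cgroup, in bytes. */
  public static long memoryUsageBytes(String containerCgroup)
      throws IOException {
    return readLong("/sys/fs/cgroup/memory/" + containerCgroup
        + "/memory.usage_in_bytes");
  }

  /** Cumulative CPU time consumed by the container's cgroup, in ns. */
  public static long cpuUsageNanos(String containerCgroup)
      throws IOException {
    return readLong("/sys/fs/cgroup/cpu,cpuacct/" + containerCgroup
        + "/cpuacct.usage");
  }

  private static long readLong(String path) throws IOException {
    // Each accounting file holds a single numeric line.
    return Long.parseLong(
        Files.readAllLines(Paths.get(path)).get(0).trim());
  }
}
{code}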



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6608) Backport all SLS improvements from trunk to branch-2

2017-10-12 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6608:
-
Attachment: YARN-6608-branch-2.v7.patch

> Backport all SLS improvements from trunk to branch-2
> 
>
> Key: YARN-6608
> URL: https://issues.apache.org/jira/browse/YARN-6608
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Carlo Curino
>Assignee: Carlo Curino
> Attachments: YARN-6608-branch-2.v0.patch, 
> YARN-6608-branch-2.v1.patch, YARN-6608-branch-2.v2.patch, 
> YARN-6608-branch-2.v3.patch, YARN-6608-branch-2.v4.patch, 
> YARN-6608-branch-2.v5.patch, YARN-6608-branch-2.v6.patch, 
> YARN-6608-branch-2.v7.patch
>
>
> The SLS has received lots of attention in trunk, but only some of it made it 
> back to branch-2. This patch is a "raw" fork-lift of the trunk development 
> from hadoop-tools/hadoop-sls.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6033) Add support for sections in container-executor configuration file

2017-10-12 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-6033:
-
Fix Version/s: 2.9.0

> Add support for sections in container-executor configuration file
> -
>
> Key: YARN-6033
> URL: https://issues.apache.org/jira/browse/YARN-6033
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6033-YARN-5673.001.patch, 
> YARN-6033-YARN-5673.002.patch, YARN-6033-branch-2.014.patch, 
> YARN-6033.003.patch, YARN-6033.004.patch, YARN-6033.005.patch, 
> YARN-6033.006.patch, YARN-6033.007.patch, YARN-6033.008.patch, 
> YARN-6033.009.patch, YARN-6033.010.patch, YARN-6033.011.patch, 
> YARN-6033.012.patch, YARN-6033.013.patch, YARN-6033.014.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4163) Audit getQueueInfo and getApplications calls

2017-10-12 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16202180#comment-16202180
 ] 

Jason Lowe commented on YARN-4163:
--

Thanks for updating the patch!  The builder pattern solves the ordering problem 
nicely.

The more I look at it, the more I'm torn on the addition of the new args 
functionality.  Looking at the existing keys or even the new QUEUE key that is 
being added as part of this patch, arguably most of them can be considered as 
arguments to the particular OPERATION.  That makes the addition of the new 
{{includeApplications}}, {{includeChildQueues}}, and {{recursive}} arguments 
inconsistent with the others.  They're not upper case, so they stick out.  Why 
is QUEUE upper case but other arguments in the client request are not?  I think 
logged keys should be consistent or it is going to seem arbitrarily different 
to end users.  Making these new args keys also helps cement them a bit more 
from a compatibility perspective.  The args builder pattern could be updated to 
take a Key enum rather than an arbitrary string.  Thoughts?  I could also see 
going with a new ARGS key that lists the arguments, although that makes it a 
bit less cemented with respect to log format and backwards-compatibility.
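
A rough sketch of an args builder keyed by an enum, along the lines suggested 
above (names are illustrative, not the patch's actual audit-logger API):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public final class AuditArgs {

  /** Fixed set of argument keys, so logged names stay consistent. */
  public enum Key {
    QUEUE, INCLUDE_APPLICATIONS, INCLUDE_CHILD_QUEUES, RECURSIVE
  }

  // LinkedHashMap preserves insertion order in the logged output.
  private final Map<Key, String> args = new LinkedHashMap<>();

  public AuditArgs add(Key key, String value) {
    args.put(key, value);
    return this;
  }

  @Override
  public String toString() {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<Key, String> e : args.entrySet()) {
      if (sb.length() > 0) {
        sb.append('\t');
      }
      sb.append(e.getKey()).append('=').append(e.getValue());
    }
    return sb.toString();
  }
}
{code}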

The "Use List to preserve order." comment is no longer necessary

It would be good to clean up the checkstyle issues not related to arg counts, 
although if we do go with the builder pattern for generating logs then it could 
make sense to address that as well.

> Audit getQueueInfo and getApplications calls
> 
>
> Key: YARN-4163
> URL: https://issues.apache.org/jira/browse/YARN-4163
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: YARN-4163.004.patch, YARN-4163.005.patch, 
> YARN-4163.006.branch-2.8.patch, YARN-4163.006.patch, YARN-4163.2.patch, 
> YARN-4163.2.patch, YARN-4163.3.patch, YARN-4163.patch
>
>
> getQueueInfo and getApplications sometimes seem to cause spikes in load, but 
> we are not able to confirm this because they are not audit logged. This patch 
> proposes to add them to the audit log.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-7202) Add UT for api-server

2017-10-12 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-7202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated YARN-7202:

Attachment: YARN-7202.yarn-native-services.014.patch

- Update setApiServer to setServiceClient for correctness.
- Remove dependencies that are not in use.

> Add UT for api-server
> -
>
> Key: YARN-7202
> URL: https://issues.apache.org/jira/browse/YARN-7202
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Jian He
>Assignee: Eric Yang
> Attachments: YARN-7202.yarn-native-services.001.patch, 
> YARN-7202.yarn-native-services.002.patch, 
> YARN-7202.yarn-native-services.003.patch, 
> YARN-7202.yarn-native-services.004.patch, 
> YARN-7202.yarn-native-services.005.patch, 
> YARN-7202.yarn-native-services.006.patch, 
> YARN-7202.yarn-native-services.007.patch, 
> YARN-7202.yarn-native-services.008.patch, 
> YARN-7202.yarn-native-services.011.patch, 
> YARN-7202.yarn-native-services.012.patch, 
> YARN-7202.yarn-native-services.013.patch, 
> YARN-7202.yarn-native-services.014.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


