[jira] [Commented] (YARN-8047) RMWebApp make external class pluggable

2018-06-21 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519998#comment-16519998
 ] 

genericqa commented on YARN-8047:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
9s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 14s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  4s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 52s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m  3s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}152m 40s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
|   | hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8047 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12925307/YARN-8047-001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8371519660e3 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git 

[jira] [Commented] (YARN-8412) Move ResourceRequest.clone logic everywhere into a proper API

2018-06-21 Thread Botong Huang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519935#comment-16519935
 ] 

Botong Huang commented on YARN-8412:


Great, thx [~elgoiri]!

> Move ResourceRequest.clone logic everywhere into a proper API
> -
>
> Key: YARN-8412
> URL: https://issues.apache.org/jira/browse/YARN-8412
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Fix For: 2.10.0, 3.2.0
>
> Attachments: YARN-8412-branch-2.v2.patch, YARN-8412.v1.patch, 
> YARN-8412.v2.patch
>
>
> ResourceRequest.clone code is replicated in lots of places, with some copies 
> missing one or two fields because new fields were added over time. This JIRA 
> attempts to move the logic into a proper API so that everyone can use a 
> single implementation. 
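
For illustration, a minimal sketch of what such a centralized clone API can 
look like, assuming the builder-style ResourceRequest.newBuilder() available in 
recent Hadoop releases (the exact field list is an assumption; the committed 
patch is authoritative):

{code:java}
// Sketch only: copy every field in one place so that callers cannot miss
// newly added fields. The builder methods are assumed from recent Hadoop.
public static ResourceRequest clone(ResourceRequest rr) {
  return ResourceRequest.newBuilder()
      .priority(rr.getPriority())
      .resourceName(rr.getResourceName())
      .capability(rr.getCapability())
      .numContainers(rr.getNumContainers())
      .relaxLocality(rr.getRelaxLocality())
      .nodeLabelExpression(rr.getNodeLabelExpression())
      .executionTypeRequest(rr.getExecutionTypeRequest())
      .allocationRequestId(rr.getAllocationRequestId())
      .build();
}
{code}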



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8412) Move ResourceRequest.clone logic everywhere into a proper API

2018-06-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519907#comment-16519907
 ] 

Hudson commented on YARN-8412:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14461 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14461/])
YARN-8412. Move ResourceRequest.clone logic everywhere into a proper (inigoiri: 
rev 99948565cb5d5706241d7a8fc591e1617c499e03)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/AMRMClientImpl.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/TestAppManager.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/AMRMClientRelayer.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/LocalityAppPlacementAllocator.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/amrmproxy/LocalityMulticastAMRMProxyPolicy.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMClientOnRMRestart.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/scheduler/ResourceRequestSet.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/Application.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/ResourceRequest.java


> Move ResourceRequest.clone logic everywhere into a proper API
> -
>
> Key: YARN-8412
> URL: https://issues.apache.org/jira/browse/YARN-8412
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Fix For: 2.10.0, 3.2.0
>
> Attachments: YARN-8412-branch-2.v2.patch, YARN-8412.v1.patch, 
> YARN-8412.v2.patch
>
>
> ResourceRequest.clone code is replicated in lots of places, with some copies 
> missing one or two fields because new fields were added over time. This JIRA 
> attempts to move the logic into a proper API so that everyone can use a 
> single implementation. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8412) Move ResourceRequest.clone logic everywhere into a proper API

2018-06-21 Thread Íñigo Goiri (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519904#comment-16519904
 ] 

Íñigo Goiri commented on YARN-8412:
---

Thanks [~botong] for the patch.
Committed to trunk and branch-2.

> Move ResourceRequest.clone logic everywhere into a proper API
> -
>
> Key: YARN-8412
> URL: https://issues.apache.org/jira/browse/YARN-8412
> Project: Hadoop YARN
>  Issue Type: Task
>Reporter: Botong Huang
>Assignee: Botong Huang
>Priority: Minor
> Fix For: 2.10.0, 3.2.0
>
> Attachments: YARN-8412-branch-2.v2.patch, YARN-8412.v1.patch, 
> YARN-8412.v2.patch
>
>
> ResourceRequest.clone code is replicated in lots of places, with some copies 
> missing one or two fields because new fields were added over time. This JIRA 
> attempts to move the logic into a proper API so that everyone can use a 
> single implementation. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8438) TestContainer.testKillOnNew flaky on trunk

2018-06-21 Thread Miklos Szegedi (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519861#comment-16519861
 ] 

Miklos Szegedi commented on YARN-8438:
--

Thank you for the patch [~snemeth].

This will still return 1000, 1001, 1000 for three requests within the same 
millisecond (1000). Also, synchronized can be applied to the method header. 
Instead of reflection you could use inheritance.
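
For illustration, a minimal sketch (class and method names hypothetical) of a 
clock that avoids the 1000, 1001, 1000 pattern described above:

{code:java}
// Hypothetical sketch: synchronized is applied to the method header, and the
// returned values are strictly increasing even for several calls within the
// same millisecond.
public class StrictlyIncreasingClock {
  private long lastReturned = Long.MIN_VALUE;

  public synchronized long getTime() {
    long now = System.currentTimeMillis();
    if (now <= lastReturned) {
      now = lastReturned + 1; // bump past the last value handed out
    }
    lastReturned = now;
    return now;
  }
}
{code}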

> TestContainer.testKillOnNew flaky on trunk
> --
>
> Key: YARN-8438
> URL: https://issues.apache.org/jira/browse/YARN-8438
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8438.001.patch, YARN-8438.002.patch
>
>
> When running this test several times (e.g. 30 runs), it fails ~5-10 times.
> Stacktrace: 
> {code:java}
> java.lang.AssertionError at org.junit.Assert.fail(Assert.java:86) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> org.junit.Assert.assertTrue(Assert.java:52) at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.container.TestContainer.testKillOnNew(TestContainer.java:594)
> {code}
> TestContainer:594 is currently the following code in trunk:
> {code:java}
> Assert.assertTrue(containerMetrics.finishTime.value() >
>     containerMetrics.startTime.value());
> {code}
> So sometimes the finish time is not greater than the start time.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1013) CS should watch resource utilization of containers and allocate speculative containers if appropriate

2018-06-21 Thread Haibo Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519860#comment-16519860
 ] 

Haibo Chen commented on YARN-1013:
--

[~cheersyang] FYI, there is YARN-6794, which does container promotion in the 
Fair Scheduler. We have not filed a counterpart JIRA for the Capacity Scheduler 
yet.

> CS should watch resource utilization of containers and allocate speculative 
> containers if appropriate
> -
>
> Key: YARN-1013
> URL: https://issues.apache.org/jira/browse/YARN-1013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun C Murthy
>Assignee: Weiwei Yang
>Priority: Major
>
> CS should watch resource utilization of containers (provided by NM in 
> heartbeat) and allocate speculative containers (at lower OS priority) if 
> appropriate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-1013) CS should watch resource utilization of containers and allocate speculative containers if appropriate

2018-06-21 Thread Íñigo Goiri (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-1013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned YARN-1013:
-

Assignee: Weiwei Yang  (was: Arun C Murthy)

> CS should watch resource utilization of containers and allocate speculative 
> containers if appropriate
> -
>
> Key: YARN-1013
> URL: https://issues.apache.org/jira/browse/YARN-1013
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun C Murthy
>Assignee: Weiwei Yang
>Priority: Major
>
> CS should watch resource utilization of containers (provided by NM in 
> heartbeat) and allocate speculative containers (at lower OS priority) if 
> appropriate.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6586) YARN to facilitate HTTPS in AM web server

2018-06-21 Thread Robert Kanter (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519751#comment-16519751
 ] 

Robert Kanter commented on YARN-6586:
-

Created subtasks:
# YARN-8448: to do the certificate generation and distribute the 
keystore/truststore (steps 1-3 in the doc)
# MAPREDUCE-4669: to make the MR AM use YARN-8448 (step 4 in the doc)
# YARN-8449: to handle RM HA for YARN-8448 (i.e. RMStateStore work)

> YARN to facilitate HTTPS in AM web server
> -
>
> Key: YARN-6586
> URL: https://issues.apache.org/jira/browse/YARN-6586
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Affects Versions: 3.0.0-alpha2
>Reporter: Haibo Chen
>Assignee: Robert Kanter
>Priority: Major
> Attachments: Design Document v1.pdf, YARN-6586.poc.patch
>
>
> The MR AM today does not support HTTPS in its web server, so the traffic 
> between RMWebProxy and the MR AM is in clear text.
> MR cannot easily achieve this, mainly because MR AMs are untrusted by YARN. A 
> potential solution purely within MR, similar to what Spark has implemented, 
> is to allow users, when they enable HTTPS in an MR job, to provide their own 
> keystore file, which is then uploaded to the distributed cache and localized 
> for the MR AM container. The configuration users need to do is complex.
> More importantly, in typical deployments web browsers go through RMWebProxy 
> to indirectly access the MR AM web server. In order to support MR AM HTTPS, 
> RMWebProxy therefore needs to trust the user-provided keystore, which is 
> problematic.
> Alternatively, we can add an endpoint in the NM web server that acts as a 
> proxy between the AM web server and RMWebProxy. RMWebProxy, when configured 
> to do so, will send requests over HTTPS to the NM on which the AM is running, 
> and the NM can then communicate with the local AM web server over HTTP. This 
> adds one hop between RMWebProxy and the AM, but both MR and Spark can use 
> such a solution.
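
As a toy illustration of that extra hop (not the design in the attached 
document), an NM-side handler could terminate HTTPS and relay GET requests to 
the AM's local HTTP server:

{code:java}
// Toy sketch (Java 9+, names hypothetical): forward a request received by
// the NM, where TLS is terminated, to the AM web server on plain local HTTP.
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class AmProxyHandler implements HttpHandler {
  private final int amHttpPort; // local AM web server port, assumed known

  public AmProxyHandler(int amHttpPort) {
    this.amHttpPort = amHttpPort;
  }

  @Override
  public void handle(HttpExchange exchange) throws IOException {
    URL target = new URL("http", "localhost", amHttpPort,
        exchange.getRequestURI().toString());
    HttpURLConnection conn = (HttpURLConnection) target.openConnection();
    conn.setRequestMethod(exchange.getRequestMethod());
    int status = conn.getResponseCode();
    byte[] body = conn.getInputStream().readAllBytes();
    exchange.sendResponseHeaders(status, body.length);
    exchange.getResponseBody().write(body);
    exchange.close();
  }
}
{code}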



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8449) RM HA for AM HTTPS Support

2018-06-21 Thread Robert Kanter (JIRA)
Robert Kanter created YARN-8449:
---

 Summary: RM HA for AM HTTPS Support
 Key: YARN-8449
 URL: https://issues.apache.org/jira/browse/YARN-8449
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Robert Kanter
Assignee: Robert Kanter






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8448) AM HTTPS Support

2018-06-21 Thread Robert Kanter (JIRA)
Robert Kanter created YARN-8448:
---

 Summary: AM HTTPS Support
 Key: YARN-8448
 URL: https://issues.apache.org/jira/browse/YARN-8448
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Robert Kanter
Assignee: Robert Kanter






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-8447) Support configurable IPC mode in docker runtime

2018-06-21 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-8447:


 Summary: Support configurable IPC mode in docker runtime
 Key: YARN-8447
 URL: https://issues.apache.org/jira/browse/YARN-8447
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Billie Rinaldi
Assignee: Billie Rinaldi


For features that require shared memory segments (such as short-circuit reads), 
we should support configuring the IPC namespace in the docker runtime.
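
For reference, Docker already exposes the IPC namespace through its --ipc flag, 
which the runtime would presumably pass through in some validated form, e.g.:

{code}
docker run --ipc=host ...
{code}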



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6966) NodeManager metrics may return wrong negative values when NM restart

2018-06-21 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519599#comment-16519599
 ] 

genericqa commented on YARN-6966:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
24s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 10s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
56s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-6966 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928643/YARN-6966.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 096162f9277b 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 9f15483 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21074/testReport/ |
| Max. process+thread count | 407 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21074/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NodeManager metrics may return wrong negative values when NM 

[jira] [Commented] (YARN-4606) CapacityScheduler: applications could get starved because computation of #activeUsers considers pending apps

2018-06-21 Thread Eric Payne (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-4606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519562#comment-16519562
 ] 

Eric Payne commented on YARN-4606:
--

[~maniraj...@gmail.com], I am fine with using this JIRA to fix the 
{{CapacityScheduler}} and then using follow-on JIRAs to fix the other 
schedulers. However, I'm not comfortable putting {{CapacityScheduler}}-specific 
code in {{AppSchedulingInfo}}. I'm hoping that most of this code can be pushed 
down into the {{ActiveUsersManager}} (for {{FairScheduler}}) and 
{{UsersManager}} (for {{CapacityScheduler}}) code.

I am investigating this now and should know if this is possible by early next 
week.

> CapacityScheduler: applications could get starved because computation of 
> #activeUsers considers pending apps 
> -
>
> Key: YARN-4606
> URL: https://issues.apache.org/jira/browse/YARN-4606
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, capacityscheduler
>Affects Versions: 2.8.0, 2.7.1
>Reporter: Karam Singh
>Assignee: Manikandan R
>Priority: Critical
> Attachments: YARN-4606.001.patch, YARN-4606.002.patch, 
> YARN-4606.003.patch, YARN-4606.004.patch, YARN-4606.1.poc.patch, 
> YARN-4606.POC.2.patch, YARN-4606.POC.patch
>
>
> Currently, if all applications belonging to the same user in a LeafQueue are 
> pending (caused by max-am-percent, etc.), ActiveUsersManager still considers 
> the user an active user. This could lead to starvation of active 
> applications, for example:
> - App1 (belongs to user1) and app2 (belongs to user2) are active; app3 
> (belongs to user3) and app4 (belongs to user4) are pending
> - ActiveUsersManager returns #active-users=4
> - However, only two users (user1/user2) are able to allocate new resources, 
> so the computed user-limit-resource could be lower than expected.
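
As a standalone illustration (not ActiveUsersManager code) of the distinction 
the description draws, a user should count as active only if at least one of 
their applications is not pending:

{code:java}
import java.util.List;
import java.util.Map;

public class ActiveUserCount {
  // Count a user as active only if at least one of their apps is active.
  static int countActiveUsers(Map<String, List<Boolean>> appActiveByUser) {
    int activeUsers = 0;
    for (List<Boolean> apps : appActiveByUser.values()) {
      if (apps.contains(Boolean.TRUE)) {
        activeUsers++;
      }
    }
    return activeUsers;
  }

  public static void main(String[] args) {
    Map<String, List<Boolean>> apps = Map.of(
        "user1", List.of(true),   // app1 active
        "user2", List.of(true),   // app2 active
        "user3", List.of(false),  // app3 pending
        "user4", List.of(false)); // app4 pending
    System.out.println(countActiveUsers(apps)); // prints 2, not 4
  }
}
{code}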



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8444) NodeResourceMonitor crashes on bad swapFree value

2018-06-21 Thread Jim Brennan (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519549#comment-16519549
 ] 

Jim Brennan commented on YARN-8444:
---

[~eepayne], can you please review?

 

> NodeResourceMonitor crashes on bad swapFree value
> -
>
> Key: YARN-8444
> URL: https://issues.apache.org/jira/browse/YARN-8444
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.8.3, 3.0.2
>Reporter: Jim Brennan
>Assignee: Jim Brennan
>Priority: Major
> Attachments: YARN-8444.001.patch
>
>
> Saw this on a node that was running out of memory. We can't have 
> NodeResourceMonitor exiting. The system was above 99% memory used at the 
> time, so this is not a common occurrence, but we should fix it since this 
> monitor is critical to the health of the node.
>  
> {noformat}
> 2018-06-04 14:28:08,539 [Container Monitor] DEBUG 
> ContainersMonitorImpl.audit: Memory usage of ProcessTree 110564 for 
> container-id container_e24_1526662705797_129647_01_004791: 2.1 GB of 3.5 GB 
> physical memory used; 5.0 GB of 7.3 GB virtual memory used
> 2018-06-04 14:28:10,622 [Node Resource Monitor] ERROR 
> yarn.YarnUncaughtExceptionHandler: Thread Thread[Node Resource 
> Monitor,5,main] threw an Exception.
> java.lang.NumberFormatException: For input string: "18446744073709551596"
>  at 
> java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>  at java.lang.Long.parseLong(Long.java:592)
>  at java.lang.Long.parseLong(Long.java:631)
>  at 
> org.apache.hadoop.util.SysInfoLinux.readProcMemInfoFile(SysInfoLinux.java:257)
>  at 
> org.apache.hadoop.util.SysInfoLinux.getAvailablePhysicalMemorySize(SysInfoLinux.java:591)
>  at 
> org.apache.hadoop.util.SysInfoLinux.getAvailableVirtualMemorySize(SysInfoLinux.java:601)
>  at 
> org.apache.hadoop.yarn.util.ResourceCalculatorPlugin.getAvailableVirtualMemorySize(ResourceCalculatorPlugin.java:74)
>  at 
> org.apache.hadoop.yarn.server.nodemanager.NodeResourceMonitorImpl$MonitoringThread.run(NodeResourceMonitorImpl.java:193)
> 2018-06-04 14:28:30,747 
> [org.apache.hadoop.util.JvmPauseMonitor$Monitor@226eba67] INFO 
> util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of 
> approximately 9330ms
> {noformat}
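
The failing value, 18446744073709551596, is 2^64 - 20, i.e. an unsigned 64-bit 
counter that underflowed; Long.parseLong() rejects anything above 
Long.MAX_VALUE. One defensive sketch (not necessarily the committed fix) is to 
accept unsigned values and clamp the implausible reading:

{code:java}
// Sketch only: tolerate unsigned 64-bit /proc/meminfo values instead of
// letting the NumberFormatException kill the monitor thread.
static long parseMemInfoValue(String field) {
  try {
    return Long.parseLong(field);
  } catch (NumberFormatException e) {
    // Fits in 64 bits but not in a signed long: an underflowed counter.
    // Rethrows if the field is not a valid unsigned 64-bit number at all.
    Long.parseUnsignedLong(field);
    return 0L; // treat the implausible reading as "no memory free"
  }
}
{code}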



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6966) NodeManager metrics may return wrong negative values when NM restart

2018-06-21 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-6966:
-
Attachment: YARN-6966.004.patch

> NodeManager metrics may return wrong negative values when NM restart
> 
>
> Key: YARN-6966
> URL: https://issues.apache.org/jira/browse/YARN-6966
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
> Attachments: YARN-6966.001.patch, YARN-6966.002.patch, 
> YARN-6966.003.patch, YARN-6966.004.patch
>
>
> Just as YARN-6212. However, I think it is not a duplicate of YARN-3933.
> The primary cause of the negative values is that metrics do not recover 
> properly when the NM restarts.
> AllocatedContainers, ContainersLaunched, AllocatedGB, AvailableGB, 
> AllocatedVCores and AvailableVCores in metrics also need to be recovered 
> when the NM restarts.
> This should be done in ContainerManagerImpl#recoverContainer.
> The scenario can be reproduced by the following steps:
> # Make sure 
> YarnConfiguration.NM_RECOVERY_ENABLED=true,YarnConfiguration.NM_RECOVERY_SUPERVISED=true
>  in NM
> # Submit an application and keep running
> # Restart NM
> # Stop the application
> # Now you get the negative values
> {code}
> /jmx?qry=Hadoop:service=NodeManager,name=NodeManagerMetrics
> {code}
> {code}
> {
> name: "Hadoop:service=NodeManager,name=NodeManagerMetrics",
> modelerType: "NodeManagerMetrics",
> tag.Context: "yarn",
> tag.Hostname: "hadoop.com",
> ContainersLaunched: 0,
> ContainersCompleted: 0,
> ContainersFailed: 2,
> ContainersKilled: 0,
> ContainersIniting: 0,
> ContainersRunning: 0,
> AllocatedGB: 0,
> AllocatedContainers: -2,
> AvailableGB: 160,
> AllocatedVCores: -11,
> AvailableVCores: 3611,
> ContainerLaunchDurationNumOps: 2,
> ContainerLaunchDurationAvgTime: 6,
> BadLocalDirs: 0,
> BadLogDirs: 0,
> GoodLocalDirsDiskUtilizationPerc: 2,
> GoodLogDirsDiskUtilizationPerc: 2
> }
> {code}
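
As a standalone model (not NM code) of how the counters go negative: the 
restart resets the metrics, recovery skips the increment, and the eventual 
release still applies the decrement:

{code:java}
public class NegativeMetricsModel {
  private int allocatedContainers;

  void launch()  { allocatedContainers++; }
  void recover() { /* bug: metrics not restored here */ }
  void release() { allocatedContainers--; }

  public static void main(String[] args) {
    NegativeMetricsModel metrics = new NegativeMetricsModel();
    metrics.launch();                     // before restart: counter is 1
    metrics = new NegativeMetricsModel(); // NM restart resets metrics to 0
    metrics.recover();                    // container recovered, no increment
    metrics.release();                    // application stops
    System.out.println(metrics.allocatedContainers); // prints -1
  }
}
{code}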



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6966) NodeManager metrics may return wrong negative values when NM restart

2018-06-21 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-6966:
-
Attachment: (was: YARN-6966.004.patch)

> NodeManager metrics may return wrong negative values when NM restart
> 
>
> Key: YARN-6966
> URL: https://issues.apache.org/jira/browse/YARN-6966
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
> Attachments: YARN-6966.001.patch, YARN-6966.002.patch, 
> YARN-6966.003.patch
>
>
> Just as YARN-6212. However, I think it is not a duplicate of YARN-3933.
> The primary cause of the negative values is that metrics do not recover 
> properly when the NM restarts.
> AllocatedContainers, ContainersLaunched, AllocatedGB, AvailableGB, 
> AllocatedVCores and AvailableVCores in metrics also need to be recovered 
> when the NM restarts.
> This should be done in ContainerManagerImpl#recoverContainer.
> The scenario can be reproduced by the following steps:
> # Make sure 
> YarnConfiguration.NM_RECOVERY_ENABLED=true,YarnConfiguration.NM_RECOVERY_SUPERVISED=true
>  in NM
> # Submit an application and keep running
> # Restart NM
> # Stop the application
> # Now you get the negative values
> {code}
> /jmx?qry=Hadoop:service=NodeManager,name=NodeManagerMetrics
> {code}
> {code}
> {
> name: "Hadoop:service=NodeManager,name=NodeManagerMetrics",
> modelerType: "NodeManagerMetrics",
> tag.Context: "yarn",
> tag.Hostname: "hadoop.com",
> ContainersLaunched: 0,
> ContainersCompleted: 0,
> ContainersFailed: 2,
> ContainersKilled: 0,
> ContainersIniting: 0,
> ContainersRunning: 0,
> AllocatedGB: 0,
> AllocatedContainers: -2,
> AvailableGB: 160,
> AllocatedVCores: -11,
> AvailableVCores: 3611,
> ContainerLaunchDurationNumOps: 2,
> ContainerLaunchDurationAvgTime: 6,
> BadLocalDirs: 0,
> BadLogDirs: 0,
> GoodLocalDirsDiskUtilizationPerc: 2,
> GoodLogDirsDiskUtilizationPerc: 2
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6966) NodeManager metrics may return wrong negative values when NM restart

2018-06-21 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-6966:
-
Attachment: YARN-6966.004.patch

> NodeManager metrics may return wrong negative values when NM restart
> 
>
> Key: YARN-6966
> URL: https://issues.apache.org/jira/browse/YARN-6966
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
> Attachments: YARN-6966.001.patch, YARN-6966.002.patch, 
> YARN-6966.003.patch, YARN-6966.004.patch
>
>
> Just as YARN-6212. However, I think it is not a duplicate of YARN-3933.
> The primary cause of the negative values is that metrics do not recover 
> properly when the NM restarts.
> AllocatedContainers, ContainersLaunched, AllocatedGB, AvailableGB, 
> AllocatedVCores and AvailableVCores in metrics also need to be recovered 
> when the NM restarts.
> This should be done in ContainerManagerImpl#recoverContainer.
> The scenario can be reproduced by the following steps:
> # Make sure 
> YarnConfiguration.NM_RECOVERY_ENABLED=true,YarnConfiguration.NM_RECOVERY_SUPERVISED=true
>  in NM
> # Submit an application and keep running
> # Restart NM
> # Stop the application
> # Now you get the negative values
> {code}
> /jmx?qry=Hadoop:service=NodeManager,name=NodeManagerMetrics
> {code}
> {code}
> {
> name: "Hadoop:service=NodeManager,name=NodeManagerMetrics",
> modelerType: "NodeManagerMetrics",
> tag.Context: "yarn",
> tag.Hostname: "hadoop.com",
> ContainersLaunched: 0,
> ContainersCompleted: 0,
> ContainersFailed: 2,
> ContainersKilled: 0,
> ContainersIniting: 0,
> ContainersRunning: 0,
> AllocatedGB: 0,
> AllocatedContainers: -2,
> AvailableGB: 160,
> AllocatedVCores: -11,
> AvailableVCores: 3611,
> ContainerLaunchDurationNumOps: 2,
> ContainerLaunchDurationAvgTime: 6,
> BadLocalDirs: 0,
> BadLogDirs: 0,
> GoodLocalDirsDiskUtilizationPerc: 2,
> GoodLogDirsDiskUtilizationPerc: 2
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6966) NodeManager metrics may return wrong negative values when NM restart

2018-06-21 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519536#comment-16519536
 ] 

Szilard Nemeth commented on YARN-6966:
--

Hi [~fly_in_gis]!

See my updated patch.

I wanted to make the failing test case 
(TestContainerManagerRecovery#testNodeManagerMetricsRecovery) pass first.

A quick workaround was to send a container update event with some resource and 
check whether the metrics match after NM recovery.

Then I realized it should work out of the box: when a container is created, we 
need to save its resource requests to the NM state store.

This piece is missing from the current implementation, so I extended 
ContainerManagerImpl.startContainerInternal() with it and adjusted the tests 
accordingly.

[~wilfreds]: Could you please have a look and check whether this makes sense?

> NodeManager metrics may return wrong negative values when NM restart
> 
>
> Key: YARN-6966
> URL: https://issues.apache.org/jira/browse/YARN-6966
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
> Attachments: YARN-6966.001.patch, YARN-6966.002.patch, 
> YARN-6966.003.patch
>
>
> Just as YARN-6212. However, I think it is not a duplicate of YARN-3933.
> The primary cause of the negative values is that metrics do not recover 
> properly when the NM restarts.
> AllocatedContainers, ContainersLaunched, AllocatedGB, AvailableGB, 
> AllocatedVCores and AvailableVCores in metrics also need to be recovered 
> when the NM restarts.
> This should be done in ContainerManagerImpl#recoverContainer.
> The scenario can be reproduced by the following steps:
> # Make sure 
> YarnConfiguration.NM_RECOVERY_ENABLED=true,YarnConfiguration.NM_RECOVERY_SUPERVISED=true
>  in NM
> # Submit an application and keep running
> # Restart NM
> # Stop the application
> # Now you get the negative values
> {code}
> /jmx?qry=Hadoop:service=NodeManager,name=NodeManagerMetrics
> {code}
> {code}
> {
> name: "Hadoop:service=NodeManager,name=NodeManagerMetrics",
> modelerType: "NodeManagerMetrics",
> tag.Context: "yarn",
> tag.Hostname: "hadoop.com",
> ContainersLaunched: 0,
> ContainersCompleted: 0,
> ContainersFailed: 2,
> ContainersKilled: 0,
> ContainersIniting: 0,
> ContainersRunning: 0,
> AllocatedGB: 0,
> AllocatedContainers: -2,
> AvailableGB: 160,
> AllocatedVCores: -11,
> AvailableVCores: 3611,
> ContainerLaunchDurationNumOps: 2,
> ContainerLaunchDurationAvgTime: 6,
> BadLocalDirs: 0,
> BadLogDirs: 0,
> GoodLocalDirsDiskUtilizationPerc: 2,
> GoodLogDirsDiskUtilizationPerc: 2
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8445) YARN native service doesn't allow service name equals to component name

2018-06-21 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519527#comment-16519527
 ] 

Hudson commented on YARN-8445:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #14459 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/14459/])
YARN-8445.  Improved error message for duplicated service and component (eyang: 
rev 9f15483c5d7c94251f4c84e0155449188f202779)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/ServiceApiUtil.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/exceptions/RestApiErrorMessages.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/test/java/org/apache/hadoop/yarn/service/TestServiceApiUtil.java


> YARN native service doesn't allow service name equals to component name
> ---
>
> Key: YARN-8445
> URL: https://issues.apache.org/jira/browse/YARN-8445
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.1.1
>
> Attachments: YARN-8445.001.patch
>
>
> Currently, YARN service doesn't allow the service name to equal a component 
> name: doing so causes the AM launch to fail with a message like:
> {code} 
> org.apache.hadoop.metrics2.MetricsException: Metrics source tf-zeppelin 
> already exists!
>  at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
>  at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
>  at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
>  at 
> org.apache.hadoop.yarn.service.ServiceMetrics.register(ServiceMetrics.java:75)
>  at 
> org.apache.hadoop.yarn.service.component.Component.<init>(Component.java:193)
>  at 
> org.apache.hadoop.yarn.service.ServiceScheduler.createAllComponents(ServiceScheduler.java:552)
>  at 
> org.apache.hadoop.yarn.service.ServiceScheduler.buildInstance(ServiceScheduler.java:251)
>  at 
> org.apache.hadoop.yarn.service.ServiceScheduler.serviceInit(ServiceScheduler.java:283)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>  at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>  at 
> org.apache.hadoop.yarn.service.ServiceMaster.serviceInit(ServiceMaster.java:142)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>  at org.apache.hadoop.yarn.service.ServiceMaster.main(ServiceMaster.java:338)
> 2018-06-18 06:50:39,473 [main] INFO service.ServiceScheduler - Stopping 
> service scheduler
> {code}
> It's better to add this check in the validation phase instead of failing the AM.
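
A minimal sketch of that kind of up-front check, assuming the Service/Component 
records of the YARN service API (not the committed ServiceApiUtil change):

{code:java}
// Hypothetical validation-phase check: reject the spec before the AM is
// ever launched, instead of failing on metrics registration at runtime.
static void validateComponentNames(Service service) throws IOException {
  for (Component comp : service.getComponents()) {
    if (service.getName().equals(comp.getName())) {
      throw new IOException("Component name " + comp.getName()
          + " must not be the same as the service name");
    }
  }
}
{code}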



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7300) DiskValidator is not used in LocalDirAllocator

2018-06-21 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-7300:


Assignee: Szilard Nemeth

> DiskValidator is not used in LocalDirAllocator
> --
>
> Key: YARN-7300
> URL: https://issues.apache.org/jira/browse/YARN-7300
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Haibo Chen
>Assignee: Szilard Nemeth
>Priority: Major
>
> HADOOP-13254 introduced a pluggable disk validator to replace 
> DiskChecker.checkDir(). However, LocalDirAllocator still references the old 
> DiskChecker.checkDir(). It'd be nice to
> use the plugin uniformly so that user configurations take effect in all 
> places.
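
A short sketch of the substitution the JIRA asks for, assuming the HADOOP-13254 
API (DiskValidatorFactory/DiskValidator); the configuration key shown is an 
assumption:

{code:java}
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.DiskValidator;
import org.apache.hadoop.util.DiskValidatorFactory;

public class DirCheck {
  // Replace the hard-coded DiskChecker.checkDir(dir) call with the
  // pluggable validator; "basic" mimics the old DiskChecker behaviour.
  static void checkDir(Configuration conf, File dir) throws Exception {
    DiskValidator validator = DiskValidatorFactory.getInstance(
        conf.get("yarn.nodemanager.disk-validator", "basic"));
    validator.checkStatus(dir); // throws DiskErrorException on a bad dir
  }
}
{code}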



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7514) TestAggregatedLogDeletionService.testRefreshLogRetentionSettings is flaky

2018-06-21 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-7514:


Assignee: Szilard Nemeth

> TestAggregatedLogDeletionService.testRefreshLogRetentionSettings is flaky
> -
>
> Key: YARN-7514
> URL: https://issues.apache.org/jira/browse/YARN-7514
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: log-aggregation
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Szilard Nemeth
>Priority: Major
>
> TestAggregatedLogDeletionService.testRefreshLogRetentionSettings fails 
> occasionally with 
> *Error Message*
> Argument(s) are different! Wanted:
> fileSystem.delete(
> mockfs://foo/tmp/logs/me/logs/application_1510201418065_0002,
> true
> );
> -> at 
> org.apache.hadoop.yarn.logaggregation.TestAggregatedLogDeletionService.testRefreshLogRetentionSettings(TestAggregatedLogDeletionService.java:300)
> Actual invocation has different arguments:
> fileSystem.delete(
> mockfs://foo/tmp/logs/me/logs/application_1510201418024_0001,
> true
> );
> -> at org.apache.hadoop.fs.FilterFileSystem.delete(FilterFileSystem.java:252)
> *Stacktrace*
> org.mockito.exceptions.verification.junit.ArgumentsAreDifferent: 
> Argument(s) are different! Wanted:
> fileSystem.delete(
> mockfs://foo/tmp/logs/me/logs/application_1510201418065_0002,
> true
> );
> -> at 
> org.apache.hadoop.yarn.logaggregation.TestAggregatedLogDeletionService.testRefreshLogRetentionSettings(TestAggregatedLogDeletionService.java:300)
> Actual invocation has different arguments:
> fileSystem.delete(
> mockfs://foo/tmp/logs/me/logs/application_1510201418024_0001,
> true
> );
> -> at org.apache.hadoop.fs.FilterFileSystem.delete(FilterFileSystem.java:252)
>   at 
> org.apache.hadoop.yarn.logaggregation.TestAggregatedLogDeletionService.testRefreshLogRetentionSettings(TestAggregatedLogDeletionService.java:300)
> *Standard Output*
> 2017-11-08 20:23:38,138 INFO  [Timer-0] 
> logaggregation.AggregatedLogDeletionService 
> (AggregatedLogDeletionService.java:run(79)) - aggregated log deletion started.
> 2017-11-08 20:23:38,146 INFO  [Timer-0] 
> logaggregation.AggregatedLogDeletionService 
> (AggregatedLogDeletionService.java:deleteOldLogDirsFrom(106)) - Deleting 
> aggregated logs in 
> mockfs://foo/tmp/logs/me/logs/application_1510201418024_0001
> 2017-11-08 20:23:38,146 INFO  [Timer-0] 
> logaggregation.AggregatedLogDeletionService 
> (AggregatedLogDeletionService.java:run(92)) - aggregated log deletion 
> finished.
> 2017-11-08 20:23:38,167 INFO  [Timer-1] 
> logaggregation.AggregatedLogDeletionService 
> (AggregatedLogDeletionService.java:run(79)) - aggregated log deletion started.
> 2017-11-08 20:23:38,172 INFO  [Timer-1] 
> logaggregation.AggregatedLogDeletionService 
> (AggregatedLogDeletionService.java:deleteOldLogDirsFrom(106)) - Deleting 
> aggregated logs in 
> mockfs://foo/tmp/logs/me/logs/application_1510201418024_0001
> 2017-11-08 20:23:38,173 INFO  [Timer-1] 
> logaggregation.AggregatedLogDeletionService 
> (AggregatedLogDeletionService.java:deleteOldLogDirsFrom(106)) - Deleting 
> aggregated logs in 
> mockfs://foo/tmp/logs/me/logs/application_1510201418065_0002
> 2017-11-08 20:23:38,181 INFO  [Timer-1] 
> logaggregation.AggregatedLogDeletionService 
> (AggregatedLogDeletionService.java:run(92)) - aggregated log deletion 
> finished.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7748) TestContainerResizing.testIncreaseContainerUnreservedWhenApplicationCompleted failed

2018-06-21 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-7748:


Assignee: Szilard Nemeth

> TestContainerResizing.testIncreaseContainerUnreservedWhenApplicationCompleted 
> failed
> 
>
> Key: YARN-7748
> URL: https://issues.apache.org/jira/browse/YARN-7748
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacityscheduler
>Affects Versions: 3.0.0
>Reporter: Haibo Chen
>Assignee: Szilard Nemeth
>Priority: Major
>
> TestContainerResizing.testIncreaseContainerUnreservedWhenApplicationCompleted
> Failing for the past 1 build (since Failed #19244)
> Took 0.4 sec.
> *Error Message*
> expected null, but 
> was:
> *Stacktrace*
> {code}
> java.lang.AssertionError: expected null, but 
> was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotNull(Assert.java:664)
>   at org.junit.Assert.assertNull(Assert.java:646)
>   at org.junit.Assert.assertNull(Assert.java:656)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing.testIncreaseContainerUnreservedWhenApplicationCompleted(TestContainerResizing.java:826)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:369)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:275)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:239)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:160)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:373)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:334)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:119)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:407)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-7456) TestAMRMClient.testAMRMClientWithContainerResourceChange[0] failed

2018-06-21 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth reassigned YARN-7456:


Assignee: Szilard Nemeth

> TestAMRMClient.testAMRMClientWithContainerResourceChange[0] failed
> --
>
> Key: YARN-7456
> URL: https://issues.apache.org/jira/browse/YARN-7456
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-beta1
>Reporter: Haibo Chen
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: std.out
>
>
> *Error Message*
> expected:<1> but was:<0>
> *Stacktrace*
> java.lang.AssertionError: expected:<1> but was:<0>
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.doContainerResourceChange(TestAMRMClient.java:1150)
>   at 
> org.apache.hadoop.yarn.client.api.impl.TestAMRMClient.testAMRMClientWithContainerResourceChange(TestAMRMClient.java:1025)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-8445) YARN native service doesn't allow service name equals to component name

2018-06-21 Thread Eric Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519484#comment-16519484
 ] 

Eric Yang commented on YARN-8445:
-

+1 looks good to me.  

> YARN native service doesn't allow service name equals to component name
> ---
>
> Key: YARN-8445
> URL: https://issues.apache.org/jira/browse/YARN-8445
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Chandni Singh
>Assignee: Chandni Singh
>Priority: Major
> Fix For: 3.1.1
>
> Attachments: YARN-8445.001.patch
>
>
> Currently, YARN service doesn't allow the service name to equal a component 
> name: doing so causes the AM launch to fail with a message like:
> {code} 
> org.apache.hadoop.metrics2.MetricsException: Metrics source tf-zeppelin 
> already exists!
>  at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.newSourceName(DefaultMetricsSystem.java:152)
>  at 
> org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.sourceName(DefaultMetricsSystem.java:125)
>  at 
> org.apache.hadoop.metrics2.impl.MetricsSystemImpl.register(MetricsSystemImpl.java:229)
>  at 
> org.apache.hadoop.yarn.service.ServiceMetrics.register(ServiceMetrics.java:75)
>  at 
> org.apache.hadoop.yarn.service.component.Component.<init>(Component.java:193)
>  at 
> org.apache.hadoop.yarn.service.ServiceScheduler.createAllComponents(ServiceScheduler.java:552)
>  at 
> org.apache.hadoop.yarn.service.ServiceScheduler.buildInstance(ServiceScheduler.java:251)
>  at 
> org.apache.hadoop.yarn.service.ServiceScheduler.serviceInit(ServiceScheduler.java:283)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>  at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
>  at 
> org.apache.hadoop.yarn.service.ServiceMaster.serviceInit(ServiceMaster.java:142)
>  at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>  at org.apache.hadoop.yarn.service.ServiceMaster.main(ServiceMaster.java:338)
> 2018-06-18 06:50:39,473 [main] INFO service.ServiceScheduler - Stopping 
> service scheduler
> {code}
> It's better to add this check in the validation phase instead of failing the AM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6966) NodeManager metrics may return wrong negative values when NM restart

2018-06-21 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519408#comment-16519408
 ] 

Szilard Nemeth commented on YARN-6966:
--

Hi [~fly_in_gis]!

Do you mind if I take this over, as I want to get this merged soon?

Moreover, the test case fails, so I have a fix for that, and I see some cases 
where I can extend your patch.

Thanks!

> NodeManager metrics may return wrong negative values when NM restart
> 
>
> Key: YARN-6966
> URL: https://issues.apache.org/jira/browse/YARN-6966
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
> Attachments: YARN-6966.001.patch, YARN-6966.002.patch, 
> YARN-6966.003.patch
>
>
> Just as YARN-6212. However, I think it is not a duplicate of YARN-3933.
> The primary cause of the negative values is that metrics do not recover 
> properly when the NM restarts.
> AllocatedContainers, ContainersLaunched, AllocatedGB, AvailableGB, 
> AllocatedVCores and AvailableVCores in metrics also need to be recovered 
> when the NM restarts.
> This should be done in ContainerManagerImpl#recoverContainer.
> The scenario can be reproduced by the following steps:
> # Make sure 
> YarnConfiguration.NM_RECOVERY_ENABLED=true,YarnConfiguration.NM_RECOVERY_SUPERVISED=true
>  in NM
> # Submit an application and keep running
> # Restart NM
> # Stop the application
> # Now you get the negative values
> {code}
> /jmx?qry=Hadoop:service=NodeManager,name=NodeManagerMetrics
> {code}
> {code}
> {
> name: "Hadoop:service=NodeManager,name=NodeManagerMetrics",
> modelerType: "NodeManagerMetrics",
> tag.Context: "yarn",
> tag.Hostname: "hadoop.com",
> ContainersLaunched: 0,
> ContainersCompleted: 0,
> ContainersFailed: 2,
> ContainersKilled: 0,
> ContainersIniting: 0,
> ContainersRunning: 0,
> AllocatedGB: 0,
> AllocatedContainers: -2,
> AvailableGB: 160,
> AllocatedVCores: -11,
> AvailableVCores: 3611,
> ContainerLaunchDurationNumOps: 2,
> ContainerLaunchDurationAvgTime: 6,
> BadLocalDirs: 0,
> BadLogDirs: 0,
> GoodLocalDirsDiskUtilizationPerc: 2,
> GoodLogDirsDiskUtilizationPerc: 2
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-7451) Add missing tests to verify the presence of custom resources of RM apps and scheduler webservice endpoints

2018-06-21 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16519352#comment-16519352
 ] 

genericqa commented on YARN-7451:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 20 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
 8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  7s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 11s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}131m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-7451 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928611/YARN-7451.015.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ef145816049b 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 43541a1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/21072/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21072/testReport/ |
| Max. process+thread count | 875 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

[jira] [Commented] (YARN-8103) Add CLI interface to query node attributes

2018-06-21 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519335#comment-16519335
 ] 

genericqa commented on YARN-8103:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-3409 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
49s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m  
2s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
55s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
25s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 41s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
10s{color} | {color:green} YARN-3409 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
54s{color} | {color:green} YARN-3409 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 27m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
3m 25s{color} | {color:orange} root: The patch generated 7 new + 513 unchanged 
- 29 fixed = 520 total (was 542) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  7m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
24s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
15s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 34s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 28s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 27s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
18s{color} | {color:green} hadoop-yarn-server-common in the patch 

[jira] [Commented] (YARN-8438) TestContainer.testKillOnNew flaky on trunk

2018-06-21 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519313#comment-16519313
 ] 

genericqa commented on YARN-8438:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
5s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 20s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
25s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m  
0s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 98m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8438 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928614/YARN-8438.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8cb6e91ec48a 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 
11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 43541a1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21073/testReport/ |
| Max. process+thread count | 408 (vs. ulimit of 1) |
| modules | C: 

[jira] [Commented] (YARN-8438) TestContainer.testKillOnNew flaky on trunk

2018-06-21 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519245#comment-16519245
 ] 

Szilard Nemeth commented on YARN-8438:
--

Hi [~szegedim]!

Thanks for your comments.

Indeed, the name you suggested is better so I changed it.

That's a good point you brought up. If I understand correctly, you mean
thread-safety issues; simply performing the getTime call inside a synchronized
block solves that. Unfortunately, I could not declare the method itself
synchronized: it is an interface method, so that would not compile.
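
A minimal sketch of the synchronized-block approach mentioned above, assuming a
Clock-style interface that exposes a single getTime() method (the class and
field names here are illustrative, not the actual patch):

{code:java}
// Illustrative only: the interface method itself cannot be declared
// synchronized, but the implementation can serialize access with a
// synchronized block and keep returned values monotonically non-decreasing.
public interface Clock {
  long getTime();
}

public class GuardedClock implements Clock {
  private final Object lock = new Object();
  private long lastTime = Long.MIN_VALUE;

  @Override
  public long getTime() {
    synchronized (lock) {
      // Never hand out a timestamp smaller than one already returned.
      lastTime = Math.max(lastTime, System.currentTimeMillis());
      return lastTime;
    }
  }
}
{code}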

 

 

> TestContainer.testKillOnNew flaky on trunk
> --
>
> Key: YARN-8438
> URL: https://issues.apache.org/jira/browse/YARN-8438
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8438.001.patch, YARN-8438.002.patch
>
>
> Running this test several times (e.g. 30), it fails ~5-10 times.
> Stacktrace: 
> {code:java}
> java.lang.AssertionError
>     at org.junit.Assert.fail(Assert.java:86)
>     at org.junit.Assert.assertTrue(Assert.java:41)
>     at org.junit.Assert.assertTrue(Assert.java:52)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.container.TestContainer.testKillOnNew(TestContainer.java:594)
> {code}
> TestContainer:594 is currently the following code in trunk:
> {code:java}
> Assert.assertTrue(containerMetrics.finishTime.value() > containerMetrics.startTime.value());
> {code}
> So sometimes the finish time is not greater than the start time.
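
If the start and the finish land in the same millisecond, the strict
greater-than comparison fails intermittently. A hedged illustration of a more
tolerant assertion (not necessarily the fix in the attached patches):

{code:java}
// Illustrative only: allow equal timestamps, since a millisecond clock can
// return the same value for two events that happen close together.
Assert.assertTrue(
    containerMetrics.finishTime.value() >= containerMetrics.startTime.value());
{code}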






[jira] [Updated] (YARN-8438) TestContainer.testKillOnNew flaky on trunk

2018-06-21 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8438:
-
Attachment: YARN-8438.002.patch

> TestContainer.testKillOnNew flaky on trunk
> --
>
> Key: YARN-8438
> URL: https://issues.apache.org/jira/browse/YARN-8438
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8438.001.patch, YARN-8438.002.patch
>
>
> Running this test several times (e.g. 30), it fails ~5-10 times.
> Stacktrace: 
> {code:java}
> java.lang.AssertionError
>     at org.junit.Assert.fail(Assert.java:86)
>     at org.junit.Assert.assertTrue(Assert.java:41)
>     at org.junit.Assert.assertTrue(Assert.java:52)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.container.TestContainer.testKillOnNew(TestContainer.java:594)
> {code}
> TestContainer:594 is currently the following code in trunk:
> {code:java}
> Assert.assertTrue(containerMetrics.finishTime.value() > containerMetrics.startTime.value());
> {code}
> So sometimes the finish time is not greater than the start time.






[jira] [Updated] (YARN-8438) TestContainer.testKillOnNew flaky on trunk

2018-06-21 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8438:
-
Attachment: (was: YARN-8438.002.patch)

> TestContainer.testKillOnNew flaky on trunk
> --
>
> Key: YARN-8438
> URL: https://issues.apache.org/jira/browse/YARN-8438
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8438.001.patch
>
>
> Running this test several times (e.g. 30), it fails ~5-10 times.
> Stacktrace: 
> {code:java}
> java.lang.AssertionError
>     at org.junit.Assert.fail(Assert.java:86)
>     at org.junit.Assert.assertTrue(Assert.java:41)
>     at org.junit.Assert.assertTrue(Assert.java:52)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.container.TestContainer.testKillOnNew(TestContainer.java:594)
> {code}
> TestContainer:594 is currently the following code in trunk:
> {code:java}
> Assert.assertTrue(containerMetrics.finishTime.value() > containerMetrics.startTime.value());
> {code}
> So sometimes the finish time is not greater than the start time.






[jira] [Updated] (YARN-8438) TestContainer.testKillOnNew flaky on trunk

2018-06-21 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-8438:
-
Attachment: YARN-8438.002.patch

> TestContainer.testKillOnNew flaky on trunk
> --
>
> Key: YARN-8438
> URL: https://issues.apache.org/jira/browse/YARN-8438
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Szilard Nemeth
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-8438.001.patch, YARN-8438.002.patch
>
>
> Running this test several times (e.g. 30), it fails ~5-10 times.
> Stacktrace: 
> {code:java}
> java.lang.AssertionError
>     at org.junit.Assert.fail(Assert.java:86)
>     at org.junit.Assert.assertTrue(Assert.java:41)
>     at org.junit.Assert.assertTrue(Assert.java:52)
>     at org.apache.hadoop.yarn.server.nodemanager.containermanager.container.TestContainer.testKillOnNew(TestContainer.java:594)
> {code}
> TestContainer:594 is currently the following code in trunk:
> {code:java}
> Assert.assertTrue(containerMetrics.finishTime.value() > containerMetrics.startTime.value());
> {code}
> So sometimes the finish time is not greater than the start time.






[jira] [Commented] (YARN-7451) Add missing tests to verify the presence of custom resources of RM apps and scheduler webservice endpoints

2018-06-21 Thread Szilard Nemeth (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519218#comment-16519218
 ] 

Szilard Nemeth commented on YARN-7451:
--

Hi [~rkanter]!

Thanks for your comments.
 # The test failure is a known flaky-test issue; the findbugs warning was also
a known one, and it should disappear with my new patch since I have fixed it.
 # Imports are fixed.
 # Good catch, I think the static blocks just remained there accidentally.

> Add missing tests to verify the presence of custom resources of RM apps and 
> scheduler webservice endpoints
> --
>
> Key: YARN-7451
> URL: https://issues.apache.org/jira/browse/YARN-7451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-7451.001.patch, YARN-7451.002.patch, 
> YARN-7451.003.patch, YARN-7451.004.patch, YARN-7451.005.patch, 
> YARN-7451.006.patch, YARN-7451.007.patch, YARN-7451.008.patch, 
> YARN-7451.009.patch, YARN-7451.010.patch, YARN-7451.011.patch, 
> YARN-7451.012.patch, YARN-7451.013.patch, YARN-7451.014.patch, 
> YARN-7451.015.patch, 
> YARN-7451__Expose_custom_resource_types_on_RM_scheduler_API_as_flattened_map01_02.patch
>
>
>  
> Originally, this issue was about serializing custom resources along with 
> normal resources in the RM apps and scheduler webservice endpoints.
> However, as YARN-7817 implemented this sooner, this issue is a complement of 
> YARN-7817, adding several unit tests to verify the correctness of the 
> responses of these webservice endpoints.
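
As a hedged, self-contained sketch of the kind of check such unit tests perform
(the resource names and values below are invented for illustration; this is not
the actual test code from the attached patches):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

public class CustomResourceResponseCheck {
  public static void main(String[] args) {
    // Pretend this map was parsed from an RM apps/scheduler webservice response.
    Map<String, Long> resources = new LinkedHashMap<>();
    resources.put("memory-mb", 4096L);
    resources.put("vcores", 4L);
    resources.put("custom-resource-1", 2L); // a custom resource type

    // The response should expose custom resource types alongside the
    // standard memory and vcore entries.
    if (!resources.containsKey("custom-resource-1")) {
      throw new AssertionError("custom resource missing from response");
    }
  }
}
{code}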






[jira] [Updated] (YARN-7451) Add missing tests to verify the presence of custom resources of RM apps and scheduler webservice endpoints

2018-06-21 Thread Szilard Nemeth (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szilard Nemeth updated YARN-7451:
-
Attachment: YARN-7451.015.patch

> Add missing tests to verify the presence of custom resources of RM apps and 
> scheduler webservice endpoints
> --
>
> Key: YARN-7451
> URL: https://issues.apache.org/jira/browse/YARN-7451
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager, restapi
>Affects Versions: 3.0.0
>Reporter: Grant Sohn
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: YARN-7451.001.patch, YARN-7451.002.patch, 
> YARN-7451.003.patch, YARN-7451.004.patch, YARN-7451.005.patch, 
> YARN-7451.006.patch, YARN-7451.007.patch, YARN-7451.008.patch, 
> YARN-7451.009.patch, YARN-7451.010.patch, YARN-7451.011.patch, 
> YARN-7451.012.patch, YARN-7451.013.patch, YARN-7451.014.patch, 
> YARN-7451.015.patch, 
> YARN-7451__Expose_custom_resource_types_on_RM_scheduler_API_as_flattened_map01_02.patch
>
>
>  
> Originally, this issue was about serializing custom resources along with 
> normal resources in the RM apps and scheduler webservice endpoints.
> However, as YARN-7817 implemented this sooner, this issue is a complement of 
> YARN-7817, adding several unit tests to verify the correctness of the 
> responses of these webservice endpoints.






[jira] [Comment Edited] (YARN-8103) Add CLI interface to query node attributes

2018-06-21 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519051#comment-16519051
 ] 

Bibin A Chundatt edited comment on YARN-8103 at 6/21/18 8:10 AM:
-

[~Naganarasimha]

Attached an updated patch with the newline addition for NodeCLI too.

Additional fixes:

# equals fix in NodeAttributeKey and NodeAttribute (see the sketch below)



was (Author: bibinchundatt):
[~Naganarasimha]

Attached updated patch with new line addition for NodeCLI too..
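
A hedged sketch of what an equals fix of this kind typically looks like, keyed
on the attribute prefix and name (the field names and class shape are
assumptions for illustration, not the actual patch):

{code:java}
import java.util.Objects;

public class NodeAttributeKeySketch {
  private final String attributePrefix;
  private final String attributeName;

  public NodeAttributeKeySketch(String attributePrefix, String attributeName) {
    this.attributePrefix = attributePrefix;
    this.attributeName = attributeName;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof NodeAttributeKeySketch)) {
      return false;
    }
    NodeAttributeKeySketch other = (NodeAttributeKeySketch) o;
    // equals and hashCode must agree on the same fields.
    return Objects.equals(attributePrefix, other.attributePrefix)
        && Objects.equals(attributeName, other.attributeName);
  }

  @Override
  public int hashCode() {
    return Objects.hash(attributePrefix, attributeName);
  }
}
{code}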

> Add CLI interface to  query node attributes
> ---
>
> Key: YARN-8103
> URL: https://issues.apache.org/jira/browse/YARN-8103
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8103-YARN-3409.001.patch, 
> YARN-8103-YARN-3409.002.patch, YARN-8103-YARN-3409.003.patch, 
> YARN-8103-YARN-3409.004.patch, YARN-8103-YARN-3409.005.patch, 
> YARN-8103-YARN-3409.WIP.patch
>
>
> YARN-8100 will add an API interface for querying the attributes. This issue
> adds a CLI interface for querying the node attributes of each node and
> listing all attributes in the cluster.
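
For context, a hypothetical invocation of such a CLI might look like the
following; the command and flag names are illustrative guesses, not the actual
options added by the attached patches:

{noformat}
# List all node attributes known to the cluster (hypothetical flags)
yarn nodeattributes -list

# Show the attributes attached to a particular node (hypothetical flags)
yarn nodeattributes -nodetoattributes -nodes host1.example.com
{noformat}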






[jira] [Commented] (YARN-8103) Add CLI interface to query node attributes

2018-06-21 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519051#comment-16519051
 ] 

Bibin A Chundatt commented on YARN-8103:


[~Naganarasimha]

Attached an updated patch with the newline addition for NodeCLI too.

> Add CLI interface to  query node attributes
> ---
>
> Key: YARN-8103
> URL: https://issues.apache.org/jira/browse/YARN-8103
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8103-YARN-3409.001.patch, 
> YARN-8103-YARN-3409.002.patch, YARN-8103-YARN-3409.003.patch, 
> YARN-8103-YARN-3409.004.patch, YARN-8103-YARN-3409.005.patch, 
> YARN-8103-YARN-3409.WIP.patch
>
>
> YARN-8100 will add an API interface for querying the attributes. This issue
> adds a CLI interface for querying the node attributes of each node and
> listing all attributes in the cluster.






[jira] [Updated] (YARN-8103) Add CLI interface to query node attributes

2018-06-21 Thread Bibin A Chundatt (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-8103:
---
Attachment: YARN-8103-YARN-3409.005.patch

> Add CLI interface to  query node attributes
> ---
>
> Key: YARN-8103
> URL: https://issues.apache.org/jira/browse/YARN-8103
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Major
> Attachments: YARN-8103-YARN-3409.001.patch, 
> YARN-8103-YARN-3409.002.patch, YARN-8103-YARN-3409.003.patch, 
> YARN-8103-YARN-3409.004.patch, YARN-8103-YARN-3409.005.patch, 
> YARN-8103-YARN-3409.WIP.patch
>
>
> YARN-8100 will add an API interface for querying the attributes. This issue
> adds a CLI interface for querying the node attributes of each node and
> listing all attributes in the cluster.






[jira] [Updated] (YARN-8446) Support of managing multi-dimensional resources

2018-06-21 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8446:
--
Description: 
To better support long-running jobs and services, we need to extend YARN to
support other resources, such as disk, IP and port. The current resource types
are not flexible enough to make this work, because they only support the
COUNTABLE type, which is a single value.

 

I propose to extend resource types by adding two more general types, such as
SET and MULTIDIMENSIONAL (naming TBD), with schemas like:

*SET*:  a set of values

{noformat}

["10.100.0.1", "10.100.0.2"]

["9981", "9982", "9983"]

{noformat}

*MULTIDIMENSIONAL*: a set of values, each value can be a resource instance with 
multiple values. 

{noformat}

[ disk1 : \{ attributes: { "type" : "SATA", "index" : 1 },  "size" : "500gb", 
"iops" : "1000" },

  disk2 : \{ attributes: { "type" : "SSD", "index" :   2 },  "size" : "100gb", 
"iops" : "1000" } ]

{noformat}

This way, we could support better resource management and isolation. The idea
is to make this as general as possible so we can easily support other complex
resources.

 

  was:
To better support long running jobs and services, we need to extend YARN to 
support other resources, such as disk, IP, port. Current resource types is not 
flexible enough to make this work because it only supports COUNTABLE type which 
is single value.

 

Propose to extend resource types by adding two more general types, such as SET, 
MULTIDIMENSIONAL (naming TBD). With schema like

*SET*:  a set of values

e.g

["10.100.0.1", "10.100.0.2"]

["9981", "9982", "9983"]

*MULTIDIMENSIONAL*: a set of values, each value can be a resource instance with 
multiple values. 

[ disk1 : \{ attributes: { "type" : "SATA", "index" : 1 },  "size" : "500gb", 
"iops" : "1000" },

 disk2 : \{ attributes:  { "type" : "SSD", "index" :   2 },  "size" : "100gb", 
"iops" : "1000" } ]

this way, we could support better resource management and isolations. The idea 
is to make this as general as possible so we can easily support some other 
complex resources.

 


> Support of managing multi-dimensional resources
> ---
>
> Key: YARN-8446
> URL: https://issues.apache.org/jira/browse/YARN-8446
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Weiwei Yang
>Priority: Major
>
> To better support long-running jobs and services, we need to extend YARN to
> support other resources, such as disk, IP and port. The current resource
> types are not flexible enough to make this work, because they only support
> the COUNTABLE type, which is a single value.
>  
> I propose to extend resource types by adding two more general types, such as
> SET and MULTIDIMENSIONAL (naming TBD), with schemas like:
> *SET*:  a set of values
> {noformat}
> ["10.100.0.1", "10.100.0.2"]
> ["9981", "9982", "9983"]
> {noformat}
> *MULTIDIMENSIONAL*: a set of values, each value can be a resource instance 
> with multiple values. 
> {noformat}
> [ disk1 : \{ attributes: { "type" : "SATA", "index" : 1 },  "size" : "500gb", 
> "iops" : "1000" },
>   disk2 : \{ attributes: { "type" : "SSD", "index" :   2 },  "size" : 
> "100gb", "iops" : "1000" } ]
> {noformat}
> This way, we could support better resource management and isolation. The
> idea is to make this as general as possible so we can easily support other
> complex resources.
>  
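
A hedged sketch of how a SET-typed resource could be tracked on the scheduler
side, assuming values are handed out one at a time (all class and method names
are invented for illustration; the proposal above leaves the schema and naming
TBD):

{code:java}
import java.util.LinkedHashSet;
import java.util.Set;

public class SetResourceValue {
  private final String resourceName;          // e.g. "ip" or "port"
  private final Set<String> available = new LinkedHashSet<>();

  public SetResourceValue(String resourceName, Set<String> values) {
    this.resourceName = resourceName;
    this.available.addAll(values);
  }

  // Allocation hands out one concrete value from the set, or null if exhausted.
  public synchronized String allocate() {
    if (available.isEmpty()) {
      return null;
    }
    String value = available.iterator().next();
    available.remove(value);
    return value;
  }

  // Release returns a previously allocated value to the pool.
  public synchronized void release(String value) {
    available.add(value);
  }
}
{code}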






[jira] [Updated] (YARN-8446) Support of managing multi-dimensional resources

2018-06-21 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8446:
--
Description: 
To better support long-running jobs and services, we need to extend YARN to
support other resources, such as disk, IP and port. The current resource types
are not flexible enough to make this work, because they only support the
COUNTABLE type, which is a single value.

 

I propose to extend resource types by adding two more general types, such as
SET and MULTIDIMENSIONAL (naming TBD), with schemas like:

*SET*:  a set of values
{noformat}
["10.100.0.1", "10.100.0.2"]
["9981", "9982", "9983"]
{noformat}
*MULTIDIMENSIONAL*: a set of values, each value can be a resource instance with 
multiple values. 
{noformat}
[ disk1 : { attributes: { "type" : "SATA", "index" : 1 },  "size" : "500gb", 
"iops" : "1000" },

  disk2 : { attributes: { "type" : "SSD", "index" : 2 },  "size" : "100gb", 
"iops" : "1000" } ]
{noformat}
This way, we could support better resource management and isolation. The idea
is to make this as general as possible so we can easily support other complex
resources.

 

  was:
To better support long running jobs and services, we need to extend YARN to 
support other resources, such as disk, IP, port. Current resource types is not 
flexible enough to make this work because it only supports COUNTABLE type which 
is single value.

 

Propose to extend resource types by adding two more general types, such as SET, 
MULTIDIMENSIONAL (naming TBD). With schema like

*SET*:  a set of values

{noformat}

["10.100.0.1", "10.100.0.2"]

["9981", "9982", "9983"]

{noformat}

*MULTIDIMENSIONAL*: a set of values, each value can be a resource instance with 
multiple values. 

{noformat}

[ disk1 : \{ attributes: { "type" : "SATA", "index" : 1 },  "size" : "500gb", 
"iops" : "1000" },

  disk2 : \{ attributes: { "type" : "SSD", "index" :   2 },  "size" : "100gb", 
"iops" : "1000" } ]

{noformat}

this way, we could support better resource management and isolations. The idea 
is to make this as general as possible so we can easily support some other 
complex resources.

 


> Support of managing multi-dimensional resources
> ---
>
> Key: YARN-8446
> URL: https://issues.apache.org/jira/browse/YARN-8446
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Weiwei Yang
>Priority: Major
>
> To better support long-running jobs and services, we need to extend YARN to
> support other resources, such as disk, IP and port. The current resource
> types are not flexible enough to make this work, because they only support
> the COUNTABLE type, which is a single value.
>  
> I propose to extend resource types by adding two more general types, such as
> SET and MULTIDIMENSIONAL (naming TBD), with schemas like:
> *SET*:  a set of values
> {noformat}
> ["10.100.0.1", "10.100.0.2"]
> ["9981", "9982", "9983"]
> {noformat}
> *MULTIDIMENSIONAL*: a set of values, each value can be a resource instance 
> with multiple values. 
> {noformat}
> [ disk1 : { attributes: { "type" : "SATA", "index" : 1 },  "size" : "500gb", 
> "iops" : "1000" },
>   disk2 : { attributes: { "type" : "SSD", "index" : 2 },  "size" : "100gb", 
> "iops" : "1000" } ]
> {noformat}
> This way, we could support better resource management and isolation. The
> idea is to make this as general as possible so we can easily support other
> complex resources.
>  






[jira] [Updated] (YARN-8446) Support of managing multi-dimensional resources

2018-06-21 Thread Weiwei Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/YARN-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Weiwei Yang updated YARN-8446:
--
Description: 
To better support long-running jobs and services, we need to extend YARN to
support other resources, such as disk, IP and port. The current resource types
are not flexible enough to make this work, because they only support the
COUNTABLE type, which is a single value.

 

I propose to extend resource types by adding two more general types, such as
SET and MULTIDIMENSIONAL (naming TBD), with schemas like:

*SET*:  a set of values

e.g.

["10.100.0.1", "10.100.0.2"]

["9981", "9982", "9983"]

*MULTIDIMENSIONAL*: a set of values, each value can be a resource instance with 
multiple values. 

[ disk1 : \{ attributes: { "type" : "SATA", "index" : 1 },  "size" : "500gb", 
"iops" : "1000" },

 disk2 : \{ attributes:  { "type" : "SSD", "index" :   2 },  "size" : "100gb", 
"iops" : "1000" } ]

This way, we could support better resource management and isolation. The idea
is to make this as general as possible so we can easily support other complex
resources.

 

  was:
To better support long running jobs and services, we need to extend YARN to 
support other resources, such as disk, IP, port. Current resource types is not 
flexible enough to make this work because it only supports COUNTABLE type which 
is single value.

 

Propose to extend resource types by adding two more general types, such as SET, 
MULTIDIMENSIONAL (naming TBD). With schema like

*SET*:  a set of values

e.g

["10.100.0.1", "10.100.0.2"]

["9981", "9982", "9983"]

*MULTIDIMENSIONAL*: a set of values, each value can be a resource instance with 
multiple values. 

[ disk1 : \{ attributes: { "type" : "SATA", "index" : 1 },  "size" : "500gb", 
"iops" : "1000" },

disk2 : \{ attributes: { "type" : "SSD", "index" : 2 },  "size" : "100gb", 
"iops" : "1000" } ]

this way, we could support better resource management and isolations. The idea 
is to make this as general as possible so we can easily support some other 
complex resources.

 


> Support of managing multi-dimensional resources
> ---
>
> Key: YARN-8446
> URL: https://issues.apache.org/jira/browse/YARN-8446
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Weiwei Yang
>Priority: Major
>
> To better support long-running jobs and services, we need to extend YARN to
> support other resources, such as disk, IP and port. The current resource
> types are not flexible enough to make this work, because they only support
> the COUNTABLE type, which is a single value.
>  
> I propose to extend resource types by adding two more general types, such as
> SET and MULTIDIMENSIONAL (naming TBD), with schemas like:
> *SET*:  a set of values
> e.g.
> ["10.100.0.1", "10.100.0.2"]
> ["9981", "9982", "9983"]
> *MULTIDIMENSIONAL*: a set of values, each value can be a resource instance 
> with multiple values. 
> [ disk1 : \{ attributes: { "type" : "SATA", "index" : 1 },  "size" : "500gb", 
> "iops" : "1000" },
>  disk2 : \{ attributes:  { "type" : "SSD", "index" :   2 },  "size" : 
> "100gb", "iops" : "1000" } ]
> This way, we could support better resource management and isolation. The
> idea is to make this as general as possible so we can easily support other
> complex resources.
>  






[jira] [Created] (YARN-8446) Support of managing multi-dimensional resources

2018-06-21 Thread Weiwei Yang (JIRA)
Weiwei Yang created YARN-8446:
-

 Summary: Support of managing multi-dimensional resources
 Key: YARN-8446
 URL: https://issues.apache.org/jira/browse/YARN-8446
 Project: Hadoop YARN
  Issue Type: New Feature
Reporter: Weiwei Yang


To better support long-running jobs and services, we need to extend YARN to
support other resources, such as disk, IP and port. The current resource types
are not flexible enough to make this work, because they only support the
COUNTABLE type, which is a single value.

 

I propose to extend resource types by adding two more general types, such as
SET and MULTIDIMENSIONAL (naming TBD), with schemas like:

*SET*:  a set of values

e.g.

["10.100.0.1", "10.100.0.2"]

["9981", "9982", "9983"]

*MULTIDIMENSIONAL*: a set of values, each value can be a resource instance with 
multiple values. 

[ disk1 : \{ attributes: { "type" : "SATA", "index" : 1 },  "size" : "500gb", 
"iops" : "1000" },

disk2 : \{ attributes: { "type" : "SSD", "index" : 2 },  "size" : "100gb", 
"iops" : "1000" } ]

This way, we could support better resource management and isolation. The idea
is to make this as general as possible so we can easily support other complex
resources.

 






[jira] [Commented] (YARN-8434) Nodemanager not registering to active RM in federation

2018-06-21 Thread Bibin A Chundatt (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16519002#comment-16519002
 ] 

Bibin A Chundatt commented on YARN-8434:


cc:  [~giovanni.fumarola]

> Nodemanager not registering to active RM in federation
> --
>
> Key: YARN-8434
> URL: https://issues.apache.org/jira/browse/YARN-8434
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Blocker
>
> FederationRMFailoverProxyProvider doesn't handle connecting to the active RM.
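
A hedged sketch of the behavior the proxy provider would need, based only on
the one-line description above. The performFailover signature follows Hadoop's
FailoverProxyProvider retry interface, but the body and the helper names
(closeProxy, resolveActiveRMFromStateStore, createProxy, ActiveRMInfo) are
purely hypothetical:

{code:java}
// Illustrative only: on failover, re-resolve which RM is currently active
// instead of reusing the address cached at startup.
@Override
public synchronized void performFailover(T currentProxy) {
  closeProxy(currentProxy);
  ActiveRMInfo active = resolveActiveRMFromStateStore();
  current = createProxy(active);
}
{code}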






[jira] [Commented] (YARN-8435) NullPointerException when client first time connect to Yarn Router

2018-06-21 Thread genericqa (JIRA)


[ 
https://issues.apache.org/jira/browse/YARN-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16518950#comment-16518950
 ] 

genericqa commented on YARN-8435:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 38s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 20s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
19s{color} | {color:green} hadoop-yarn-server-router in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | YARN-8435 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12928563/YARN-8435.v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 1557adf1baa0 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 43541a1 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/21070/testReport/ |
| Max. process+thread count | 665 (vs. ulimit of 1) |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/21070/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NullPointerException when client first time connect to Yarn Router
>