[jira] [Updated] (YARN-6836) [ATS1/1.5] "IllegalStateException: connect in progress" while posting entity into TimelineServer

2017-07-17 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-6836:

Summary: [ATS1/1.5] "IllegalStateException: connect in progress" while 
posting entity into TimelineServer  (was: [ATS1/1.5] IllegalStateException 
while posting entity into TimelineServer)

> [ATS1/1.5] "IllegalStateException: connect in progress" while posting entity 
> into TimelineServer
> 
>
> Key: YARN-6836
> URL: https://issues.apache.org/jira/browse/YARN-6836
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineclient
>Reporter: Rohith Sharma K S
>
> It is observed that the timeline client is unable to post entities to the timeline server 
> and throws an IllegalStateException.
> {noformat}
> 2017-07-13 06:42:15,376 ERROR metrics.SystemMetricsPublisher 
> (SystemMetricsPublisher.java:putEntity(549)) - Error when publishing entity 
> [YARN_APPLICATION,application_1499926197597_0002]
> com.sun.jersey.api.client.ClientHandlerException: 
> java.lang.IllegalStateException: connect in progress
> at 
> com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
> at 
> org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:237)
> {noformat}






[jira] [Commented] (YARN-6836) [ATS1/1.5] IllegalStateException while posting entity into TimelineServer

2017-07-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091118#comment-16091118
 ] 

Rohith Sharma K S commented on YARN-6836:
-

This behavior is seen in the ResourceManager while publishing entities to the 
timeline server. The full exception trace is below:
{noformat}
2017-07-13 06:42:15,376 ERROR metrics.SystemMetricsPublisher 
(SystemMetricsPublisher.java:putEntity(549)) - Error when publishing entity 
[YARN_APPLICATION,application_1499926197597_0002]
com.sun.jersey.api.client.ClientHandlerException: 
java.lang.IllegalStateException: connect in progress
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:237)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineClientConnectionRetry.retryOn(TimelineClientImpl.java:186)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter.handle(TimelineClientImpl.java:250)
at com.sun.jersey.api.client.Client.handle(Client.java:648)
at com.sun.jersey.api.client.WebResource.handle(WebResource.java:670)
at com.sun.jersey.api.client.WebResource.access$200(WebResource.java:74)
at 
com.sun.jersey.api.client.WebResource$Builder.post(WebResource.java:563)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPostingObject(TimelineWriter.java:154)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:115)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineWriter$1.run(TimelineWriter.java:112)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineWriter.doPosting(TimelineWriter.java:112)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineWriter.putEntities(TimelineWriter.java:92)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl.putEntities(TimelineClientImpl.java:348)
at 
org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher.putEntity(SystemMetricsPublisher.java:536)
at 
org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher.publishApplicationACLsUpdatedEvent(SystemMetricsPublisher.java:392)
at 
org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher.handleSystemMetricsEvent(SystemMetricsPublisher.java:257)
at 
org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$ForwardingEventHandler.handle(SystemMetricsPublisher.java:564)
at 
org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$ForwardingEventHandler.handle(SystemMetricsPublisher.java:559)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:184)
at 
org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: connect in progress
at 
sun.net.www.protocol.http.HttpURLConnection.setRequestMethod(HttpURLConnection.java:550)
at 
org.apache.hadoop.security.authentication.client.KerberosAuthenticator.authenticate(KerberosAuthenticator.java:187)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticator.authenticate(DelegationTokenAuthenticator.java:132)
at 
org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:216)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticatedURL.openConnection(DelegationTokenAuthenticatedURL.java:322)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineURLConnectionFactory.getHttpURLConnection(TimelineClientImpl.java:475)
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler._invoke(URLConnectionClientHandler.java:159)
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:147)
... 24 more
{noformat}
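
For illustration, a minimal, hypothetical sketch of the underlying JDK behaviour (assuming the sun.net.www.protocol.http.HttpURLConnection implementation named in the trace, not the actual Hadoop code): once connect() has been initiated on a connection, a later setRequestMethod() call throws IllegalStateException("connect in progress"). In the trace above, KerberosAuthenticator.authenticate() appears to hit exactly that call on a connection the retry path had already started connecting. The URL below is an arbitrary placeholder.
{code}
import java.net.HttpURLConnection;
import java.net.URL;

public class ConnectInProgressRepro {
  public static void main(String[] args) throws Exception {
    // Any reachable HTTP endpoint works; the URL here is just a placeholder.
    URL url = new URL("http://example.org/");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();

    // Start the connection first...
    conn.connect();

    // ...then try to change the request method, as KerberosAuthenticator does.
    // On the JDK's built-in HTTP handler this throws
    // java.lang.IllegalStateException: connect in progress
    conn.setRequestMethod("OPTIONS");
  }
}
{code}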

> [ATS1/1.5] IllegalStateException while posting entity into TimelineServer
> -
>
> Key: YARN-6836
> URL: https://issues.apache.org/jira/browse/YARN-6836
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineclient
>Reporter: Rohith Sharma K S
>
> It is observed that the timeline client is unable to post entities to the timeline server 
> and throws an IllegalStateException.
> {noformat}
> 2017-07-13 06:42:15,376 

[jira] [Created] (YARN-6836) [ATS1/1.5] IllegalStateException while posting entity into TimelineServer

2017-07-17 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-6836:
---

 Summary: [ATS1/1.5] IllegalStateException while posting entity 
into TimelineServer
 Key: YARN-6836
 URL: https://issues.apache.org/jira/browse/YARN-6836
 Project: Hadoop YARN
  Issue Type: Bug
  Components: timelineclient
Reporter: Rohith Sharma K S


It is observed that the timeline client is unable to post entities to the timeline server 
and throws an IllegalStateException.
{noformat}
2017-07-13 06:42:15,376 ERROR metrics.SystemMetricsPublisher 
(SystemMetricsPublisher.java:putEntity(549)) - Error when publishing entity 
[YARN_APPLICATION,application_1499926197597_0002]
com.sun.jersey.api.client.ClientHandlerException: 
java.lang.IllegalStateException: connect in progress
at 
com.sun.jersey.client.urlconnection.URLConnectionClientHandler.handle(URLConnectionClientHandler.java:149)
at 
org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl$TimelineJerseyRetryFilter$1.run(TimelineClientImpl.java:237)
{noformat}






[jira] [Commented] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091110#comment-16091110
 ] 

Hadoop QA commented on YARN-5892:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 0s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
12s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} branch-2.8 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 35s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 8 new + 285 unchanged - 1 fixed = 293 total (was 286) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
39s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
19s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_131
 with JDK v1.8.0_131 generated 4 new + 973 unchanged - 0 fixed = 977 total (was 
973) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 11s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_131. 

[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-07-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091049#comment-16091049
 ] 

Rohith Sharma K S commented on YARN-6733:
-

bq. Actually, do you recollect, we all discussed this.
Ahh.. right.. let's keep it as-is and make the change in the documentation.

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these DAGs 
> with a doAs user. ATSv2 should store this information in a new table, perhaps 
> called the "sub_application" table. 
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.






[jira] [Commented] (YARN-6777) Support for ApplicationMasterService processing chain of interceptors

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091026#comment-16091026
 ] 

Hadoop QA commented on YARN-6777:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 5 new + 234 unchanged - 0 fixed = 239 total (was 234) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 32s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
43s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m  0s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 58s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
|   | 
hadoop.yarn.server.resourcemanager.TestOpportunisticContainerAllocatorAMService 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6777 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877711/YARN-6777.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 273470d24292 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5b00792 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 

[jira] [Commented] (YARN-5947) Create LeveldbConfigurationStore class using Leveldb as backing store

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16091021#comment-16091021
 ] 

Hadoop QA commented on YARN-5947:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5734 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
54s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 4s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
22s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
52s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
16s{color} | {color:green} YARN-5734 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} YARN-5734 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 52s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 2 new + 288 unchanged - 0 fixed = 290 total (was 288) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 generated 5 new + 0 unchanged - 0 fixed = 5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
26s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 45m  2s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}104m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
|  |  Boxing/unboxing to parse a primitive 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.LeveldbConfigurationStore.initialize(Configuration,
 Configuration)  At 
LeveldbConfigurationStore.java:org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.LeveldbConfigurationStore.initialize(Configuration,
 Configuration)  At LeveldbConfigurationStore.java:[line 81] |
|  |  Found reliance on default encoding in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.LeveldbConfigurationStore.initialize(Configuration,
 

[jira] [Commented] (YARN-6835) Remove runningContainers from ContainerScheduler

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090953#comment-16090953
 ] 

Hadoop QA commented on YARN-6835:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
43s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 16s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 5 new + 94 unchanged - 1 fixed = 99 total (was 95) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 11s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 23s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestNodeManagerShutdown |
|   | 
hadoop.yarn.server.nodemanager.containermanager.monitor.TestContainersMonitor |
|   | hadoop.yarn.server.nodemanager.TestNodeManagerReboot |
|   | hadoop.yarn.server.nodemanager.containermanager.TestContainerManager |
|   | hadoop.yarn.server.nodemanager.TestNodeManagerResync |
|   | 
hadoop.yarn.server.nodemanager.containermanager.TestContainerManagerRecovery |
|   | hadoop.yarn.server.nodemanager.containermanager.container.TestContainer |
|   | 
hadoop.yarn.server.nodemanager.containermanager.launcher.TestContainerLaunch |
|   | 
hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6835 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877706/YARN-6835.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 67ec05421dbb 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5b00792 |
| Default 

[jira] [Updated] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-17 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-5892:
-
Attachment: YARN-5892.branch-2.8.018.patch

bq. Replaced it with {{new ConcurrentHashMap().keySet("dummy")}}
Sigh. I need to be more careful about what is and is not in JDK 1.7. This 
method with this signature is also not in JDK 1.7.

Trying again with YARN-5892.branch-2.8.018.patch:
{{Collections.newSetFromMap(new ConcurrentHashMap());}}

[~sunilg], [~leftnoteasy], [~jlowe]: When the pre-commit build comes back 
clean, can you please have a look?
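
For reference, a small self-contained sketch of the difference (the class and variable names are illustrative only): Collections.newSetFromMap works on JDK 1.7, while both keySet-based alternatives require JDK 8.
{code}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentSetExample {
  public static void main(String[] args) {
    // JDK 1.6+/1.7-safe: a concurrent Set view backed by a ConcurrentHashMap.
    Set<String> activeUsers =
        Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());
    activeUsers.add("jane");
    System.out.println(activeUsers.contains("jane")); // true

    // JDK 8-only alternatives (the ones that tripped up the branch-2.8 build):
    //   ConcurrentHashMap.newKeySet()
    //   new ConcurrentHashMap<String, Boolean>().keySet(Boolean.TRUE)
  }
}
{code}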


> Support user-specific minimum user limit percentage in Capacity Scheduler
> -
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0-alpha3
>
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch, YARN-5892.013.patch, YARN-5892.014.patch, 
> YARN-5892.015.patch, YARN-5892.branch-2.015.patch, 
> YARN-5892.branch-2.016.patch, YARN-5892.branch-2.8.016.patch, 
> YARN-5892.branch-2.8.017.patch, YARN-5892.branch-2.8.018.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}






[jira] [Commented] (YARN-6612) Update fair scheduler policies to be aware of resource types

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090908#comment-16090908
 ] 

Hadoop QA commented on YARN-6612:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} YARN-6612 does not apply to YARN-3926. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-6612 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877684/YARN-6612.YARN-3926.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16474/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update fair scheduler policies to be aware of resource types
> 
>
> Key: YARN-6612
> URL: https://issues.apache.org/jira/browse/YARN-6612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-6612.YARN-3926.001.patch
>
>







[jira] [Updated] (YARN-6777) Support for ApplicationMasterService processing chain of interceptors

2017-07-17 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6777:
--
Attachment: YARN-6777.004.patch

Thanks for the rev [~subru]. Updated the patch based on your suggestions.
* Simplified things a bit by removing the Interceptor; everything is now a 
processor.

> Support for ApplicationMasterService processing chain of interceptors
> -
>
> Key: YARN-6777
> URL: https://issues.apache.org/jira/browse/YARN-6777
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6777.001.patch, YARN-6777.002.patch, 
> YARN-6777.003.patch, YARN-6777.004.patch
>
>
> This JIRA extends the Processor introduced in YARN-6776 with a configurable 
> processing chain of interceptors.






[jira] [Commented] (YARN-6831) Miscellaneous refactoring changes of ContainScheduler

2017-07-17 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090832#comment-16090832
 ] 

Arun Suresh commented on YARN-6831:
---

Do take a look at YARN-6835, where I've posted an initial patch removing 
*runningContainers*.

> Miscellaneous refactoring changes of ContainScheduler 
> --
>
> Key: YARN-6831
> URL: https://issues.apache.org/jira/browse/YARN-6831
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
> ContainerScheduler:
> * Make ResourceUtilizationTracker pluggable. That way, we could use a 
> different tracker when oversubscription is enabled.
> * ContainerScheduler
>   ## Why do we need maxOppQueueLength given queuingLimit?
>   ## Is there value in splitting runningContainers into runningGuaranteed and 
> runningOpportunistic?
>   ## getOpportunisticContainersStatus method implementation feels awkward. How 
> about capturing the state in a field here, and having metrics etc. pull from 
> here?
>   ## startContainersFromQueue: the local variable resourcesAvailable is unnecessary.
> * OpportunisticContainersStatus
>   ## Let us clearly differentiate between allocated, used and utilized. Maybe 
> we should rename the current Used methods to Allocated?
>   ## I prefer either the full name Opportunistic (in methods) or Opp (the shortest 
> name that makes sense). Opport is neither short nor fully descriptive.
>   ## Have we considered folding the ContainerQueuingLimit class into this?
> We decided to move these issues into this follow-up jira to keep YARN-6706 
> moving forward and unblock the oversubscription work.






[jira] [Updated] (YARN-6835) Remove runningContainers from ContainerScheduler

2017-07-17 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6835:
--
Attachment: YARN-6835.001.patch

Attaching initial patch

> Remove runningContainers from ContainerScheduler
> 
>
> Key: YARN-6835
> URL: https://issues.apache.org/jira/browse/YARN-6835
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Minor
> Attachments: YARN-6835.001.patch
>
>
> The *runningContainers* collection contains both running containers and 
> containers that are scheduled but not yet started.
> We can remove this data structure completely by introducing a *LAUNCHING* 
> container state.






[jira] [Updated] (YARN-6835) Remove runningContainers from ContainerScheduler

2017-07-17 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6835:
--
Priority: Minor  (was: Major)

> Remove runningContainers from ContainerScheduler
> 
>
> Key: YARN-6835
> URL: https://issues.apache.org/jira/browse/YARN-6835
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>Priority: Minor
>
> The *runningContainers* collection contains both running containers and 
> containers that are scheduled but not yet started.
> We can remove this data structure completely by introducing a *LAUNCHING* 
> container state.






[jira] [Created] (YARN-6835) Remove runningContainers from ContainerScheduler

2017-07-17 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-6835:
-

 Summary: Remove runningContainers from ContainerScheduler
 Key: YARN-6835
 URL: https://issues.apache.org/jira/browse/YARN-6835
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Arun Suresh
Assignee: Arun Suresh


The *runningContainers* collection contains both running containers and 
containers that are scheduled but not yet started.

We can remove this data structure completely by introducing a *LAUNCHING* 
container state.
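
A rough, hypothetical sketch of the idea (the names below are illustrative, not the actual NodeManager classes): with an explicit LAUNCHING state, the "scheduled or running" set can be derived from the container's own state instead of being maintained in a separate map.
{code}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class LaunchingStateSketch {
  enum Phase { QUEUED, LAUNCHING, RUNNING, DONE }

  static class TrackedContainer {
    final String id;
    final Phase phase;
    TrackedContainer(String id, Phase phase) { this.id = id; this.phase = phase; }
  }

  // Instead of a dedicated runningContainers map, filter by state on demand.
  static List<TrackedContainer> launchingOrRunning(Collection<TrackedContainer> all) {
    List<TrackedContainer> result = new ArrayList<>();
    for (TrackedContainer c : all) {
      if (c.phase == Phase.LAUNCHING || c.phase == Phase.RUNNING) {
        result.add(c);
      }
    }
    return result;
  }
}
{code}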






[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2017-07-17 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090796#comment-16090796
 ] 

Vrushali C commented on YARN-4455:
--

Yes, thanks for the patch [~varun_saxena]! 


bq. Primarily used to distinguish between metrics written in flow run table 
from different apps. So that we have 2 different puts for different apps and 
one does not overwrite metric of other just because the timestamps were same.
Yes, this is correct.

bq. Not really required for entity and app tables but multiplying it by a 
factor ensures that code path is common while writing.
I am wondering if this might be a concern for the entity or application tables. 
When we multiply the timestamp by TimestampGenerator#TS_MULTIPLIER, I am 
wondering whether the meaning of the timestamp changes, and whether it could roll 
over and no longer mean the right thing. 

For example, if the metrics data was written with a timestamp of today at 3pm, 
the multiplier will move it to another timestamp. 

I am reading through the unit tests to see how the metrics are being read from hbase 
in the reader. I believe it's lines 1745 onwards. Did the count of 13 metrics 
timeseries come from the inserts in the prior tests? I am trying to get my 
head around how the modified timestamp helps retrieve metrics. 
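
On the rollover question, a quick back-of-the-envelope check (assuming a multiplier of 1000 for TS_MULTIPLIER; treat that value as an assumption for illustration): an epoch-millis timestamp multiplied by 1000 still fits comfortably in a signed long.
{code}
public class TimestampMultiplierCheck {
  // Assumed value of TimestampGenerator#TS_MULTIPLIER, for illustration only.
  private static final long TS_MULTIPLIER = 1000L;

  public static void main(String[] args) {
    long nowMillis = System.currentTimeMillis();   // ~1.5e12 in 2017
    long supplemented = nowMillis * TS_MULTIPLIER; // ~1.5e15

    System.out.println("supplemented timestamp = " + supplemented);
    // Long.MAX_VALUE is ~9.2e18; overflow would need an epoch time of ~9.2e15 ms,
    // i.e. hundreds of thousands of years from now.
    System.out.println("remaining headroom    = " + (Long.MAX_VALUE - supplemented));
  }
}
{code}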



> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4455-YARN-5355.01.patch, 
> YARN-4455-YARN-5355.02.patch, YARN-4455-YARN-5355.03.patch
>
>







[jira] [Updated] (YARN-6628) Unexpected jackson-core-2.2.3 dependency introduced

2017-07-17 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated YARN-6628:
--
Attachment: YARN-6628.3-branch-2.8.patch

> Unexpected jackson-core-2.2.3 dependency introduced
> ---
>
> Key: YARN-6628
> URL: https://issues.apache.org/jira/browse/YARN-6628
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: YARN-6628.1.patch, YARN-6628.2-branch-2.8.patch, 
> YARN-6628.3-branch-2.8.patch
>
>
> The change in YARN-5894 caused jackson-core-2.2.3.jar to be added in 
> share/hadoop/yarn/lib/. This added dependency seems to be incompatible with 
> jackson-core-asl-1.9.13.jar which is also shipped as a dependency.  This new 
> jackson-core jar ends up breaking jobs that ran fine on 2.8.0.






[jira] [Commented] (YARN-6777) Support for ApplicationMasterService processing chain of interceptors

2017-07-17 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6777?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090777#comment-16090777
 ] 

Subru Krishnan commented on YARN-6777:
--

Thanks [~asuresh] for splitting the patch, it is a lot more understandable now.

I looked at the patch and have a few questions:
* In {{ApplicationMasterServiceInterceptor}}, I am not in favor of having 
_nextProcessor_ for each method, as they are not method-dependent. Can we have a 
simple *setNextProcessor* instead? Additionally, we should invoke it locally and 
then pass the call to the next interceptor.
* With the above changes: 
** {{ApplicationMasterServiceInterceptor}} can implement 
{{ApplicationMasterServiceProcessor}}.
** IIUC, the {{Processor}} is also redundant.
* In {{AMSProcessingChain}}, I don't grok why we need both _head_ and _root_.
* Nit: can you please add more code comments for readability? The class names 
are also a bit confusing :).
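
For illustration, a generic sketch of the chain style being suggested (the interface and method names below are hypothetical, not the patch's actual API): each processor does its local work and then delegates to the next processor registered through a single setNextProcessor.
{code}
public class ProcessingChainSketch {
  interface Processor {
    void setNextProcessor(Processor next);
    void allocate(String request);   // stand-in for the real AMS protocol methods
  }

  static class LoggingProcessor implements Processor {
    private Processor next;
    public void setNextProcessor(Processor next) { this.next = next; }
    public void allocate(String request) {
      System.out.println("intercepted: " + request);  // local work first...
      if (next != null) {
        next.allocate(request);                       // ...then hand off down the chain
      }
    }
  }

  public static void main(String[] args) {
    Processor head = new LoggingProcessor();
    Processor tail = new LoggingProcessor();
    head.setNextProcessor(tail);
    head.allocate("allocate-request-1");
  }
}
{code}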


> Support for ApplicationMasterService processing chain of interceptors
> -
>
> Key: YARN-6777
> URL: https://issues.apache.org/jira/browse/YARN-6777
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6777.001.patch, YARN-6777.002.patch, 
> YARN-6777.003.patch
>
>
> This JIRA extends the Processor introduced in YARN-6776 with a configurable 
> processing chain of interceptors.






[jira] [Commented] (YARN-6798) NM startup failure with old state store due to version mismatch

2017-07-17 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090778#comment-16090778
 ] 

Ray Chiang commented on YARN-6798:
--

+1

I'm going to commit this tomorrow unless I hear otherwise.

> NM startup failure with old state store due to version mismatch
> ---
>
> Key: YARN-6798
> URL: https://issues.apache.org/jira/browse/YARN-6798
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Ray Chiang
>Assignee: Botong Huang
> Attachments: YARN-6798.v1.patch, YARN-6798.v2.patch
>
>
> YARN-6703 rolled back the state store version number for the RM from 2.0 to 
> 1.4.
> YARN-6127 bumped the version for the NM to 3.0
> private static final Version CURRENT_VERSION_INFO = 
> Version.newInstance(3, 0);
> YARN-5049 bumped the version for the NM to 2.0
> private static final Version CURRENT_VERSION_INFO = 
> Version.newInstance(2, 0);
> During an upgrade, all NMs died after upgrading a C6 cluster from alpha2 to 
> alpha4.
> {noformat}
> 2017-07-07 15:48:17,259 FATAL 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting 
> NodeManager
> org.apache.hadoop.service.ServiceStateException: java.io.IOException: 
> Incompatible version for NM state: expecting NM state version 3.0, but 
> loading version 2.0
> at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:246)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:307)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:748)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:809)
> Caused by: java.io.IOException: Incompatible version for NM state: expecting 
> NM state version 3.0, but loading version 2.0
> at 
> org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.checkVersion(NMLeveldbStateStoreService.java:1454)
> at 
> org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.initStorage(NMLeveldbStateStoreService.java:1308)
> at 
> org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.serviceInit(NMStateStoreService.java:307)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> ... 5 more
> 2017-07-07 15:48:17,277 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager: SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down NodeManager at xxx.gce.cloudera.com/aa.bb.cc.dd
> /
> {noformat}
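
A simplified, hypothetical sketch of the kind of compatibility check that produces the error above (not the actual NMLeveldbStateStoreService code): state stores typically tolerate a differing minor version but refuse to load a store whose major version differs.
{code}
import java.io.IOException;

public class StateStoreVersionCheck {
  static void checkVersion(int loadedMajor, int loadedMinor,
                           int currentMajor, int currentMinor) throws IOException {
    if (loadedMajor == currentMajor) {
      // Minor-version differences are treated as compatible (upgrade in place).
      return;
    }
    throw new IOException("Incompatible version for NM state: expecting NM state version "
        + currentMajor + "." + currentMinor + ", but loading version "
        + loadedMajor + "." + loadedMinor);
  }

  public static void main(String[] args) throws IOException {
    checkVersion(2, 0, 3, 0);  // mirrors the upgrade scenario above: throws IOException
  }
}
{code}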






[jira] [Comment Edited] (YARN-6831) Miscellaneous refactoring changes of ContainScheduler

2017-07-17 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090609#comment-16090609
 ] 

Arun Suresh edited comment on YARN-6831 at 7/17/17 10:33 PM:
-

Thanks for raising this [~haibochen] / [~kasha]. Some thoughts:

bq. Why do we need maxOppQueueLength given queuingLimit?
So, maxOppQueueLength is more like an *active* limit. The CS 
(ContainerScheduler) will not admit any more containers than that value. While 
the queuingLimit is more *reactive* and dynamically calculated by the RM and 
passed down to the NM in a HB response. The RM constantly calculates the 
mean/median of the queueLengths on all nodes and it tells the NM to shed 
containers from the queue if it is too high. I agree that the 
*maxOppQueueLength* can probably be removed though. But given your observation 
in YARN-6706 that test cases depend on this, my opinion is that we will keep 
it, put a very high value by default, and mark it as VisibleForTesting 
only.

bq. Is there value in splitting runningContainers into runningGuaranteed and 
runningOpportunistic ?
Hmm… I was actually thinking of removing the *runningContainers* itself. It was 
introduced to keep track of all running containers (containers whose state is 
running) AND those that have been scheduled but not yet running. I think it may 
be better to encapsulate that as a proper container state, something like 
*SCHEDULED_TO_RUN* via a proper transition.
Adding more data structures might be problematic later on, since we can hit 
minor race conditions when transferring containers from runningGuaranteed to 
running Opportunistic (during promotion) and vice-versa (during demotion) if we 
are not careful about synchronization etc. Also, given the fact that a NM will 
not run more than say a couple of 100 containers, it might be better to just 
iterate over all the containers when the scheduler needs to make a decision.
Another problem with keeping a separate map is that during NM recovery we have to 
populate it specifically. We don’t do that for running containers now either 
– but if we removed the *runningContainers* map, we won't have to 
(we already have a state called *QUEUED* in the NMStateStore which can be used 
to set the correct state in the recovered container).

bq. getOpportunisticContainersStatus method implementation feels awkward..
Kind of agree with you there; I don’t recall exactly why we did it like that… 
I think it was to avoid creating a new instance of the status at every 
heartbeat. 

bq. Have we considered folding ContainerQueuingLimit class into this
My first instinct is to keep it separate. Don’t think we should mix the Queuing 
aspect of the Container Scheduler with the ExecutionType aspect. Also, one is 
part of the NM heartbeat request and the other comes back as response.



was (Author: asuresh):
Thanks for raising this [~haibochen] / [~kasha]. Some thoughts:

bq. Why do we need maxOppQueueLength given queuingLimit?
So, maxOppQueueLength is more like an *active* limit. The CS 
(ContainerScheduler) will not admit any more containers than that value. While 
the queuingLimit is more *reactive* and dynamically calculated by the RM and 
passed down to the NM in a HB response. The RM constantly calculates the 
mean/median of the queueLengths on all nodes and it tells the NM to shed 
containers from the queue if it is too high. I agree that the 
*maxOppQueueLength* can probably be removed though. But given your observation 
in YARN-6706 that test cases depends on this, my opinion is that we will keep 
it, and put a very high value by default - and mark it as VisibileForTesting 
only.

bq. Is there value in splitting runningContainers into runningGuaranteed and 
runningOpportunistic ?
Hmm… I was actually thinking of removing the *runningContainers* itself. It was 
introduced to keep track of all running containers (containers whose state is 
running) AND those that have been scheduled but not yet running. I think it may 
be better to encapsulate that as a proper container state, something like 
*SCHEDULED_TO_RUN* via a proper transition.
Adding more data structures might be problematic later on, since we can hit 
minor race conditions when transferring containers from runningGuaranteed to 
running Opportunistic (during promotion) and vice-versa (during demotion) if we 
are not careful about synchronization etc. Also, given the fact that a NM will 
not run more than say a couple of 100 containers, it might be better to just 
iterate over all the containers when the scheduler needs to make a decision.
Another problem with keeping a separate map is during NM recovery, we have to 
populate this specifically. we don’t do that for running containers now either 
– but I was think if we removed the *runningContainers* map, we wont have to 
(we already have a state called *QUEUED* in the NMStateStore which can be used 
to 

[jira] [Updated] (YARN-6834) A container request with only racks specified and relax locality set to false is never honoured

2017-07-17 Thread Muhammad Samir Khan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Muhammad Samir Khan updated YARN-6834:
--
Attachment: yarn-6834-unittest.patch

> A container request with only racks specified and relax locality set to false 
> is never honoured
> ---
>
> Key: YARN-6834
> URL: https://issues.apache.org/jira/browse/YARN-6834
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Reporter: Muhammad Samir Khan
> Attachments: yarn-6834-unittest.patch
>
>
> A patch for a unit test is attached to reproduce the issue. It creates a 
> container request with only racks specified (nodes=null) and relax locality 
> set to false. With the node-locality-delay conf set appropriately, we wait 
> indefinitely for a container allocation and the test will time out.
> My understanding of what causes this issue is as follows. The 
> RegularContainerAllocator delays a rack local allocation based on the 
> node-locality-delay parameter. This delay is based on missed opportunities. 
> However, the corresponding off-switch request is skipped but does not count 
> towards a missed opportunity (because relax locality is set to false). So the 
> allocator waits indefinitely.






[jira] [Created] (YARN-6834) A container request with only racks specified and relax locality set to false is never honoured

2017-07-17 Thread Muhammad Samir Khan (JIRA)
Muhammad Samir Khan created YARN-6834:
-

 Summary: A container request with only racks specified and relax 
locality set to false is never honoured
 Key: YARN-6834
 URL: https://issues.apache.org/jira/browse/YARN-6834
 Project: Hadoop YARN
  Issue Type: Bug
  Components: capacity scheduler
Reporter: Muhammad Samir Khan


A patch for a unit test is attached to reproduce the issue. It creates a 
container request with only racks specified (nodes=null) and relax locality set 
to false. With the node-locality-delay conf set appropriately, we wait 
indefinitely for a container allocation and the test will time out.

My understanding of what causes this issue is as follows. The 
RegularContainerAllocator delays a rack local allocation based on the 
node-locality-delay parameter. This delay is based on missed opportunities. 
However, the corresponding off-switch request is skipped but does not count 
towards a missed opportunity (because relax locality is set to false). So the 
allocator waits indefinitely.
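
For reference, a sketch of the kind of request the unit test exercises (the AMRMClient usage shown here is a plausible approximation, not the attached patch): racks only, no nodes, and relaxLocality set to false.
{code}
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.AMRMClient.ContainerRequest;

public class RackOnlyContainerRequest {
  public static void main(String[] args) {
    Resource capability = Resource.newInstance(1024, 1);

    // nodes == null, racks specified, relaxLocality == false: the allocator may
    // delay for rack locality, but the skipped off-switch request never counts
    // as a missed opportunity, which is the hang described above.
    ContainerRequest request = new ContainerRequest(
        capability,
        null,                           // no specific nodes
        new String[] {"/default-rack"}, // racks only
        Priority.newInstance(1),
        false);                         // relaxLocality

    System.out.println("request: " + request);
  }
}
{code}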






[jira] [Updated] (YARN-6612) Update fair scheduler policies to be aware of resource types

2017-07-17 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6612:
---
Attachment: YARN-6612.YARN-3926.001.patch

Here's an initial patch.  Comments welcome.

> Update fair scheduler policies to be aware of resource types
> 
>
> Key: YARN-6612
> URL: https://issues.apache.org/jira/browse/YARN-6612
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: YARN-3926
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: YARN-6612.YARN-3926.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6706) Refactor ContainerScheduler to make oversubscription change easier

2017-07-17 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090625#comment-16090625
 ] 

Hudson commented on YARN-6706:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12022 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12022/])
YARN-6706. Refactor ContainerScheduler to make oversubscription change (arun 
suresh: rev 5b007921cdf01ecc8ed97c164b7d327b8304c529)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/ContainerScheduler.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/TestContainerManagerRecovery.java
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/scheduler/TestContainerSchedulerQueuing.java


> Refactor ContainerScheduler to make oversubscription change easier
> --
>
> Key: YARN-6706
> URL: https://issues.apache.org/jira/browse/YARN-6706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6706.01.patch, YARN-6706-YARN-1011.00.patch, 
> YARN-6706-YARN-1011.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6831) Miscellaneous refactoring changes of ContainScheduler

2017-07-17 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090609#comment-16090609
 ] 

Arun Suresh commented on YARN-6831:
---

Thanks for raising this [~haibochen] / [~kasha]. Some thoughts:

bq. Why do we need maxOppQueueLength given queuingLimit?
So, maxOppQueueLength is more like an *active* limit: the CS 
(ContainerScheduler) will not admit any more containers than that value. The 
queuingLimit, on the other hand, is more *reactive*: it is dynamically 
calculated by the RM and passed down to the NM in a heartbeat response. The RM 
constantly calculates the mean/median of the queue lengths on all nodes and 
tells the NM to shed containers from its queue if it is too long. I agree that 
*maxOppQueueLength* can probably be removed, though. But given your observation 
in YARN-6706 that test cases depend on it, my opinion is that we keep it, give 
it a very high default value, and mark it as VisibleForTesting only.
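
A compact sketch of how the two limits could interact, as described above 
(illustrative only; names and structure are assumptions, not the actual 
ContainerScheduler code):
{noformat}
import java.util.ArrayDeque;
import java.util.Deque;

public class QueueLimitsIllustration {
  private final Deque<String> oppQueue = new ArrayDeque<>();
  private final int maxOppQueueLength;   // static "active" admission cap

  public QueueLimitsIllustration(int maxOppQueueLength) {
    this.maxOppQueueLength = maxOppQueueLength;
  }

  boolean admit(String containerId) {
    if (oppQueue.size() >= maxOppQueueLength) {
      return false;                      // actively refuse further containers
    }
    return oppQueue.offerLast(containerId);
  }

  void onHeartbeatResponse(int queuingLimit) {
    // "reactive" limit computed by the RM and pushed down in the HB response:
    // shed queued containers until the queue is within the limit
    while (oppQueue.size() > queuingLimit) {
      System.out.println("shedding queued container " + oppQueue.pollLast());
    }
  }

  public static void main(String[] args) {
    QueueLimitsIllustration cs = new QueueLimitsIllustration(100);
    for (int i = 0; i < 10; i++) {
      cs.admit("container_" + i);
    }
    cs.onHeartbeatResponse(5);           // RM decides 5 is the current limit
  }
}
{noformat}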

bq. Is there value in splitting runningContainers into runningGuaranteed and 
runningOpportunistic ?
Hmm… I was actually thinking of removing the *runningContainers* map itself. It 
was introduced to keep track of all running containers (containers whose state 
is running) AND those that have been scheduled but are not yet running. I think 
it may be better to encapsulate that as a proper container state, something 
like *SCHEDULED_TO_RUN*, via a proper transition.
Adding more data structures might be problematic later on, since we can hit 
minor race conditions when transferring containers from runningGuaranteed to 
runningOpportunistic (during promotion) and vice versa (during demotion) if we 
are not careful about synchronization etc. Also, given that an NM will not run 
more than, say, a couple of hundred containers, it might be better to just 
iterate over all the containers when the scheduler needs to make a decision.
Another problem with keeping a separate map is that during NM recovery we would 
have to populate it specifically. We don't do that for running containers now 
either, but I was thinking that if we removed the *runningContainers* map, we 
won't have to (we already have a state called *QUEUED* in the NMStateStore 
which can be used to set the correct state in the recovered container).
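
A minimal sketch of the iterate-and-classify idea (illustrative only; the 
container type below is a stand-in, not the NM's actual Container class):
{noformat}
import java.util.Arrays;
import java.util.List;

public class IterateAndClassifyIllustration {
  enum ExecutionType { GUARANTEED, OPPORTUNISTIC }

  static final class TrackedContainer {
    final String id;
    final ExecutionType type;
    final boolean running;
    TrackedContainer(String id, ExecutionType type, boolean running) {
      this.id = id;
      this.type = type;
      this.running = running;
    }
  }

  // Classify on demand instead of maintaining separate maps; an NM tracks at
  // most a few hundred containers, so a full scan per decision is cheap.
  static int countRunning(List<TrackedContainer> all, ExecutionType wanted) {
    int count = 0;
    for (TrackedContainer c : all) {
      if (c.running && c.type == wanted) {
        count++;
      }
    }
    return count;
  }

  public static void main(String[] args) {
    List<TrackedContainer> all = Arrays.asList(
        new TrackedContainer("c1", ExecutionType.GUARANTEED, true),
        new TrackedContainer("c2", ExecutionType.OPPORTUNISTIC, true),
        new TrackedContainer("c3", ExecutionType.OPPORTUNISTIC, false));
    System.out.println(countRunning(all, ExecutionType.OPPORTUNISTIC)); // 1
  }
}
{noformat}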

bq. getOpportunisticContainersStatus method implementation feels awkward..
Kind of agree with you there; I don't recall exactly why we did it like that… I 
think it was to avoid creating a new instance of the status object at every 
heartbeat. 

bq. Have we considered folding ContainerQueuingLimit class into this
My first instinct is to keep it separate. I don't think we should mix the 
queuing aspect of the ContainerScheduler with the ExecutionType aspect. Also, 
one is part of the NM heartbeat request and the other comes back in the 
response.


> Miscellaneous refactoring changes of ContainScheduler 
> --
>
> Key: YARN-6831
> URL: https://issues.apache.org/jira/browse/YARN-6831
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
> ContainerScheduler
> *Make ResourceUtilizationTracker pluggable. That way, we could use a 
> different tracker when oversubscription is enabled.
> *ContainerScheduler
>   ##Why do we need maxOppQueueLength given queuingLimit?
>   ##Is there value in splitting runningContainers into runningGuaranteed and 
> runningOpportunistic?
>   ##getOpportunisticContainersStatus method implementation feels awkward. How 
> about capturing the state in the field here, and have metrics etc. pull from 
> here?
>   ##startContainersFromQueue: Local variable resourcesAvailable is unnecessary
> *OpportunisticContainersStatus
>   ##Let us clearly differentiate between allocated, used and utilized. Maybe, 
> we should rename current Used methods to Allocated?
>   ##I prefer either full name Opportunistic (in method) or Opp (shortest name 
> that makes sense). Opport is neither short nor fully descriptive.
>   ##Have we considered folding ContainerQueuingLimit class into this?
> We decided to move the issues into this follow up jira to keep YARN-6706 
> moving forward to unblock oversubscription work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-6706) Refactor ContainerScheduler to make oversubscription change easier

2017-07-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6706:
-
Comment: was deleted

(was: Can you also cherry-pick this into YARN-1011 branch?)

> Refactor ContainerScheduler to make oversubscription change easier
> --
>
> Key: YARN-6706
> URL: https://issues.apache.org/jira/browse/YARN-6706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6706.01.patch, YARN-6706-YARN-1011.00.patch, 
> YARN-6706-YARN-1011.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6706) Refactor ContainerScheduler to make oversubscription change easier

2017-07-17 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090593#comment-16090593
 ] 

Haibo Chen commented on YARN-6706:
--

Can you also cherry-pick this into the YARN-1011 branch?

> Refactor ContainerScheduler to make oversubscription change easier
> --
>
> Key: YARN-6706
> URL: https://issues.apache.org/jira/browse/YARN-6706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6706.01.patch, YARN-6706-YARN-1011.00.patch, 
> YARN-6706-YARN-1011.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6628) Unexpected jackson-core-2.2.3 dependency introduced

2017-07-17 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated YARN-6628:
--
Attachment: YARN-6628.2-branch-2.8.patch

> Unexpected jackson-core-2.2.3 dependency introduced
> ---
>
> Key: YARN-6628
> URL: https://issues.apache.org/jira/browse/YARN-6628
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.1
>Reporter: Jason Lowe
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: YARN-6628.1.patch, YARN-6628.2-branch-2.8.patch
>
>
> The change in YARN-5894 caused jackson-core-2.2.3.jar to be added in 
> share/hadoop/yarn/lib/. This added dependency seems to be incompatible with 
> jackson-core-asl-1.9.13.jar, which is also shipped as a dependency. The new 
> jackson-core jar ends up breaking jobs that ran fine on 2.8.0.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6833) On branch-2 ResourceManager failed to start

2017-07-17 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-6833:
-
Priority: Blocker  (was: Major)

> On branch-2 ResourceManager failed to start
> ---
>
> Key: YARN-6833
> URL: https://issues.apache.org/jira/browse/YARN-6833
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.9
>Reporter: Junping Du
>Priority: Blocker
>
> When built against branch-2, the ResourceManager fails to start because of 
> the following failure:
> {noformat}
> 2017-07-16 23:33:15,688 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer.setMonitorInterval(I)V
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer.serviceInit(ContainerAllocationExpirer.java:44)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:684)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1005)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:285)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1283)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6706) Refactor ContainerScheduler to make oversubscription change easier

2017-07-17 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090587#comment-16090587
 ] 

Haibo Chen commented on YARN-6706:
--

Thanks [~asuresh] for the reviews and commit.

> Refactor ContainerScheduler to make oversubscription change easier
> --
>
> Key: YARN-6706
> URL: https://issues.apache.org/jira/browse/YARN-6706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6706.01.patch, YARN-6706-YARN-1011.00.patch, 
> YARN-6706-YARN-1011.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Moved] (YARN-6833) On branch-2 ResourceManager failed to start

2017-07-17 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du moved MAPREDUCE-6915 to YARN-6833:
-

Affects Version/s: (was: 2.9)
   2.9
 Target Version/s:   (was: 2.9.0)
  Component/s: (was: resourcemanager)
   resourcemanager
  Key: YARN-6833  (was: MAPREDUCE-6915)
  Project: Hadoop YARN  (was: Hadoop Map/Reduce)

> On branch-2 ResourceManager failed to start
> ---
>
> Key: YARN-6833
> URL: https://issues.apache.org/jira/browse/YARN-6833
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.9
>Reporter: Junping Du
>
> When built against branch-2, the ResourceManager fails to start because of 
> the following failure:
> {noformat}
> 2017-07-16 23:33:15,688 FATAL 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
> ResourceManager
> java.lang.NoSuchMethodError: 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer.setMonitorInterval(I)V
> at 
> org.apache.hadoop.yarn.server.resourcemanager.rmcontainer.ContainerAllocationExpirer.serviceInit(ContainerAllocationExpirer.java:44)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:684)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1005)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:285)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1283)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6706) Refactor ContainerScheduler to make oversubscription change easier

2017-07-17 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090580#comment-16090580
 ] 

Arun Suresh commented on YARN-6706:
---

[~kkaranasos].. Apologies, but I think you sent the message just as I was 
committing!
As [~haibochen] mentioned, maybe you can post your comments to YARN-6831 and we 
can incorporate the changes there.

> Refactor ContainerScheduler to make oversubscription change easier
> --
>
> Key: YARN-6706
> URL: https://issues.apache.org/jira/browse/YARN-6706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6706.01.patch, YARN-6706-YARN-1011.00.patch, 
> YARN-6706-YARN-1011.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6832) Tests use assertTrue(....equals(...)) instead of assertEquals()

2017-07-17 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090578#comment-16090578
 ] 

Daniel Templeton commented on YARN-6832:


Findbugs issues are unrelated.

> Tests use assertTrue(equals(...)) instead of assertEquals()
> ---
>
> Key: YARN-6832
> URL: https://issues.apache.org/jira/browse/YARN-6832
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.8.1, 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: YARN-6832.001.patch
>
>
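
For context, the test pattern this jira cleans up, as a minimal hypothetical 
JUnit example (not code taken from the attached patch):
{noformat}
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class AssertStyleExample {
  @Test
  public void testQueueName() {
    String expected = "root.default";
    String actual = "root.default";

    // Pattern being cleaned up: on failure this only reports "expected true".
    assertTrue(expected.equals(actual));

    // Preferred: on failure this reports both expected and actual values.
    assertEquals(expected, actual);
  }
}
{noformat}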




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6832) Tests use assertTrue(....equals(...)) instead of assertEquals()

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090575#comment-16090575
 ] 

Hadoop QA commented on YARN-6832:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
45s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 487 unchanged - 6 fixed = 487 total (was 493) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
33s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 
17s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 44m 
39s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
11s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
58s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}144m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (YARN-3254) HealthReport should include disk full information

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090574#comment-16090574
 ] 

Hadoop QA commented on YARN-3254:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 18s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 3 new + 35 unchanged - 0 fixed = 38 total (was 35) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
53s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-3254 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877669/YARN-3254-003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d361379b9de5 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b0e78ae |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16471/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16471/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16471/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16471/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   

[jira] [Commented] (YARN-6706) Refactor ContainerScheduler to make oversubscription change easier

2017-07-17 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090573#comment-16090573
 ] 

Haibo Chen commented on YARN-6706:
--

Thanks [~kkaranasos] for your upcoming review! FYI, I have created YARN-6831 to 
address Karthik's comments, since it is not strictly necessary to have them in 
this jira.

> Refactor ContainerScheduler to make oversubscription change easier
> --
>
> Key: YARN-6706
> URL: https://issues.apache.org/jira/browse/YARN-6706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6706.01.patch, YARN-6706-YARN-1011.00.patch, 
> YARN-6706-YARN-1011.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6831) Miscellaneous refactoring changes of ContainScheduler

2017-07-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6831:
-
Description: 
While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
ContainerScheduler

*Make ResourceUtilizationTracker pluggable. That way, we could use a different 
tracker when oversubscription is enabled.

*ContainerScheduler
  ##Why do we need maxOppQueueLength given queuingLimit?
  ##Is there value in splitting runningContainers into runningGuaranteed and 
runningOpportunistic?
  ##getOpportunisticContainersStatus method implementation feels awkward. How 
about capturing the state in the field here, and have metrics etc. pull from 
here?
  ##startContainersFromQueue: Local variable resourcesAvailable is unnecessary

#OpportunisticContainersStatus
  ##Let us clearly differentiate between allocated, used and utilized. Maybe, 
we should rename current Used methods to Allocated?
  ##I prefer either full name Opportunistic (in method) or Opp (shortest name 
that makes sense). Opport is neither short nor fully descriptive.
  ##Have we considered folding ContainerQueuingLimit class into this?

We decided to move the issues into this follow up jira to keep YARN-6706 moving 
forward to unblock oversubscription work.

  was:
While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
ContainerScheduler

#Make ResourceUtilizationTracker pluggable. That way, we could use a different 
tracker when oversubscription is enabled.

#ContainerScheduler
  ##Why do we need maxOppQueueLength given queuingLimit?
  ##Is there value in splitting runningContainers into runningGuaranteed and 
runningOpportunistic?
  ##getOpportunisticContainersStatus method implementation feels awkward. How 
about capturing the state in the field here, and have metrics etc. pull from 
here?
  ##startContainersFromQueue: Local variable resourcesAvailable is unnecessary

#OpportunisticContainersStatus
  ##Let us clearly differentiate between allocated, used and utilized. Maybe, 
we should rename current Used methods to Allocated?
  ##I prefer either full name Opportunistic (in method) or Opp (shortest name 
that makes sense). Opport is neither short nor fully descriptive.
  ##Have we considered folding ContainerQueuingLimit class into this?

We decided to move the issues into this follow up jira to keep YARN-6706 moving 
forward to unblock oversubscription work.


> Miscellaneous refactoring changes of ContainScheduler 
> --
>
> Key: YARN-6831
> URL: https://issues.apache.org/jira/browse/YARN-6831
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
> ContainerScheduler
> *Make ResourceUtilizationTracker pluggable. That way, we could use a 
> different tracker when oversubscription is enabled.
> *ContainerScheduler
>   ##Why do we need maxOppQueueLength given queuingLimit?
>   ##Is there value in splitting runningContainers into runningGuaranteed and 
> runningOpportunistic?
>   ##getOpportunisticContainersStatus method implementation feels awkward. How 
> about capturing the state in the field here, and have metrics etc. pull from 
> here?
>   ##startContainersFromQueue: Local variable resourcesAvailable is unnecessary
> #OpportunisticContainersStatus
>   ##Let us clearly differentiate between allocated, used and utilized. Maybe, 
> we should rename current Used methods to Allocated?
>   ##I prefer either full name Opportunistic (in method) or Opp (shortest name 
> that makes sense). Opport is neither short nor fully descriptive.
>   ##Have we considered folding ContainerQueuingLimit class into this?
> We decided to move the issues into this follow up jira to keep YARN-6706 
> moving forward to unblock oversubscription work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6831) Miscellaneous refactoring changes of ContainScheduler

2017-07-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6831:
-
Description: 
While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
ContainerScheduler

*Make ResourceUtilizationTracker pluggable. That way, we could use a different 
tracker when oversubscription is enabled.

*ContainerScheduler
  ##Why do we need maxOppQueueLength given queuingLimit?
  ##Is there value in splitting runningContainers into runningGuaranteed and 
runningOpportunistic?
  ##getOpportunisticContainersStatus method implementation feels awkward. How 
about capturing the state in the field here, and have metrics etc. pull from 
here?
  ##startContainersFromQueue: Local variable resourcesAvailable is unnecessary

*OpportunisticContainersStatus
  ##Let us clearly differentiate between allocated, used and utilized. Maybe, 
we should rename current Used methods to Allocated?
  ##I prefer either full name Opportunistic (in method) or Opp (shortest name 
that makes sense). Opport is neither short nor fully descriptive.
  ##Have we considered folding ContainerQueuingLimit class into this?

We decided to move the issues into this follow up jira to keep YARN-6706 moving 
forward to unblock oversubscription work.

  was:
While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
ContainerScheduler

*Make ResourceUtilizationTracker pluggable. That way, we could use a different 
tracker when oversubscription is enabled.

*ContainerScheduler
  ##Why do we need maxOppQueueLength given queuingLimit?
  ##Is there value in splitting runningContainers into runningGuaranteed and 
runningOpportunistic?
  ##getOpportunisticContainersStatus method implementation feels awkward. How 
about capturing the state in the field here, and have metrics etc. pull from 
here?
  ##startContainersFromQueue: Local variable resourcesAvailable is unnecessary

#OpportunisticContainersStatus
  ##Let us clearly differentiate between allocated, used and utilized. Maybe, 
we should rename current Used methods to Allocated?
  ##I prefer either full name Opportunistic (in method) or Opp (shortest name 
that makes sense). Opport is neither short nor fully descriptive.
  ##Have we considered folding ContainerQueuingLimit class into this?

We decided to move the issues into this follow up jira to keep YARN-6706 moving 
forward to unblock oversubscription work.


> Miscellaneous refactoring changes of ContainScheduler 
> --
>
> Key: YARN-6831
> URL: https://issues.apache.org/jira/browse/YARN-6831
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
> ContainerScheduler
> *Make ResourceUtilizationTracker pluggable. That way, we could use a 
> different tracker when oversubscription is enabled.
> *ContainerScheduler
>   ##Why do we need maxOppQueueLength given queuingLimit?
>   ##Is there value in splitting runningContainers into runningGuaranteed and 
> runningOpportunistic?
>   ##getOpportunisticContainersStatus method implementation feels awkward. How 
> about capturing the state in the field here, and have metrics etc. pull from 
> here?
>   ##startContainersFromQueue: Local variable resourcesAvailable is unnecessary
> *OpportunisticContainersStatus
>   ##Let us clearly differentiate between allocated, used and utilized. Maybe, 
> we should rename current Used methods to Allocated?
>   ##I prefer either full name Opportunistic (in method) or Opp (shortest name 
> that makes sense). Opport is neither short nor fully descriptive.
>   ##Have we considered folding ContainerQueuingLimit class into this?
> We decided to move the issues into this follow up jira to keep YARN-6706 
> moving forward to unblock oversubscription work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6831) Miscellaneous refactoring changes of ContainScheduler

2017-07-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6831:
-
Description: 
While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
ContainerScheduler

#Make ResourceUtilizationTracker pluggable. That way, we could use a different 
tracker when oversubscription is enabled.

#ContainerScheduler
  ##Why do we need maxOppQueueLength given queuingLimit?
  ##Is there value in splitting runningContainers into runningGuaranteed and 
runningOpportunistic?
  ##getOpportunisticContainersStatus method implementation feels awkward. How 
about capturing the state in the field here, and have metrics etc. pull from 
here?
  ##startContainersFromQueue: Local variable resourcesAvailable is unnecessary

#OpportunisticContainersStatus
  ##Let us clearly differentiate between allocated, used and utilized. Maybe, 
we should rename current Used methods to Allocated?
  ##I prefer either full name Opportunistic (in method) or Opp (shortest name 
that makes sense). Opport is neither short nor fully descriptive.
  ##Have we considered folding ContainerQueuingLimit class into this?

We decided to move the issues into this follow up jira to keep YARN-6706 moving 
forward to unblock oversubscription work.

  was:
While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
ContainerScheduler

#Make ResourceUtilizationTracker pluggable. That way, we could use a different 
tracker when oversubscription is enabled.
#ContainerScheduler
##Why do we need maxOppQueueLength given queuingLimit?
##Is there value in splitting runningContainers into runningGuaranteed and 
runningOpportunistic?
##getOpportunisticContainersStatus method implementation feels awkward. How 
about capturing the state in the field here, and have metrics etc. pull from 
here?
##startContainersFromQueue: Local variable resourcesAvailable is unnecessary
#OpportunisticContainersStatus
##Let us clearly differentiate between allocated, used and utilized. Maybe, we 
should rename current Used methods to Allocated?
##I prefer either full name Opportunistic (in method) or Opp (shortest name 
that makes sense). Opport is neither short nor fully descriptive.
##Have we considered folding ContainerQueuingLimit class into this?

We decided to move the issues into this follow up jira to keep YARN-6706 moving 
forward to unblock oversubscription work.


> Miscellaneous refactoring changes of ContainScheduler 
> --
>
> Key: YARN-6831
> URL: https://issues.apache.org/jira/browse/YARN-6831
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
> ContainerScheduler
> #Make ResourceUtilizationTracker pluggable. That way, we could use a 
> different tracker when oversubscription is enabled.
> #ContainerScheduler
>   ##Why do we need maxOppQueueLength given queuingLimit?
>   ##Is there value in splitting runningContainers into runningGuaranteed and 
> runningOpportunistic?
>   ##getOpportunisticContainersStatus method implementation feels awkward. How 
> about capturing the state in the field here, and have metrics etc. pull from 
> here?
>   ##startContainersFromQueue: Local variable resourcesAvailable is unnecessary
> #OpportunisticContainersStatus
>   ##Let us clearly differentiate between allocated, used and utilized. Maybe, 
> we should rename current Used methods to Allocated?
>   ##I prefer either full name Opportunistic (in method) or Opp (shortest name 
> that makes sense). Opport is neither short nor fully descriptive.
>   ##Have we considered folding ContainerQueuingLimit class into this?
> We decided to move the issues into this follow up jira to keep YARN-6706 
> moving forward to unblock oversubscription work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6831) Miscellaneous refactoring changes of ContainScheduler

2017-07-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6831:
-
Description: 
While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
ContainerScheduler

#Make ResourceUtilizationTracker pluggable. That way, we could use a different 
tracker when oversubscription is enabled.
#ContainerScheduler
##Why do we need maxOppQueueLength given queuingLimit?
##Is there value in splitting runningContainers into runningGuaranteed and 
runningOpportunistic?
##getOpportunisticContainersStatus method implementation feels awkward. How 
about capturing the state in the field here, and have metrics etc. pull from 
here?
##startContainersFromQueue: Local variable resourcesAvailable is unnecessary
#OpportunisticContainersStatus
##Let us clearly differentiate between allocated, used and utilized. Maybe, we 
should rename current Used methods to Allocated?
##I prefer either full name Opportunistic (in method) or Opp (shortest name 
that makes sense). Opport is neither short nor fully descriptive.
##Have we considered folding ContainerQueuingLimit class into this?

We decided to move the issues into this follow up jira to keep YARN-6706 moving 
forward to unblock oversubscription work.

  was:
While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
ContainerScheduler

"
#Make ResourceUtilizationTracker pluggable. That way, we could use a different 
tracker when oversubscription is enabled.
#ContainerScheduler
##Why do we need maxOppQueueLength given queuingLimit?
##Is there value in splitting runningContainers into runningGuaranteed and 
runningOpportunistic?
##getOpportunisticContainersStatus method implementation feels awkward. How 
about capturing the state in the field here, and have metrics etc. pull from 
here?
##startContainersFromQueue: Local variable resourcesAvailable is unnecessary
#OpportunisticContainersStatus
##Let us clearly differentiate between allocated, used and utilized. Maybe, we 
should rename current Used methods to Allocated?
##I prefer either full name Opportunistic (in method) or Opp (shortest name 
that makes sense). Opport is neither short nor fully descriptive.
##Have we considered folding ContainerQueuingLimit class into this?
"

We decided to move the issues into this follow up jira to keep YARN-6706 moving 
forward to unblock oversubscription work.


> Miscellaneous refactoring changes of ContainScheduler 
> --
>
> Key: YARN-6831
> URL: https://issues.apache.org/jira/browse/YARN-6831
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
> ContainerScheduler
> #Make ResourceUtilizationTracker pluggable. That way, we could use a 
> different tracker when oversubscription is enabled.
> #ContainerScheduler
> ##Why do we need maxOppQueueLength given queuingLimit?
> ##Is there value in splitting runningContainers into runningGuaranteed and 
> runningOpportunistic?
> ##getOpportunisticContainersStatus method implementation feels awkward. How 
> about capturing the state in the field here, and have metrics etc. pull from 
> here?
> ##startContainersFromQueue: Local variable resourcesAvailable is unnecessary
> #OpportunisticContainersStatus
> ##Let us clearly differentiate between allocated, used and utilized. Maybe, 
> we should rename current Used methods to Allocated?
> ##I prefer either full name Opportunistic (in method) or Opp (shortest name 
> that makes sense). Opport is neither short nor fully descriptive.
> ##Have we considered folding ContainerQueuingLimit class into this?
> We decided to move the issues into this follow up jira to keep YARN-6706 
> moving forward to unblock oversubscription work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6831) Miscellaneous refactoring changes of ContainScheduler

2017-07-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6831:
-
Description: 
While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
ContainerScheduler

"
#Make ResourceUtilizationTracker pluggable. That way, we could use a different 
tracker when oversubscription is enabled.
#ContainerScheduler
##Why do we need maxOppQueueLength given queuingLimit?
##Is there value in splitting runningContainers into runningGuaranteed and 
runningOpportunistic?
##getOpportunisticContainersStatus method implementation feels awkward. How 
about capturing the state in the field here, and have metrics etc. pull from 
here?
##startContainersFromQueue: Local variable resourcesAvailable is unnecessary
#OpportunisticContainersStatus
##Let us clearly differentiate between allocated, used and utilized. Maybe, we 
should rename current Used methods to Allocated?
##I prefer either full name Opportunistic (in method) or Opp (shortest name 
that makes sense). Opport is neither short nor fully descriptive.
##Have we considered folding ContainerQueuingLimit class into this?
"

We decided to move the issues into this follow up jira to keep YARN-6706 moving 
forward to unblock oversubscription work.

> Miscellaneous refactoring changes of ContainScheduler 
> --
>
> Key: YARN-6831
> URL: https://issues.apache.org/jira/browse/YARN-6831
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>
> While reviewing YARN-6706, Karthik pointed out a few issues for improvement in 
> ContainerScheduler
> "
> #Make ResourceUtilizationTracker pluggable. That way, we could use a 
> different tracker when oversubscription is enabled.
> #ContainerScheduler
> ##Why do we need maxOppQueueLength given queuingLimit?
> ##Is there value in splitting runningContainers into runningGuaranteed and 
> runningOpportunistic?
> ##getOpportunisticContainersStatus method implementation feels awkward. How 
> about capturing the state in the field here, and have metrics etc. pull from 
> here?
> ##startContainersFromQueue: Local variable resourcesAvailable is unnecessary
> #OpportunisticContainersStatus
> ##Let us clearly differentiate between allocated, used and utilized. Maybe, 
> we should rename current Used methods to Allocated?
> ##I prefer either full name Opportunistic (in method) or Opp (shortest name 
> that makes sense). Opport is neither short nor fully descriptive.
> ##Have we considered folding ContainerQueuingLimit class into this?
> "
> We decided to move the issues into this follow up jira to keep YARN-6706 
> moving forward to unblock oversubscription work.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6706) Refactor ContainerScheduler to make oversubscription change easier

2017-07-17 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090555#comment-16090555
 ] 

Konstantinos Karanasos commented on YARN-6706:
--

Hi guys, I am back. If it is possible, please wait one more day so that I
can give it a look as well. Thanks!


-- 
Konstantinos


> Refactor ContainerScheduler to make oversubscription change easier
> --
>
> Key: YARN-6706
> URL: https://issues.apache.org/jira/browse/YARN-6706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6706.01.patch, YARN-6706-YARN-1011.00.patch, 
> YARN-6706-YARN-1011.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6706) Refactor ContainerScheduler to make oversubscription change easier

2017-07-17 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090544#comment-16090544
 ] 

Arun Suresh commented on YARN-6706:
---

Thanks Haibo.
+1 for the latest patch; I will check it in shortly.

> Refactor ContainerScheduler to make oversubscription change easier
> --
>
> Key: YARN-6706
> URL: https://issues.apache.org/jira/browse/YARN-6706
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-alpha3
>Reporter: Haibo Chen
>Assignee: Haibo Chen
> Attachments: YARN-6706.01.patch, YARN-6706-YARN-1011.00.patch, 
> YARN-6706-YARN-1011.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5412) Create a proxy chain for ResourceManager REST API in the Router

2017-07-17 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090538#comment-16090538
 ] 

Carlo Curino commented on YARN-5412:



[~giovanni.fumarola], thanks for addressing the feedback. Here are some more 
comments, based on your latest patch.

# {{hadoop-yarn-server-router/pom.xml}}: can you validate that the list you add 
is minimal and as narrowly scoped as possible?
# I think that in {{DefaultRequestInterceptorREST}} the implementations of 
{{getNodes}}, {{getLabelsToNodes}}, {{replaceLabelsOnNode}} don't forward all 
the inputs correctly.
# Related to the above, your tests should be tighter and attempt to catch these 
things.
# Add comments for the ones not needed as part of HSR.
# Why does {{DefaultRequestInterceptorREST}} not implement {{getAppAttempt, 
getContainers, getContainere}}?
# {{RouterWebApp}}: what happens if the router is not initialized?
# {{RouterWebServiceUtil}} (89): shouldn't you throw an exception in this case?
# {{RouterWebServiceUtil}} (105): is 200 the only "correct" code ever returned? 
(What about 201/202/204...) Maybe just check that it is in the 2XX series, as 
in the sketch after this list.
# {{RouterWebServices.init()}}: the init name is a bit misleading, as it looks 
more like a "clearResponse". Also, is it enough to set the content type to null 
in order to clear it?
# {{TestRouterWebServicesREST}} seems to still have a large amount of 
redundancy in the code; the same "generics" tricks you played in 
{{DefaultRequestInterceptorREST}} should apply here as well.
# {{TestRouterWebServicesREST}} seems to mostly (only?) test correct paths. It 
would be good to also verify wrong/null inputs and wrong paths, validating that 
the system catches and throws correctly in those cases.
# {{TestRouterWebServicesREST}}: it should be easy to make this test 
parametric, and have it run with both .json and .xml formats (broadening the 
coverage with little work).

# Please comment on / address the Yetus issues.
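
A minimal sketch of the status-family check suggested in point 8 (illustrative 
only; names are assumptions, not the actual RouterWebServiceUtil code):
{noformat}
// Accept any 2xx response instead of only 200.
public final class StatusFamilyCheck {
  private StatusFamilyCheck() {
  }

  public static boolean isSuccess(int statusCode) {
    // 200, 201, 202, 204, ... all fall in the 2xx family.
    return statusCode / 100 == 2;
  }

  public static void main(String[] args) {
    System.out.println(isSuccess(200)); // true
    System.out.println(isSuccess(204)); // true
    System.out.println(isSuccess(404)); // false
  }
}
{noformat}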

> Create a proxy chain for ResourceManager REST API in the Router
> ---
>
> Key: YARN-5412
> URL: https://issues.apache.org/jira/browse/YARN-5412
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5412-YARN-2915.1.patch, 
> YARN-5412-YARN-2915.2.patch, YARN-5412-YARN-2915.3.patch, 
> YARN-5412-YARN-2915.4.patch, YARN-5412-YARN-2915.proto.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a proxy for the ResourceManager REST API in 
> the Router. This provides a placeholder for:
> 1) throttling mis-behaving clients (YARN-1546)
> 3) mask the access to multiple RMs (YARN-3659)
> We are planning to follow the interceptor pattern like we did in YARN-2884 to 
> generalize the approach and have only dynamic coupling for Federation.
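
A rough sketch of the interceptor-chain idea referenced above (hypothetical 
interface and class names for illustration; not the actual Router classes):
{noformat}
// Each interceptor handles a REST call and can delegate to the next one in
// the chain (e.g. throttling -> routing -> default forwarding to an RM).
public class InterceptorChainIllustration {

  interface RestRequestInterceptor {
    void setNext(RestRequestInterceptor next);
    String getApps(String userQuery);
  }

  static class ThrottlingInterceptor implements RestRequestInterceptor {
    private RestRequestInterceptor next;
    public void setNext(RestRequestInterceptor next) { this.next = next; }
    public String getApps(String userQuery) {
      // placeholder for per-client throttling checks before delegating
      return next.getApps(userQuery);
    }
  }

  static class DefaultForwardingInterceptor implements RestRequestInterceptor {
    public void setNext(RestRequestInterceptor next) { /* end of chain */ }
    public String getApps(String userQuery) {
      return "forwarded to RM REST API: " + userQuery;
    }
  }

  public static void main(String[] args) {
    RestRequestInterceptor chain = new ThrottlingInterceptor();
    chain.setNext(new DefaultForwardingInterceptor());
    System.out.println(chain.getApps("state=RUNNING"));
  }
}
{noformat}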



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3254) HealthReport should include disk full information

2017-07-17 Thread Suma Shivaprasad (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090508#comment-16090508
 ] 

Suma Shivaprasad commented on YARN-3254:


Attached a patch which bifurcates the disk health failure report into failed 
vs. errored disks, information that is already available as part of the 
existing disk health checks.

> HealthReport should include disk full information
> -
>
> Key: YARN-3254
> URL: https://issues.apache.org/jira/browse/YARN-3254
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Akira Ajisaka
>Assignee: Suma Shivaprasad
> Attachments: Screen Shot 2015-02-24 at 17.57.39.png, Screen Shot 
> 2015-02-25 at 14.38.10.png, YARN-3254-001.patch, YARN-3254-002.patch, 
> YARN-3254-003.patch
>
>
> When a NodeManager's local disk gets almost full, the NodeManager sends a 
> health report to the ResourceManager saying "local/log dir is bad", and the 
> message is displayed on the ResourceManager Web UI. It's difficult for users 
> to determine why the dir is bad.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-3254) HealthReport should include disk full information

2017-07-17 Thread Suma Shivaprasad (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suma Shivaprasad updated YARN-3254:
---
Attachment: YARN-3254-003.patch

> HealthReport should include disk full information
> -
>
> Key: YARN-3254
> URL: https://issues.apache.org/jira/browse/YARN-3254
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.6.0
>Reporter: Akira Ajisaka
>Assignee: Suma Shivaprasad
> Attachments: Screen Shot 2015-02-24 at 17.57.39.png, Screen Shot 
> 2015-02-25 at 14.38.10.png, YARN-3254-001.patch, YARN-3254-002.patch, 
> YARN-3254-003.patch
>
>
> When a NodeManager's local disk gets almost full, the NodeManager sends a 
> health report to the ResourceManager saying "local/log dir is bad", and the 
> message is displayed on the ResourceManager Web UI. It's difficult for users 
> to determine why the dir is bad.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090475#comment-16090475
 ] 

Hadoop QA commented on YARN-6130:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
29s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
4s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m  
4s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
48s{color} | {color:red} 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app in 
YARN-5355 has 3 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
39s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client in 
YARN-5355 has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
3s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
YARN-5355 has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
50s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in YARN-5355 has 5 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
4s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-5355 has 8 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
14s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 46s{color} | {color:orange} root: The patch generated 9 new + 393 unchanged 
- 2 fixed = 402 total (was 395) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  8m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} 

[jira] [Updated] (YARN-6818) User limit per partition is not honored in branch-2.7 >=

2017-07-17 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated YARN-6818:
--
Labels:   (was: release-blocker)

> User limit per partition is not honored in branch-2.7 >=
> 
>
> Key: YARN-6818
> URL: https://issues.apache.org/jira/browse/YARN-6818
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
> Fix For: 2.7.4
>
> Attachments: YARN-6818-branch-2.7.001.patch, 
> YARN-6818-branch-2.7.002.patch
>
>
> We are seeing an issue where the user limit factor does not cap the amount of 
> resources a user can consume in a queue in a partition. Suppose you have a 
> queue with access to partition X, used resources in the default partition are 0, 
> and used resources in partition X are at the partition's user limit. This is 
> the problematic code as far as I can tell (in LeafQueue.java): {noformat}
> if (Resources
> .greaterThan(resourceCalculator, clusterResource,
> user.getUsed(label),
> limit)) {
>   // if enabled, check to see if could we potentially use this node 
> instead
>   // of a reserved node if the application has reserved containers
>   if (this.reservationsContinueLooking) {
> if (Resources.lessThanOrEqual(
> resourceCalculator,
> clusterResource,
> Resources.subtract(user.getUsed(), 
> application.getCurrentReservation()),
> limit)) {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("User " + userName + " in queue " + getQueueName()
> + " will exceed limit based on reservations - " + " consumed: 
> "
> + user.getUsed() + " reserved: "
> + application.getCurrentReservation() + " limit: " + limit);
>   }
>   Resource amountNeededToUnreserve = 
> Resources.subtract(user.getUsed(label), limit);
>   // we can only acquire a new container if we unreserve first since 
> we ignored the
>   // user limit. Choose the max of user limit or what was previously 
> set by max
>   // capacity.
>   
> currentResoureLimits.setAmountNeededUnreserve(Resources.max(resourceCalculator,
>   clusterResource, 
> currentResoureLimits.getAmountNeededUnreserve(),
>   amountNeededToUnreserve));
>   return true;
> }
>   }
>   if (LOG.isDebugEnabled()) {
> LOG.debug("User " + userName + " in queue " + getQueueName()
> + " will exceed limit - " + " consumed: "
> + user.getUsed() + " limit: " + limit);
>   }
>   return false;
> }
> {noformat}
> First it sees that the used resources in partition X are greater than the partition's 
> user limit. Then the reservation check also succeeds, because it is checking 
> {{user.getUsed() - application.getCurrentReservation() <= limit}}, and returns 
> true.
> One fix is to just change {{Resources.subtract(user.getUsed(), 
> application.getCurrentReservation())}} to 
> {{Resources.subtract(user.getUsed(label), 
> application.getCurrentReservation())}}.
> This doesn't seem to be a problem in branch-2.8 and higher, since YARN-3356 
> introduces this check: {noformat}  if (this.reservationsContinueLooking 
> && checkReservations
>   && label.equals(CommonNodeLabelsManager.NO_LABEL)) {{noformat}
> so in this case getting the used resources in the default partition seems to be 
> correct.
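
To make the proposed fix concrete, the change amounts to the following (sketch only, not the attached patch):
{noformat}
// Before: checks the user's usage in the default partition.
Resources.subtract(user.getUsed(), application.getCurrentReservation())

// After: checks the user's usage in the partition (label) actually being scheduled.
Resources.subtract(user.getUsed(label), application.getCurrentReservation())
{noformat}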



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6826) SLS NMSimulator support for Opportunistic Container Queuing

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090465#comment-16090465
 ] 

Hadoop QA commented on YARN-6826:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-sls: The patch generated 7 
new + 22 unchanged - 1 fixed = 29 total (was 23) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} hadoop-tools_hadoop-sls generated 2 new + 20 unchanged 
- 0 fixed = 22 total (was 20) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 51s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.sls.appmaster.TestAMSimulator |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6826 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877653/YARN-6826.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 812080f6daa2 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b0e78ae |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16470/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-sls.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/16470/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-sls.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16470/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-sls.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16470/testReport/ |
| modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16470/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> SLS NMSimulator support for Opportunistic Container Queuing
> 

[jira] [Updated] (YARN-6826) SLS NMSimulator support for Opportunistic Container Queuing

2017-07-17 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6826:
--
Attachment: YARN-6826.001.patch

Attaching initial patch

> SLS NMSimulator support for Opportunistic Container Queuing
> ---
>
> Key: YARN-6826
> URL: https://issues.apache.org/jira/browse/YARN-6826
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-6826.001.patch
>
>
> Allow the NMSimulator to simulate Opportunistic containers. This essentially 
> means:
> # Start Opportunistic Containers if there are available resources on the node.
> # Queue OCs if there aren't resources on the node.
> # Kill OCs if there are no resources for an incoming Guaranteed container.
> # Start Containers from the queue as soon as Containers complete / are killed.
> # Send Opportunistic Container status updates
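
An illustrative sketch of the queuing behaviour listed above; the class and field names are hypothetical and do not reflect the actual NMSimulator patch:
{noformat}
// Illustrative sketch only; a single-dimension "resource" stands in for memory/vcores.
import java.util.ArrayDeque;
import java.util.Deque;

class OpportunisticQueueSketch {
  private long availableResource;
  private final Deque<Long> queuedOpportunistic = new ArrayDeque<>();
  private final Deque<Long> runningOpportunistic = new ArrayDeque<>();

  void startOpportunistic(long demand) {
    if (demand <= availableResource) {
      availableResource -= demand;                       // 1. start OC if resources are available
      runningOpportunistic.add(demand);
    } else {
      queuedOpportunistic.add(demand);                   // 2. otherwise queue it
    }
  }

  void startGuaranteed(long demand) {
    while (demand > availableResource && !runningOpportunistic.isEmpty()) {
      availableResource += runningOpportunistic.poll();  // 3. kill OCs to make room for a guaranteed container
    }
    availableResource -= demand;
  }

  void containerCompleted(long released) {
    availableResource += released;
    // 4. start queued OCs as soon as resources free up
    while (!queuedOpportunistic.isEmpty()
        && queuedOpportunistic.peek() <= availableResource) {
      long next = queuedOpportunistic.poll();
      availableResource -= next;
      runningOpportunistic.add(next);
    }
    // 5. opportunistic container status updates would be sent from here
  }
}
{noformat}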



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6826) SLS NMSimulator support for Opportunistic Container Queuing

2017-07-17 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-6826:
--
Description: 
Allow the NMSimulator to simulate Opportunistic containers. This essentially 
means:
# Start Opportunistic Containers if there are available resources on the node.
# Queue OCs if there aren't resources on the node.
# Kill OCs if there are no resources for an incoming Guaranteed container.
# Start Containers from the queue as soon as Containers complete / are killed.
# Send Opportunistic Container status updates

> SLS NMSimulator support for Opportunistic Container Queuing
> ---
>
> Key: YARN-6826
> URL: https://issues.apache.org/jira/browse/YARN-6826
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> Allow the NMSimulator to simulate Opportunistic containers. This essentially 
> means:
> # Start Opportunistic Containers if there are available resources on the node.
> # Queue OCs if there aren't resources on the node.
> # Kill OCs if there are no resources for an incoming Guaranteed container.
> # Start Containers from the queue as soon as Containers complete / are killed.
> # Send Opportunistic Container status updates



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6792) Incorrect XML convertion in NodeIDsInfo and LabelsToNodesInfo

2017-07-17 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090363#comment-16090363
 ] 

Subru Krishnan commented on YARN-6792:
--

Thanks [~sunilg] for reviewing/committing and [~giovanni.fumarola] for 
identifying and fixing the issue!

> Incorrect XML convertion in NodeIDsInfo and LabelsToNodesInfo
> -
>
> Key: YARN-6792
> URL: https://issues.apache.org/jira/browse/YARN-6792
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Blocker
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-6792.v1.patch, YARN-6792.v2.patch
>
>
> NodeIDsInfo contains a typo and there is a missing constructor in 
> LabelsToNodesInfo. These bugs do not allow a correct conversion to XML of 
> LabelsToNodesInfo.
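
For context, the JAXB-based XML marshalling used by the RM web services needs a public no-argument constructor on the bean; a minimal sketch of that requirement (class and field names are illustrative, not the actual NodeIDsInfo/LabelsToNodesInfo classes):
{noformat}
// Minimal sketch only; names are illustrative, not the actual DAO classes.
import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "nodeIDsInfo")
@XmlAccessorType(XmlAccessType.FIELD)
class NodeIDsInfoSketch {
  // Hypothetical field; the real class wraps the node IDs for a label.
  private ArrayList<String> nodeIDs = new ArrayList<>();

  // JAXB requires a public no-arg constructor to marshal/unmarshal the bean;
  // a missing default constructor breaks the XML conversion as described above.
  public NodeIDsInfoSketch() {
  }

  public NodeIDsInfoSketch(List<String> nodeIDs) {
    this.nodeIDs = new ArrayList<>(nodeIDs);
  }
}
{noformat}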



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090317#comment-16090317
 ] 

Hadoop QA commented on YARN-5892:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.8 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
27s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
26s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
34s{color} | {color:green} branch-2.8 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  1m 
43s{color} | {color:red} hadoop-yarn in the patch failed with JDK v1.7.0_131. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 43s{color} 
| {color:red} hadoop-yarn in the patch failed with JDK v1.7.0_131. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 39s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 9 new + 286 unchanged - 1 fixed = 295 total (was 287) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
20s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdk1.8.0_131
 with JDK v1.8.0_131 generated 4 new + 973 unchanged - 0 fixed = 977 total (was 
973) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
32s{color} | {color:green} hadoop-yarn-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} unit 

[jira] [Updated] (YARN-6832) Tests use assertTrue(....equals(...)) instead of assertEquals()

2017-07-17 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated YARN-6832:
---
Attachment: YARN-6832.001.patch

> Tests use assertTrue(equals(...)) instead of assertEquals()
> ---
>
> Key: YARN-6832
> URL: https://issues.apache.org/jira/browse/YARN-6832
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.8.1, 3.0.0-alpha4
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Minor
> Attachments: YARN-6832.001.patch
>
>
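
For reference, the kind of rewrite this implies, as a generic JUnit example (not taken from the attached patch):
{noformat}
// Generic JUnit illustration; not taken from the actual patch.
// Before: on failure this only reports "expected <true> but was <false>".
assertTrue(expectedResource.equals(actualResource));

// After: on failure this reports the expected and actual values, which is far more useful.
assertEquals(expectedResource, actualResource);
{noformat}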




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6832) Tests use assertTrue(....equals(...)) instead of assertEquals()

2017-07-17 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-6832:
--

 Summary: Tests use assertTrue(equals(...)) instead of 
assertEquals()
 Key: YARN-6832
 URL: https://issues.apache.org/jira/browse/YARN-6832
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: test
Affects Versions: 3.0.0-alpha4, 2.8.1
Reporter: Daniel Templeton
Assignee: Daniel Templeton
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5731) Preemption calculation is not accurate when reserved containers are present in queue.

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090259#comment-16090259
 ] 

Hadoop QA commented on YARN-5731:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 6 new + 83 unchanged - 7 fixed = 89 total (was 90) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 39s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5731 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877632/YARN-5731.addendum.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ecab00194f10 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b0e78ae |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16467/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16467/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16467/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 

[jira] [Commented] (YARN-6818) User limit per partition is not honored in branch-2.7 >=

2017-07-17 Thread Jonathan Hung (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090256#comment-16090256
 ] 

Jonathan Hung commented on YARN-6818:
-

Great, thanks [~shv] for the commit and [~Naganarasimha]/[~sunilg] for the reviews!

> User limit per partition is not honored in branch-2.7 >=
> 
>
> Key: YARN-6818
> URL: https://issues.apache.org/jira/browse/YARN-6818
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.7.4
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>  Labels: release-blocker
> Fix For: 2.7.4
>
> Attachments: YARN-6818-branch-2.7.001.patch, 
> YARN-6818-branch-2.7.002.patch
>
>
> We are seeing an issue where the user limit factor does not cap the amount of 
> resources a user can consume in a queue in a partition. Suppose you have a 
> queue with access to partition X, used resources in the default partition are 0, 
> and used resources in partition X are at the partition's user limit. This is 
> the problematic code as far as I can tell (in LeafQueue.java): {noformat}
> if (Resources
> .greaterThan(resourceCalculator, clusterResource,
> user.getUsed(label),
> limit)) {
>   // if enabled, check to see if could we potentially use this node 
> instead
>   // of a reserved node if the application has reserved containers
>   if (this.reservationsContinueLooking) {
> if (Resources.lessThanOrEqual(
> resourceCalculator,
> clusterResource,
> Resources.subtract(user.getUsed(), 
> application.getCurrentReservation()),
> limit)) {
>   if (LOG.isDebugEnabled()) {
> LOG.debug("User " + userName + " in queue " + getQueueName()
> + " will exceed limit based on reservations - " + " consumed: 
> "
> + user.getUsed() + " reserved: "
> + application.getCurrentReservation() + " limit: " + limit);
>   }
>   Resource amountNeededToUnreserve = 
> Resources.subtract(user.getUsed(label), limit);
>   // we can only acquire a new container if we unreserve first since 
> we ignored the
>   // user limit. Choose the max of user limit or what was previously 
> set by max
>   // capacity.
>   
> currentResoureLimits.setAmountNeededUnreserve(Resources.max(resourceCalculator,
>   clusterResource, 
> currentResoureLimits.getAmountNeededUnreserve(),
>   amountNeededToUnreserve));
>   return true;
> }
>   }
>   if (LOG.isDebugEnabled()) {
> LOG.debug("User " + userName + " in queue " + getQueueName()
> + " will exceed limit - " + " consumed: "
> + user.getUsed() + " limit: " + limit);
>   }
>   return false;
> }
> {noformat}
> First it sees that the used resources in partition X are greater than the partition's 
> user limit. Then the reservation check also succeeds, because it is checking 
> {{user.getUsed() - application.getCurrentReservation() <= limit}}, and returns 
> true.
> One fix is to just change {{Resources.subtract(user.getUsed(), 
> application.getCurrentReservation())}} to 
> {{Resources.subtract(user.getUsed(label), 
> application.getCurrentReservation())}}.
> This doesn't seem to be a problem in branch-2.8 and higher, since YARN-3356 
> introduces this check: {noformat}  if (this.reservationsContinueLooking 
> && checkReservations
>   && label.equals(CommonNodeLabelsManager.NO_LABEL)) {{noformat}
> so in this case getting the used resources in the default partition seems to be 
> correct.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-07-17 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090257#comment-16090257
 ] 

Vrushali C commented on YARN-6733:
--

bq. Can you update the subapplication row and column schema javadoc? Looking into 
the javadoc, I initially thought the column families do not have related and isRelated 
entities. Looking into the implementation I found these are added!!

I will update the documentation.

bq. In SubApplicationRowKey, subAppUserId -> doAsUser? Wherever subAppUserId appears, 
can you replace it with doAsUser? The reason: the same will be used for the offline 
collector also. Though it is an internal implementation detail, better to use 
doAsUserId.

Actually, do you recollect, we all discussed this. Although the internal 
implementation right now is doAsUser in YARN, the concept is a 
sub-application user; hence the name in the timeline service is subAppUserId. I would 
like to keep it as subAppUserId if that's OK, but I can call out in the 
comments/documentation that subAppUserId is usually the doAsUser in YARN.





> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch
>
>
> After a discussion with Tez folks, we have been thinking over introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as User X and run a 
> few AMs. These AMs accept DAGs from other users, and Tez will execute these DAGs 
> with a doAs user. ATSv2 should store this information in a new table, perhaps 
> called the "sub_application" table. 
> This jira tracks the code changes needed for table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.
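
A small sketch of the doAs pattern referred to above, using the standard Hadoop UGI API; the user names and the body of the doAs block are hypothetical:
{noformat}
// Illustration of the doAs pattern; "dag-submitter" and the doAs body are hypothetical.
import java.io.IOException;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.security.UserGroupInformation;

class DoAsSketch {
  static void runAsSubApplicationUser() throws IOException, InterruptedException {
    UserGroupInformation amUser = UserGroupInformation.getCurrentUser();      // e.g. the Tez session user (User X)
    UserGroupInformation doAsUser =
        UserGroupInformation.createProxyUser("dag-submitter", amUser);        // the sub-application user

    doAsUser.doAs((PrivilegedExceptionAction<Void>) () -> {
      // Work executed on behalf of the DAG submitter; ATSv2 would record this
      // user as the sub-application user (subAppUserId) in the sub_application table.
      return null;
    });
  }
}
{noformat}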



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6798) NM startup failure with old state store due to version mismatch

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090253#comment-16090253
 ] 

Hadoop QA commented on YARN-6798:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
41s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 
53s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6798 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877635/YARN-6798.v2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1a3c587bbb28 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b0e78ae |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-YARN-Build/16468/artifact/patchprocess/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16468/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16468/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> NM startup failure with old state store due to version mismatch
> ---
>
> 

[jira] [Commented] (YARN-5731) Preemption calculation is not accurate when reserved containers are present in queue.

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090236#comment-16090236
 ] 

Hadoop QA commented on YARN-5731:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 24s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 9 new + 83 unchanged - 7 fixed = 92 total (was 90) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 32s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSAppStarvation |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5731 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877630/YARN-5731.addendum.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c3dbf74b20a1 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b0e78ae |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/16466/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16466/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16466/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 

[jira] [Updated] (YARN-6798) NM startup failure with old state store due to version mismatch

2017-07-17 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated YARN-6798:
-
Attachment: YARN-6798.v2.patch

Updated Botong's patch with the newer version organization.

> NM startup failure with old state store due to version mismatch
> ---
>
> Key: YARN-6798
> URL: https://issues.apache.org/jira/browse/YARN-6798
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha4
>Reporter: Ray Chiang
>Assignee: Botong Huang
> Attachments: YARN-6798.v1.patch, YARN-6798.v2.patch
>
>
> YARN-6703 rolled back the state store version number for the RM from 2.0 to 
> 1.4.
> YARN-6127 bumped the version for the NM to 3.0
> private static final Version CURRENT_VERSION_INFO = 
> Version.newInstance(3, 0);
> YARN-5049 bumped the version for the NM to 2.0
> private static final Version CURRENT_VERSION_INFO = 
> Version.newInstance(2, 0);
> During an upgrade, all NMs died after upgrading a C6 cluster from alpha2 to 
> alpha4.
> {noformat}
> 2017-07-07 15:48:17,259 FATAL 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting 
> NodeManager
> org.apache.hadoop.service.ServiceStateException: java.io.IOException: 
> Incompatible version for NM state: expecting NM state version 3.0, but 
> loading version 2.0
> at 
> org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:246)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:307)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:748)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:809)
> Caused by: java.io.IOException: Incompatible version for NM state: expecting 
> NM state version 3.0, but loading version 2.0
> at 
> org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.checkVersion(NMLeveldbStateStoreService.java:1454)
> at 
> org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.initStorage(NMLeveldbStateStoreService.java:1308)
> at 
> org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.serviceInit(NMStateStoreService.java:307)
> at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
> ... 5 more
> 2017-07-07 15:48:17,277 INFO 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager: SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down NodeManager at xxx.gce.cloudera.com/aa.bb.cc.dd
> /
> {noformat}
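
For context, these state-store version checks are typically satisfied only when the major versions match, which is why a 2.0 store cannot be loaded by code expecting 3.0. A rough sketch of that check (illustrative, not the exact NMLeveldbStateStoreService code):
{noformat}
// Illustrative sketch of a major-version compatibility check; not the exact
// NMLeveldbStateStoreService implementation.
import java.io.IOException;

class VersionCheckSketch {
  static void checkVersion(int currentMajor, int currentMinor,
                           int loadedMajor, int loadedMinor) throws IOException {
    // Same major version: considered compatible (minor differences are tolerated).
    if (loadedMajor == currentMajor) {
      return;
    }
    // Different major version: refuse to start, which is what produced the fatal error above.
    throw new IOException("Incompatible version for NM state: expecting NM state version "
        + currentMajor + "." + currentMinor + ", but loading version "
        + loadedMajor + "." + loadedMinor);
  }
}
{noformat}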



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6831) Miscellaneous refactoring changes of ContainScheduler

2017-07-17 Thread Haibo Chen (JIRA)
Haibo Chen created YARN-6831:


 Summary: Miscellaneous refactoring changes of ContainScheduler 
 Key: YARN-6831
 URL: https://issues.apache.org/jira/browse/YARN-6831
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Haibo Chen
Assignee: Haibo Chen






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5731) Preemption calculation is not accurate when reserved containers are present in queue.

2017-07-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090122#comment-16090122
 ] 

Sunil G commented on YARN-5731:
---

+1 on latest addendum patch. Committing tomorrow if there are no objections.

> Preemption calculation is not accurate when reserved containers are present 
> in queue.
> -
>
> Key: YARN-5731
> URL: https://issues.apache.org/jira/browse/YARN-5731
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.8.0
>Reporter: Sunil G
>Assignee: Wangda Tan
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-5731.001.patch, YARN-5731.002.patch, 
> YARN-5731.addendum.003.patch, YARN-5731.addendum.004.patch, 
> YARN-5731.branch-2.002.patch, YARN-5731-branch-2.8.001.patch
>
>
> YARN Capacity Scheduler does not kick Preemption under below scenario.
> Two queues A and B each with 50% capacity and 100% maximum capacity and user 
> limit factor 2. Minimum Container size is 1536MB and total cluster resource 
> is 40GB. Now submit the first job which needs 1536MB for AM and 9 task 
> containers each 4.5GB to queue A. Job will get 8 containers total (AM 1536MB 
> + 7 * 4.5GB = 33GB) and the cluster usage is 93.8% and the job has reserved a 
> container of 4.5GB.
> Now when next job (1536MB for AM and 9 task containers each 4.5GB) is 
> submitted onto queue B. The job hangs in ACCEPTED state forever and RM 
> scheduler never kicks in Preemption. (RM UI Image 2 attached)
> Test Case:
> ./spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client 
> --queue A --executor-memory 4G --executor-cores 4 --num-executors 9 
> ../lib/spark-examples*.jar 100
> After a minute..
> ./spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client 
> --queue B --executor-memory 4G --executor-cores 4 --num-executors 9 
> ../lib/spark-examples*.jar 100
> Credit to: [~Prabhu Joseph] for bug investigation and troubleshooting.
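
For reproduction, the queue setup in the scenario above roughly corresponds to the following CapacityScheduler properties; this is a sketch only, and the values would normally be set in capacity-scheduler.xml:
{noformat}
// Sketch of the two-queue setup described above, using standard CapacityScheduler property names.
import org.apache.hadoop.conf.Configuration;

class QueueSetupSketch {
  static Configuration buildQueueConfig() {
    Configuration conf = new Configuration();
    conf.set("yarn.scheduler.capacity.root.queues", "A,B");
    for (String q : new String[] {"A", "B"}) {
      conf.setFloat("yarn.scheduler.capacity.root." + q + ".capacity", 50.0f);          // 50% capacity
      conf.setFloat("yarn.scheduler.capacity.root." + q + ".maximum-capacity", 100.0f); // 100% maximum capacity
      conf.setFloat("yarn.scheduler.capacity.root." + q + ".user-limit-factor", 2.0f);  // user limit factor 2
    }
    return conf;
  }
}
{noformat}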



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5731) Preemption calculation is not accurate when reserved containers are present in queue.

2017-07-17 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5731:
-
Attachment: YARN-5731.addendum.004.patch

As suggested offline by [~sunilg], uploaded a patch with updated config names. 

> Preemption calculation is not accurate when reserved containers are present 
> in queue.
> -
>
> Key: YARN-5731
> URL: https://issues.apache.org/jira/browse/YARN-5731
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.8.0
>Reporter: Sunil G
>Assignee: Wangda Tan
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-5731.001.patch, YARN-5731.002.patch, 
> YARN-5731.addendum.003.patch, YARN-5731.addendum.004.patch, 
> YARN-5731.branch-2.002.patch, YARN-5731-branch-2.8.001.patch
>
>
> YARN Capacity Scheduler does not kick Preemption under below scenario.
> Two queues A and B each with 50% capacity and 100% maximum capacity and user 
> limit factor 2. Minimum Container size is 1536MB and total cluster resource 
> is 40GB. Now submit the first job which needs 1536MB for AM and 9 task 
> containers each 4.5GB to queue A. Job will get 8 containers total (AM 1536MB 
> + 7 * 4.5GB = 33GB) and the cluster usage is 93.8% and the job has reserved a 
> container of 4.5GB.
> Now when next job (1536MB for AM and 9 task containers each 4.5GB) is 
> submitted onto queue B. The job hangs in ACCEPTED state forever and RM 
> scheduler never kicks in Preemption. (RM UI Image 2 attached)
> Test Case:
> ./spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client 
> --queue A --executor-memory 4G --executor-cores 4 --num-executors 9 
> ../lib/spark-examples*.jar 100
> After a minute..
> ./spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client 
> --queue B --executor-memory 4G --executor-cores 4 --num-executors 9 
> ../lib/spark-examples*.jar 100
> Credit to: [~Prabhu Joseph] for bug investigation and troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6794) Explicitly promote OPPORTUNISITIC containers locally at the node where they're running

2017-07-17 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-6794:
-
Summary: Explicitly promote OPPORTUNISITIC containers locally at the node 
where they're running  (was: promote OPPORTUNISITC containers locally at the 
node where they're running)

> Explicitly promote OPPORTUNISITIC containers locally at the node where 
> they're running
> --
>
> Key: YARN-6794
> URL: https://issues.apache.org/jira/browse/YARN-6794
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, scheduler
>Reporter: Haibo Chen
>Assignee: Haibo Chen
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2017-07-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090103#comment-16090103
 ] 

Rohith Sharma K S commented on YARN-4455:
-

+1, LGTM. I will commit it later today if there are no more objections. cc 
[~vrushalic], would you like to review the final patch? 

> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4455-YARN-5355.01.patch, 
> YARN-4455-YARN-5355.02.patch, YARN-4455-YARN-5355.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5731) Preemption calculation is not accurate when reserved containers are present in queue.

2017-07-17 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-5731:
-
Attachment: YARN-5731.addendum.003.patch

Instead of hard-coding the logic, we discussed and decided to make the behavior 
configurable.

See the javadoc of 
{{CapacitySchedulerConfiguration#ADDTIONAL_BALANCE_QUEUE_BASED_ON_RESERVED_RESOURCE}}
 for more details.

> Preemption calculation is not accurate when reserved containers are present 
> in queue.
> -
>
> Key: YARN-5731
> URL: https://issues.apache.org/jira/browse/YARN-5731
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.8.0
>Reporter: Sunil G
>Assignee: Wangda Tan
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-5731.001.patch, YARN-5731.002.patch, 
> YARN-5731.addendum.003.patch, YARN-5731.branch-2.002.patch, 
> YARN-5731-branch-2.8.001.patch
>
>
> YARN Capacity Scheduler does not trigger preemption in the scenario below.
> Two queues A and B, each with 50% capacity, 100% maximum capacity, and a user 
> limit factor of 2. The minimum container size is 1536MB and the total cluster 
> resource is 40GB. Now submit the first job, which needs 1536MB for the AM and 9 
> task containers of 4.5GB each, to queue A. The job gets 8 containers in total 
> (AM 1536MB + 7 * 4.5GB = 33GB), the cluster usage is 93.8% (33GB used plus a 
> 4.5GB reserved container out of 40GB), and the job has a 4.5GB container 
> reserved.
> Now the next job (1536MB for the AM and 9 task containers of 4.5GB each) is 
> submitted to queue B. That job hangs in the ACCEPTED state forever and the RM 
> scheduler never triggers preemption. (RM UI Image 2 attached)
> Test Case:
> ./spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client 
> --queue A --executor-memory 4G --executor-cores 4 --num-executors 9 
> ../lib/spark-examples*.jar 100
> After a minute..
> ./spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client 
> --queue B --executor-memory 4G --executor-cores 4 --num-executors 9 
> ../lib/spark-examples*.jar 100
> Credit to: [~Prabhu Joseph] for bug investigation and troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-5731) Preemption calculation is not accurate when reserved containers are present in queue.

2017-07-17 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan reopened YARN-5731:
--

> Preemption calculation is not accurate when reserved containers are present 
> in queue.
> -
>
> Key: YARN-5731
> URL: https://issues.apache.org/jira/browse/YARN-5731
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.8.0
>Reporter: Sunil G
>Assignee: Wangda Tan
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: YARN-5731.001.patch, YARN-5731.002.patch, 
> YARN-5731.branch-2.002.patch, YARN-5731-branch-2.8.001.patch
>
>
> YARN Capacity Scheduler does not trigger preemption in the scenario below.
> Two queues A and B, each with 50% capacity, 100% maximum capacity, and a user 
> limit factor of 2. The minimum container size is 1536MB and the total cluster 
> resource is 40GB. Now submit the first job, which needs 1536MB for the AM and 9 
> task containers of 4.5GB each, to queue A. The job gets 8 containers in total 
> (AM 1536MB + 7 * 4.5GB = 33GB), the cluster usage is 93.8% (33GB used plus a 
> 4.5GB reserved container out of 40GB), and the job has a 4.5GB container 
> reserved.
> Now the next job (1536MB for the AM and 9 task containers of 4.5GB each) is 
> submitted to queue B. That job hangs in the ACCEPTED state forever and the RM 
> scheduler never triggers preemption. (RM UI Image 2 attached)
> Test Case:
> ./spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client 
> --queue A --executor-memory 4G --executor-cores 4 --num-executors 9 
> ../lib/spark-examples*.jar 100
> After a minute..
> ./spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client 
> --queue B --executor-memory 4G --executor-cores 4 --num-executors 9 
> ../lib/spark-examples*.jar 100
> Credit to: [~Prabhu Joseph] for bug investigation and troubleshooting.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6830) Support quoted strings for environment variables

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16090064#comment-16090064
 ] 

Hadoop QA commented on YARN-6830:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common: 
The patch generated 0 new + 5 unchanged - 2 fixed = 5 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  2m 30s{color} 
| {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 37s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.logaggregation.TestAggregatedLogDeletionService |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6830 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877609/YARN-6830.001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8bfae8dbb914 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b0e78ae |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16463/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16463/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16463/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Support quoted strings for environment variables
> 
>
> Key: YARN-6830
> URL: https://issues.apache.org/jira/browse/YARN-6830
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: 

[jira] [Updated] (YARN-5892) Support user-specific minimum user limit percentage in Capacity Scheduler

2017-07-17 Thread Eric Payne (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5892?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Payne updated YARN-5892:
-
Attachment: YARN-5892.branch-2.8.017.patch

Uploading YARN-5892.branch-2.8.017.patch because I used 
{{ConcurrentHashMap.newKeySet()}} in the previous patch, and that method 
doesn't exist in JDK 1.7.

Replaced it with {{new ConcurrentHashMap().keySet("dummy")}}.
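
For context, a small illustrative example (not the actual YARN-5892 patch) of two 
ways to build a concurrent set of names: {{ConcurrentHashMap.newKeySet()}} requires 
Java 8, while the older {{Collections.newSetFromMap}} idiom also works on the Java 7 
baseline of branch-2.8; the patch itself uses the {{keySet("dummy")}} view noted above.

{code:java}
import java.util.Collections;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: two ways to get a concurrent Set<String>.
public class ConcurrentUserSet {
    public static void main(String[] args) {
        Set<String> java8Style = ConcurrentHashMap.newKeySet();          // Java 8+
        Set<String> portable = Collections.newSetFromMap(
            new ConcurrentHashMap<String, Boolean>());                   // Java 6+
        java8Style.add("jane");
        portable.add("jane");
        System.out.println(java8Style + " " + portable);
    }
}
{code}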

> Support user-specific minimum user limit percentage in Capacity Scheduler
> -
>
> Key: YARN-5892
> URL: https://issues.apache.org/jira/browse/YARN-5892
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: capacityscheduler
>Reporter: Eric Payne
>Assignee: Eric Payne
> Fix For: 3.0.0-alpha3
>
> Attachments: Active users highlighted.jpg, YARN-5892.001.patch, 
> YARN-5892.002.patch, YARN-5892.003.patch, YARN-5892.004.patch, 
> YARN-5892.005.patch, YARN-5892.006.patch, YARN-5892.007.patch, 
> YARN-5892.008.patch, YARN-5892.009.patch, YARN-5892.010.patch, 
> YARN-5892.012.patch, YARN-5892.013.patch, YARN-5892.014.patch, 
> YARN-5892.015.patch, YARN-5892.branch-2.015.patch, 
> YARN-5892.branch-2.016.patch, YARN-5892.branch-2.8.016.patch, 
> YARN-5892.branch-2.8.017.patch
>
>
> Currently, in the capacity scheduler, the {{minimum-user-limit-percent}} 
> property is per queue. A cluster admin should be able to set the minimum user 
> limit percent on a per-user basis within the queue.
> This functionality is needed so that when intra-queue preemption is enabled 
> (YARN-4945 / YARN-2113), some users can be deemed as more important than 
> other users, and resources from VIP users won't be as likely to be preempted.
> For example, if the {{getstuffdone}} queue has a MULP of 25 percent, but user 
> {{jane}} is a power user of queue {{getstuffdone}} and needs to be guaranteed 
> 75 percent, the properties for {{getstuffdone}} and {{jane}} would look like 
> this:
> {code}
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.minimum-user-limit-percent</name>
>     <value>25</value>
>   </property>
>   <property>
>     <name>yarn.scheduler.capacity.root.getstuffdone.jane.minimum-user-limit-percent</name>
>     <value>75</value>
>   </property>
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6830) Support quoted strings for environment variables

2017-07-17 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-6830:
--
Attachment: YARN-6830.001.patch

> Support quoted strings for environment variables
> 
>
> Key: YARN-6830
> URL: https://issues.apache.org/jira/browse/YARN-6830
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-6830.001.patch
>
>
> There are cases where it is necessary to allow quoted string literals 
> within environment variable values when they are passed via the yarn command 
> line interface.
> For example, consider the following environment variables for an MR map task.
> {{MODE=bar}}
> {{IMAGE_NAME=foo}}
> {{MOUNTS=/tmp/foo,/tmp/bar}}
> When running the MR job, these environment variables are supplied as a comma 
> delimited string.
> {{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}
> In this case, {{MOUNTS}} will be parsed and added to the task environment as 
> {{MOUNTS=/tmp/foo}}. Any attempt to quote the embedded comma separated value 
> results in the quote characters becoming part of the value, and parsing still 
> breaks down at the comma.
> This issue is to allow quoting of the comma separated value (with an escaped 
> double or single quote). This was mentioned on YARN-4595 and will impact 
> YARN-5534 as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6775) CapacityScheduler: Improvements to assignContainers, avoid unnecessary canAssignToUser/Queue calls

2017-07-17 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089936#comment-16089936
 ] 

Nathan Roberts commented on YARN-6775:
--

Attached screenshots that show a couple of before/after metrics. The change went 
active early on the 14th.
1) rmeventprocbusy is the average CPU busy of the event processor thread.
2) rpcprocessingtimeschedulerport is the average RPC processing time for the 
scheduler port.


> CapacityScheduler: Improvements to assignContainers, avoid unnecessary 
> canAssignToUser/Queue calls
> --
>
> Key: YARN-6775
> URL: https://issues.apache.org/jira/browse/YARN-6775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Fix For: 3.0.0-beta1
>
> Attachments: rmeventprocbusy.png, rpcprocessingtimeschedulerport.png, 
> YARN-6775.001.patch, YARN-6775.002.patch, YARN-6775.branch-2.002.patch, 
> YARN-6775.branch-2.8.002.patch
>
>
> There are several things in assignContainers() that are done multiple times 
> even though the result cannot change (canAssignToUser, canAssignToQueue). Add 
> some local caching to take advantage of this fact.
> Will post patch shortly. Patch includes a simple throughput test that 
> demonstrates when we have users at their user-limit, the number of 
> NodeUpdateSchedulerEvents we can process can be improved from 13K/sec to 
> 50K/sec.
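
For illustration, a minimal sketch of the local-caching idea described in the quoted 
summary above: memoize the per-user check inside one scheduling pass so the same 
answer is not recomputed for every request. Names and structure here are assumptions, 
not the actual CapacityScheduler code.

{code:java}
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: cache canAssignToUser() per user for one pass.
public class AssignContainersCachingSketch {
    private final Map<String, Boolean> userCheckCache = new HashMap<>();

    // Stand-in for the real user-limit computation (assumed, for illustration).
    private boolean canAssignToUser(String user) {
        return !user.startsWith("overlimit-");
    }

    private boolean cachedCanAssignToUser(String user) {
        return userCheckCache.computeIfAbsent(user, this::canAssignToUser);
    }

    void assignContainersOnce(Iterable<String> usersWithPendingRequests) {
        userCheckCache.clear();  // the cached answers are only valid for one pass
        for (String user : usersWithPendingRequests) {
            if (cachedCanAssignToUser(user)) {
                System.out.println("would try to allocate for " + user);
            }
        }
    }

    public static void main(String[] args) {
        new AssignContainersCachingSketch()
            .assignContainersOnce(Arrays.asList("alice", "alice", "overlimit-bob"));
    }
}
{code}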



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089900#comment-16089900
 ] 

Hadoop QA commented on YARN-6130:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
41s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
53s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
13s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
18s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
15s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
YARN-5355 has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
2s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in YARN-5355 has 5 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
17s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 in YARN-5355 has 8 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
43s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client in 
YARN-5355 has 2 extant Findbugs warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
49s{color} | {color:red} 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app in 
YARN-5355 has 3 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
18s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
45s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  8s{color} | {color:orange} root: The patch generated 9 new + 393 unchanged 
- 2 fixed = 402 total (was 395) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  5m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} 

[jira] [Updated] (YARN-6775) CapacityScheduler: Improvements to assignContainers, avoid unnecessary canAssignToUser/Queue calls

2017-07-17 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated YARN-6775:
-
Attachment: rpcprocessingtimeschedulerport.png

> CapacityScheduler: Improvements to assignContainers, avoid unnecessary 
> canAssignToUser/Queue calls
> --
>
> Key: YARN-6775
> URL: https://issues.apache.org/jira/browse/YARN-6775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Fix For: 3.0.0-beta1
>
> Attachments: rmeventprocbusy.png, rpcprocessingtimeschedulerport.png, 
> YARN-6775.001.patch, YARN-6775.002.patch, YARN-6775.branch-2.002.patch, 
> YARN-6775.branch-2.8.002.patch
>
>
> There are several things in assignContainers() that are done multiple times 
> even though the result cannot change (canAssignToUser, canAssignToQueue). Add 
> some local caching to take advantage of this fact.
> Will post patch shortly. Patch includes a simple throughput test that 
> demonstrates when we have users at their user-limit, the number of 
> NodeUpdateSchedulerEvents we can process can be improved from 13K/sec to 
> 50K/sec.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6775) CapacityScheduler: Improvements to assignContainers, avoid unnecessary canAssignToUser/Queue calls

2017-07-17 Thread Nathan Roberts (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nathan Roberts updated YARN-6775:
-
Attachment: rmeventprocbusy.png

> CapacityScheduler: Improvements to assignContainers, avoid unnecessary 
> canAssignToUser/Queue calls
> --
>
> Key: YARN-6775
> URL: https://issues.apache.org/jira/browse/YARN-6775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Fix For: 3.0.0-beta1
>
> Attachments: rmeventprocbusy.png, YARN-6775.001.patch, 
> YARN-6775.002.patch, YARN-6775.branch-2.002.patch, 
> YARN-6775.branch-2.8.002.patch
>
>
> There are several things in assignContainers() that are done multiple times 
> even though the result cannot change (canAssignToUser, canAssignToQueue). Add 
> some local caching to take advantage of this fact.
> Will post patch shortly. Patch includes a simple throughput test that 
> demonstrates when we have users at their user-limit, the number of 
> NodeUpdateSchedulerEvents we can process can be improved from 13K/sec to 
> 50K/sec.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6775) CapacityScheduler: Improvements to assignContainers, avoid unnecessary canAssignToUser/Queue calls

2017-07-17 Thread Nathan Roberts (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089894#comment-16089894
 ] 

Nathan Roberts commented on YARN-6775:
--

[~leftnoteasy], I applied YARN-6775.branch-2.002.patch to branch-2 and 
YARN-6775.branch-2.8.002.patch to branch-2.8. I think they're OK; let me know 
if I'm missing something. 




> CapacityScheduler: Improvements to assignContainers, avoid unnecessary 
> canAssignToUser/Queue calls
> --
>
> Key: YARN-6775
> URL: https://issues.apache.org/jira/browse/YARN-6775
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Nathan Roberts
>Assignee: Nathan Roberts
> Fix For: 3.0.0-beta1
>
> Attachments: YARN-6775.001.patch, YARN-6775.002.patch, 
> YARN-6775.branch-2.002.patch, YARN-6775.branch-2.8.002.patch
>
>
> There are several things in assignContainers() that are done multiple times 
> even though the result cannot change (canAssignToUser, canAssignToQueue). Add 
> some local caching to take advantage of this fact.
> Will post patch shortly. Patch includes a simple throughput test that 
> demonstrates when we have users at their user-limit, the number of 
> NodeUpdateSchedulerEvents we can process can be improved from 13K/sec to 
> 50K/sec.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2017-07-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089857#comment-16089857
 ] 

Varun Saxena commented on YARN-4455:


The checkstyle issues cannot be fixed; they are due to the number of method parameters.

> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4455-YARN-5355.01.patch, 
> YARN-4455-YARN-5355.02.patch, YARN-4455-YARN-5355.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089813#comment-16089813
 ] 

Hadoop QA commented on YARN-4455:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} YARN-5355 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
42s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
15s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
33s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase
 in YARN-5355 has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} YARN-5355 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 41s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 2 new + 
42 unchanged - 5 fixed = 44 total (was 47) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
55s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase in the 
patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  5m 
24s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ac17dc |
| JIRA Issue | YARN-4455 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877574/YARN-4455-YARN-5355.03.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  

[jira] [Commented] (YARN-6150) TestContainerManagerSecurity tests for Yarn Server are flakey

2017-07-17 Thread Sonia Garudi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089767#comment-16089767
 ] 

Sonia Garudi commented on YARN-6150:


[~rchiang] I have tested the latest patch and the tests pass with the changes 
made.  

> TestContainerManagerSecurity tests for Yarn Server are flakey
> -
>
> Key: YARN-6150
> URL: https://issues.apache.org/jira/browse/YARN-6150
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Daniel Sturman
>Assignee: Daniel Sturman
> Attachments: YARN-6150.001.patch, YARN-6150.002.patch, 
> YARN-6150.003.patch, YARN-6150.004.patch, YARN-6150.005.patch, 
> YARN-6150.006.patch, YARN-6150.007.patch
>
>
> Repeated runs of 
> {{org.apache.hadoop.yarn.server.TestContainerManagedSecurity}} can either 
> pass or fail on repeated runs on the same codebase.  Also, the two runs (one 
> in secure mode, one without security) aren't well labeled in JUnit.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5534) Allow whitelisted volume mounts

2017-07-17 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089762#comment-16089762
 ] 

Daniel Templeton commented on YARN-5534:


I don't see any need to restrict the mount point in the container.

> Allow whitelisted volume mounts 
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: Shane Kumpf
> Attachments: YARN-5534.001.patch, YARN-5534.002.patch
>
>
> 1. Introduction
> Mounting files or directories from the host is one way of passing 
> configuration and other information into a Docker container. 
> We could allow the user to set a list of mounts in the environment of 
> ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2). 
> These would be mounted read-only at the specified target locations. This has 
> been resolved in YARN-4595.
> 2. Problem Definition
> But mounting arbitrary volumes into a Docker container can be a security risk.
> 3. Possible Solutions
> One approach to providing safe mounts is to allow the cluster administrator to 
> configure a set of parent directories as whitelisted mount directories.
> Add a property named yarn.nodemanager.volume-mounts.white-list; when the 
> container executor does mount checking, only the allowed directories or their 
> sub-directories can be mounted. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6822) TestContainerManagerSecurity tests fail on trunk

2017-07-17 Thread Sonia Garudi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089759#comment-16089759
 ] 

Sonia Garudi commented on YARN-6822:


[~rchiang] I tested the latest patch from YARN-6150 and it seems to resolve the 
errors. 

> TestContainerManagerSecurity tests fail on trunk
> 
>
> Key: YARN-6822
> URL: https://issues.apache.org/jira/browse/YARN-6822
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
> Environment: Ubuntu 14.04 
> x86, ppc64le
> $ java -version
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
> OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
>Reporter: Sonia Garudi
>
> {code}
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager[0]
> java.lang.Exception: test timed out after 12 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processWaitTimeAndRetryInfo(RetryInvocationHandler.java:130)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:107)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:348)
>   at com.sun.proxy.$Proxy91.startContainers(Unknown Source)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:557)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:478)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:253)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:158)
> {code}
> {code}
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager[1]
> java.lang.Exception: test timed out after 12 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.processWaitTimeAndRetryInfo(RetryInvocationHandler.java:130)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:107)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:348)
>   at com.sun.proxy.$Proxy91.startContainers(Unknown Source)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.startContainer(TestContainerManagerSecurity.java:557)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testStartContainer(TestContainerManagerSecurity.java:478)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testNMTokens(TestContainerManagerSecurity.java:253)
>   at 
> org.apache.hadoop.yarn.server.TestContainerManagerSecurity.testContainerManager(TestContainerManagerSecurity.java:158)
> {code}
> Logs -
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/463/testReport/org.apache.hadoop.yarn.server/TestContainerManagerSecurity/testContainerManager_0_/
> https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/463/testReport/org.apache.hadoop.yarn.server/TestContainerManagerSecurity/testContainerManager_1_/



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5534) Allow whitelisted volume mounts

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089755#comment-16089755
 ] 

Hadoop QA commented on YARN-5534:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
40s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
51s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 in trunk has 5 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
18s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
57s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 57s{color} 
| {color:red} hadoop-yarn in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 47s{color} | {color:orange} hadoop-yarn-project/hadoop-yarn: The patch 
generated 4 new + 210 unchanged - 0 fixed = 214 total (was 210) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
19s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 19s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m  4s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-5534 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12842850/YARN-5534.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux df770f559e03 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b0e78ae |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Updated] (YARN-4455) Support fetching metrics by time range

2017-07-17 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-4455:
---
Attachment: YARN-4455-YARN-5355.03.patch

> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4455-YARN-5355.01.patch, 
> YARN-4455-YARN-5355.02.patch, YARN-4455-YARN-5355.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2017-07-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089752#comment-16089752
 ] 

Varun Saxena commented on YARN-4455:


Thanks [~rohithsharma], will update a new patch fixing the nit pointed out above

> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4455-YARN-5355.01.patch, 
> YARN-4455-YARN-5355.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5534) Allow whitelisted volume mounts

2017-07-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089694#comment-16089694
 ] 

Shane Kumpf commented on YARN-5534:
---

[~ebadger] - sorry for the delay here. I'm actively working on this. 

A couple of comments on the approach:
# YARN-4595 addressed read-only mounts for local resources. I'm planning to 
consolidate the mount whitelist and local resource mounts into a single ENV 
variable.
# Local resources will be implicitly added to the whitelist in read-only mode.
# There is currently an issue with multiple mounts and MapReduce due to how 
environment variables are parsed. See YARN-6830.
# The admin will define a comma separated list of <path>:<mode> (ro or rw) 
mounts, and the requesting user will supply <source>:<destination>:<mode> - the 
mode must be equal to or more restrictive than the admin-defined mode (i.e. if 
the admin defines a mount as rw, the user can bind mount it as rw OR ro); see 
the sketch below.

One question here: does anyone feel there is value in allowing the admin to 
restrict the destination mount point within the container? I can't think of a 
use case for this, and expect most admins would likely just wildcard the field 
for all the mounts. Currently, the plan for the admin-supplied whitelist does 
not include restricting the destination within the container.
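
As referenced in item 4, here is a rough sketch of the whitelist check, assuming the 
admin list consists of <path>:<mode> entries and the request is 
<source>:<destination>:<mode>; the class and method names are made up for 
illustration, and this is not the NodeManager or container-executor implementation.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: "ro" requests are always acceptable for whitelisted
// paths, "rw" requests only where the admin marked the path prefix as rw.
public class MountWhitelistSketch {
    private final Map<String, String> whitelist = new HashMap<>();

    public MountWhitelistSketch(String adminSpec) {
        for (String entry : adminSpec.split(",")) {        // e.g. "/etc/hadoop/conf:ro"
            String[] parts = entry.split(":");
            whitelist.put(parts[0], parts[1]);
        }
    }

    public boolean isAllowed(String requestedMount) {      // e.g. "/data/x:/work:rw"
        String[] parts = requestedMount.split(":");
        String source = parts[0];
        String mode = parts[2];
        for (Map.Entry<String, String> entry : whitelist.entrySet()) {
            boolean underWhitelistedDir = source.equals(entry.getKey())
                || source.startsWith(entry.getKey() + "/");
            boolean modeOk = "ro".equals(mode) || "rw".equals(entry.getValue());
            if (underWhitelistedDir && modeOk) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        MountWhitelistSketch check =
            new MountWhitelistSketch("/etc/hadoop/conf:ro,/data/scratch:rw");
        System.out.println(check.isAllowed("/data/scratch/job1:/work:rw")); // true
        System.out.println(check.isAllowed("/etc/passwd:/secrets:ro"));     // false
    }
}
{code}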

> Allow whitelisted volume mounts 
> 
>
> Key: YARN-5534
> URL: https://issues.apache.org/jira/browse/YARN-5534
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: luhuichun
>Assignee: Shane Kumpf
> Attachments: YARN-5534.001.patch, YARN-5534.002.patch
>
>
> 1. Introduction
> Mounting files or directories from the host is one way of passing 
> configuration and other information into a Docker container. 
> We could allow the user to set a list of mounts in the environment of 
> ContainerLaunchContext (e.g. /dir1:/targetdir1,/dir2:/targetdir2). 
> These would be mounted read-only at the specified target locations. This has 
> been resolved in YARN-4595.
> 2. Problem Definition
> But mounting arbitrary volumes into a Docker container can be a security risk.
> 3. Possible Solutions
> One approach to providing safe mounts is to allow the cluster administrator to 
> configure a set of parent directories as whitelisted mount directories.
> Add a property named yarn.nodemanager.volume-mounts.white-list; when the 
> container executor does mount checking, only the allowed directories or their 
> sub-directories can be mounted. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6830) Support quoted strings for environment variables

2017-07-17 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089675#comment-16089675
 ] 

Shane Kumpf commented on YARN-6830:
---

I've been looking into this as part of YARN-5534 and will take ownership.

> Support quoted strings for environment variables
> 
>
> Key: YARN-6830
> URL: https://issues.apache.org/jira/browse/YARN-6830
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Shane Kumpf
>
> There are cases where it is necessary to allow quoted string literals 
> within environment variable values when they are passed via the yarn command 
> line interface.
> For example, consider the following environment variables for an MR map task.
> {{MODE=bar}}
> {{IMAGE_NAME=foo}}
> {{MOUNTS=/tmp/foo,/tmp/bar}}
> When running the MR job, these environment variables are supplied as a comma 
> delimited string.
> {{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}
> In this case, {{MOUNTS}} will be parsed and added to the task environment as 
> {{MOUNTS=/tmp/foo}}. Any attempt to quote the embedded comma separated value 
> results in the quote characters becoming part of the value, and parsing still 
> breaks down at the comma.
> This issue is to allow quoting of the comma separated value (with an escaped 
> double or single quote). This was mentioned on YARN-4595 and will impact 
> YARN-5534 as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-6830) Support quoted strings for environment variables

2017-07-17 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf reassigned YARN-6830:
-

Assignee: Shane Kumpf

> Support quoted strings for environment variables
> 
>
> Key: YARN-6830
> URL: https://issues.apache.org/jira/browse/YARN-6830
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>
> There are cases where it is necessary to allow quoted string literals 
> within environment variable values when they are passed via the yarn command 
> line interface.
> For example, consider the following environment variables for an MR map task.
> {{MODE=bar}}
> {{IMAGE_NAME=foo}}
> {{MOUNTS=/tmp/foo,/tmp/bar}}
> When running the MR job, these environment variables are supplied as a comma 
> delimited string.
> {{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}
> In this case, {{MOUNTS}} will be parsed and added to the task environment as 
> {{MOUNTS=/tmp/foo}}. Any attempt to quote the embedded comma separated value 
> results in the quote characters becoming part of the value, and parsing still 
> breaks down at the comma.
> This issue is to allow quoting of the comma separated value (with an escaped 
> double or single quote). This was mentioned on YARN-4595 and will impact 
> YARN-5534 as well.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-6830) Support quoted strings for environment variables

2017-07-17 Thread Shane Kumpf (JIRA)
Shane Kumpf created YARN-6830:
-

 Summary: Support quoted strings for environment variables
 Key: YARN-6830
 URL: https://issues.apache.org/jira/browse/YARN-6830
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Shane Kumpf


There are cases where it is necessary to allow quoted string literals 
within environment variable values when they are passed via the yarn command 
line interface.

For example, consider the following environment variables for an MR map task.

{{MODE=bar}}
{{IMAGE_NAME=foo}}
{{MOUNTS=/tmp/foo,/tmp/bar}}

When running the MR job, these environment variables are supplied as a comma 
delimited string.

{{-Dmapreduce.map.env="MODE=bar,IMAGE_NAME=foo,MOUNTS=/tmp/foo,/tmp/bar"}}

In this case, {{MOUNTS}} will be parsed and added to the task environment as 
{{MOUNTS=/tmp/foo}}. Any attempt to quote the embedded comma separated value 
results in the quote characters becoming part of the value, and parsing still 
breaks down at the comma.

This issue is to allow quoting of the comma separated value (with an escaped 
double or single quote). This was mentioned on YARN-4595 and will impact 
YARN-5534 as well.
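
To make the parsing problem concrete, here is a minimal, hypothetical splitter that 
keeps commas inside quoted values together; it only illustrates the desired behavior 
and is not the parser used by YARN or MapReduce.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: split a comma-delimited KEY=VALUE list while preserving
// commas that appear inside single or double quotes.
public class QuotedEnvSplitter {
    public static List<String> split(String spec) {
        List<String> parts = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        char quote = 0;                            // 0 means "not inside quotes"
        for (char c : spec.toCharArray()) {
            if (quote == 0 && (c == '"' || c == '\'')) {
                quote = c;                         // entering a quoted region
            } else if (quote != 0 && c == quote) {
                quote = 0;                         // leaving the quoted region
            } else if (c == ',' && quote == 0) {
                parts.add(current.toString());     // unquoted comma ends one KEY=VALUE
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        if (current.length() > 0) {
            parts.add(current.toString());
        }
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(split("MODE=bar,IMAGE_NAME=foo,MOUNTS=\"/tmp/foo,/tmp/bar\""));
        // -> [MODE=bar, IMAGE_NAME=foo, MOUNTS=/tmp/foo,/tmp/bar]
    }
}
{code}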



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2017-07-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089657#comment-16089657
 ] 

Rohith Sharma K S commented on YARN-4455:
-

Given that other filters such as *createdtimestart* and *createdtimeend* already 
exist, I am fine with going ahead with the current approach. 

> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4455-YARN-5355.01.patch, 
> YARN-4455-YARN-5355.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6733) Add table for storing sub-application entities

2017-07-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089654#comment-16089654
 ] 

Rohith Sharma K S commented on YARN-6733:
-

I went through the whole patch, and overall it is reasonable. 
nits:
# Can you update the sub-application row and column schema javadoc? Looking at 
the javadoc, I initially thought the column families did not have related and 
isRelated entities; looking at the implementation, I found they are added.
# In SubApplicationRowKey, subAppUserId -> doAsUser? Wherever subAppUserId 
appears, can you replace it with doAsUser? The reason is that the same will be 
used for the offline collector as well. Though it is an internal implementation 
detail, it is better to use doAsUser. 

> Add table for storing sub-application entities
> --
>
> Key: YARN-6733
> URL: https://issues.apache.org/jira/browse/YARN-6733
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: IMG_7040.JPG, YARN-6733-YARN-5355.001.patch, 
> YARN-6733-YARN-5355.002.patch, YARN-6733-YARN-5355.003.patch, 
> YARN-6733-YARN-5355.004.patch
>
>
> After a discussion with Tez folks, we have been thinking about introducing a 
> table to store sub-application information.
> For example, a Tez session may run for a certain period as user X and run a 
> few AMs. These AMs accept DAGs from other users. Tez will execute these DAGs 
> with a doAs user. ATSv2 should store this information in a new table, perhaps 
> called the "sub_application" table. 
> This jira tracks the code changes needed for the table schema creation.
> I will file other jiras for writing to that table, updating the user name 
> fields to include the sub-application user, etc.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089645#comment-16089645
 ] 

Varun Saxena commented on YARN-6130:


Attaching a patch after addressing comments above.

> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4455) Support fetching metrics by time range

2017-07-17 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089637#comment-16089637
 ] 

Varun Saxena commented on YARN-4455:


[~rohithsharma], your thoughts on above?

> Support fetching metrics by time range
> --
>
> Key: YARN-4455
> URL: https://issues.apache.org/jira/browse/YARN-4455
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Affects Versions: YARN-5355
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: YARN-5355, yarn-5355-merge-blocker
> Attachments: YARN-4455-YARN-5355.01.patch, 
> YARN-4455-YARN-5355.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-6130) [ATSv2 Security] Generate a delegation token for AM when app collector is created and pass it to AM via NM and RM

2017-07-17 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-6130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-6130:
---
Attachment: YARN-6130-YARN-5355.03.patch

> [ATSv2 Security] Generate a delegation token for AM when app collector is 
> created and pass it to AM via NM and RM
> -
>
> Key: YARN-6130
> URL: https://issues.apache.org/jira/browse/YARN-6130
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Varun Saxena
>Assignee: Varun Saxena
>  Labels: yarn-5355-merge-blocker
> Attachments: YARN-6130-YARN-5355.01.patch, 
> YARN-6130-YARN-5355.02.patch, YARN-6130-YARN-5355.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6825) RM quit due to ApplicationStateData exceed the limit size of znode in zk

2017-07-17 Thread Feng Yuan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089630#comment-16089630
 ] 

Feng Yuan commented on YARN-6825:
-

Hi all, could we handle this problem the same way as YARN-5006, i.e. do the 
same size check in the applicationUpdate, attemptAdd, and attemptUpdate 
operations that YARN-5006 does at submission? (A rough sketch of such a check 
is given below.)
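
A minimal, self-contained sketch of the kind of check being suggested, assuming 
the serialized ApplicationStateData is available as a byte array; the class, 
method, and margin values here are illustrative assumptions, not the actual 
ResourceManager code.
{code}
public class AppStateSizeCheckSketch {
  // ZK's default jute.maxbuffer is 1 MB; keep a safety margin ("dead zone")
  // so that small later growth does not cause thrashing around the limit.
  private static final int MAX_ZNODE_BYTES = 1024 * 1024;
  private static final int SAFETY_MARGIN_BYTES = 64 * 1024;  // assumed margin

  static void checkSize(byte[] serializedAppState, String appId) {
    int limit = MAX_ZNODE_BYTES - SAFETY_MARGIN_BYTES;
    if (serializedAppState.length > limit) {
      // In the RM this would become a rejection/failure of the submit or update
      // instead of letting the ZK write fail and bring down the state store.
      throw new IllegalArgumentException("State data for " + appId + " is "
          + serializedAppState.length + " bytes, exceeding the allowed " + limit);
    }
  }
}
{code}
The same check would then be invoked from the applicationUpdate, attemptAdd and 
attemptUpdate paths, not only at submission.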


> RM quit due to ApplicationStateData exceed the limit size of znode in zk
> 
>
> Key: YARN-6825
> URL: https://issues.apache.org/jira/browse/YARN-6825
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>
> YARN-5006 fixes this issue by strictly validating the ApplicationStateData 
> length against 1MB (the default max jute buffer) during application submission 
> only. There is a possibility of thrashing, as a dead zone was not properly 
> defined/taken care of.
> But it does not consider scenarios where ApplicationStateData can grow at a 
> later point in time, i.e.
> # If an app is submitted with less than 1MB of state data and is later 
> updated, e.g. the queue name, lifetime value, or priority is changed, the app 
> update call will be sent to the state store and cause the same issue because 
> the ApplicationStateData length has increased.
> # Even if there is no app update, the final state is stored in ZK. This adds 
> several fields such as finishTime, finalState, and finalApplicationState, 
> which increases the size of ApplicationStateData.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6678) Committer thread crashes with IllegalStateException in async-scheduling mode of CapacityScheduler

2017-07-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089514#comment-16089514
 ] 

Hadoop QA commented on YARN-6678:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 42m 27s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | YARN-6678 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12877316/YARN-6678.005.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c03e686f4f3f 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 02b141a |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/16458/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/16458/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/16458/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Committer thread crashes with IllegalStateException in async-scheduling mode 
> of CapacityScheduler
> 

[jira] [Commented] (YARN-5146) [YARN-3368] Supports Fair Scheduler in new YARN UI

2017-07-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089500#comment-16089500
 ] 

Sunil G commented on YARN-5146:
---

Attaching the trace. As far as I can see, the construction of the URL for the 
queues GET query is wrong: it is not taking the RM address and other params. 
You might need to follow the existing adapters which use the abstract adapter; 
we set the address, cluster, etc. there.
{code}

VM190:1 GET http://localhost:4200/yarn-queue.yarn-queues 404 (Not Found)
(anonymous) @ VM190:1
send @ jquery.js:8630
ajax @ jquery.js:8166
(anonymous) @ rest-adapter.js:764
initializePromise @ ember.debug.js:52308
Promise @ ember.debug.js:54158
ajax @ rest-adapter.js:729
query @ rest-adapter.js:380
ember$data$lib$system$store$finders$$_query @ finders.js:144
query @ store.js:863
model @ cluster-overview.js:9
deserialize @ ember.debug.js:25918
applyHook @ ember.debug.js:52043
runSharedModelHook @ ember.debug.js:50251
getModel @ ember.debug.js:50167
(anonymous) @ ember.debug.js:51911
tryCatch @ ember.debug.js:52258
invokeCallback @ ember.debug.js:52273
publish @ ember.debug.js:52241
(anonymous) @ ember.debug.js:30835
invoke @ ember.debug.js:320
flush @ ember.debug.js:384
flush @ ember.debug.js:185
end @ ember.debug.js:563
run @ ember.debug.js:685
join @ ember.debug.js:705
run.join @ ember.debug.js:20147
(anonymous) @ ember.debug.js:20210
fire @ jquery.js:3099
fireWith @ jquery.js:3211
ready @ jquery.js:3417
completed @ jquery.js:3433
application.js:11 Error: Adapter operation failed
at ember$data$lib$adapters$errors$$AdapterError.EmberError 
(ember.debug.js:15860)
at ember$data$lib$adapters$errors$$AdapterError (errors.js:19)
at Class.handleResponse (rest-adapter.js:677)
at Class.hash.error (rest-adapter.js:757)
at fire (jquery.js:3099)
at Object.fireWith [as rejectWith] (jquery.js:3211)
at done (jquery.js:8266)
at XMLHttpRequest. (jquery.js:8605)
{code}

> [YARN-3368] Supports Fair Scheduler in new YARN UI
> --
>
> Key: YARN-5146
> URL: https://issues.apache.org/jira/browse/YARN-5146
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Wangda Tan
>Assignee: Abdullah Yousufi
> Attachments: YARN-5146.001.patch, YARN-5146.002.patch, 
> YARN-5146.003.patch
>
>
> The current implementation in branch YARN-3368 only supports the capacity 
> scheduler; we want to make it support the fair scheduler.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6819) Application report fails if app rejected due to nodesize

2017-07-17 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089399#comment-16089399
 ] 

Sunil G commented on YARN-6819:
---

A minor nit.
{{APP_SAVE_REJECTED}} could be renamed to {{APP_SAVE_FAILED}}. Since it is a 
store operation, it is better to report and handle it as a failure rather than 
a rejection.

> Application report fails if app rejected due to nodesize
> 
>
> Key: YARN-6819
> URL: https://issues.apache.org/jira/browse/YARN-6819
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-6819.001.patch, YARN-6819.002.patch, 
> YARN-6819.003.patch
>
>
> In YARN-5006 the application is rejected when the nodesize limit is exceeded. 
> In {{FinalSavingTransition}}, stateBeforeFinalSaving is not set after skipping 
> the save to the store, which causes the application report to fail.
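
To make the ordering issue concrete, here is a minimal, hypothetical sketch 
(simplified names, not the real RMAppImpl code) of why stateBeforeFinalSaving 
has to be captured even on the path that skips the save to the store.
{code}
enum AppState { SUBMITTED, FINAL_SAVING, FAILED }

class FinalSavingSketch {
  AppState currentState = AppState.SUBMITTED;
  AppState stateBeforeFinalSaving;  // read later when building the app report

  void transitionToFinalSaving(boolean skipStoreSave) {
    // Capture the pre-final-saving state unconditionally, before any early return.
    stateBeforeFinalSaving = currentState;
    currentState = AppState.FINAL_SAVING;
    if (skipStoreSave) {
      return;  // e.g. the state data exceeded the allowed size limit
    }
    // ... otherwise issue the async save to the RM state store here ...
  }
}
{code}
If the capture happens only on the normal save path, the skipped-save case leaves 
stateBeforeFinalSaving unset and the application report fails as described above.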



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-6825) RM quit due to ApplicationStateData exceed the limit size of znode in zk

2017-07-17 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16089355#comment-16089355
 ] 

Rohith Sharma K S commented on YARN-6825:
-

[~bibinchundatt] would you like to provide a patch for this as per the previous 
comment? 

> RM quit due to ApplicationStateData exceed the limit size of znode in zk
> 
>
> Key: YARN-6825
> URL: https://issues.apache.org/jira/browse/YARN-6825
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>
> YARN-5006 fixes this issue by strictly validating the ApplicationStateData 
> length against 1MB (the default max jute buffer) during application submission 
> only. There is a possibility of thrashing, as a dead zone was not properly 
> defined/taken care of.
> But it does not consider scenarios where ApplicationStateData can grow at a 
> later point in time, i.e.
> # If an app is submitted with less than 1MB of state data and is later 
> updated, e.g. the queue name, lifetime value, or priority is changed, the app 
> update call will be sent to the state store and cause the same issue because 
> the ApplicationStateData length has increased.
> # Even if there is no app update, the final state is stored in ZK. This adds 
> several fields such as finishTime, finalState, and finalApplicationState, 
> which increases the size of ApplicationStateData.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org