[jira] [Commented] (YARN-4280) CapacityScheduler reservations may not prevent indefinite postponement on a busy cluster

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398671#comment-15398671
 ] 

Hadoop QA commented on YARN-4280:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 288 unchanged - 5 fixed = 289 total (was 293) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 24s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 48s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820864/YARN-4280.014.patch |
| JIRA Issue | YARN-4280 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c4924b124011 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8ebf2e9 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12558/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12558/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12558/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> CapacityScheduler reservations may not prevent indefinite postponement on a 
> busy cluster
> 

[jira] [Commented] (YARN-4888) Changes in RM container allocation for identifying resource-requests explicitly

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398647#comment-15398647
 ] 

Hadoop QA commented on YARN-4888:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 51s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 15s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 43s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 580 unchanged - 2 fixed = 581 total (was 582) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 12m 54s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 32m 55s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 51s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820858/YARN-4888-v3.patch |
| JIRA Issue | YARN-4888 |
| Optional Tests |  asflicense  compile  cc  mvnsite  javac  unit  javadoc  
mvninstall  findbugs  checkstyle  |
| uname | Linux e16f9b6190ce 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision 

[jira] [Commented] (YARN-4997) Update fair scheduler to use pluggable auth provider

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398640#comment-15398640
 ] 

Hadoop QA commented on YARN-4997:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 47s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 55s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 51s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 8 
new + 321 unchanged - 3 fixed = 329 total (was 324) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 20s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 35s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 2s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 29s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
|  |  Incorrect lazy initialization of static field 
org.apache.hadoop.yarn.security.YarnAuthorizationProvider.authorizer in 
org.apache.hadoop.yarn.security.YarnAuthorizationProvider.destory()  At 
YarnAuthorizationProvider.java:field 
org.apache.hadoop.yarn.security.YarnAuthorizationProvider.authorizer in 
org.apache.hadoop.yarn.security.YarnAuthorizationProvider.destory()  At 
YarnAuthorizationProvider.java:[lines 65-67] |
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestFairSchedulerPlanFollower |
|   | hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSLeafQueue |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12819010/YARN-4997-001.patch |
| JIRA Issue | YARN-4997 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 

[jira] [Updated] (YARN-4280) CapacityScheduler reservations may not prevent indefinite postponement on a busy cluster

2016-07-28 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated YARN-4280:
--
Attachment: YARN-4280.014.patch

You are right [~jlowe]. This patch is an attempt to address that with an 
additional if check to update the assignment value only for the first 
QUEUE_LIMIT received. Does that seem reasonable?

Appreciate any comments/corrections. Thank you so much.
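For discussion, a hypothetical sketch of the guard described above; the names (SkipReason, AssignmentTracker) are placeholders and this is not the actual YARN-4280.014 patch:

{code}
// Illustrates "update the assignment value only for the first QUEUE_LIMIT
// received"; placeholder types, not the real CapacityScheduler classes.
enum SkipReason { NONE, QUEUE_LIMIT }

class AssignmentTracker {
  private Object assignment;        // stand-in for the recorded assignment value
  private boolean queueLimitSeen;   // set once the first QUEUE_LIMIT is recorded

  void record(Object newAssignment, SkipReason reason) {
    if (reason == SkipReason.QUEUE_LIMIT) {
      if (!queueLimitSeen) {        // only the first QUEUE_LIMIT updates it
        assignment = newAssignment;
        queueLimitSeen = true;
      }
      return;
    }
    assignment = newAssignment;     // all other outcomes update as before
  }
}
{code}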

> CapacityScheduler reservations may not prevent indefinite postponement on a 
> busy cluster
> 
>
> Key: YARN-4280
> URL: https://issues.apache.org/jira/browse/YARN-4280
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.6.1, 2.8.0, 2.7.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-4280-branch-2.009.patch, 
> YARN-4280-branch-2.8.001.patch, YARN-4280-branch-2.8.002.patch, 
> YARN-4280-branch-2.8.003.patch, YARN-4280.001.patch, YARN-4280.002.patch, 
> YARN-4280.003.patch, YARN-4280.004.patch, YARN-4280.005.patch, 
> YARN-4280.006.patch, YARN-4280.007.patch, YARN-4280.008.patch, 
> YARN-4280.008_.patch, YARN-4280.009.patch, YARN-4280.010.patch, 
> YARN-4280.011.patch, YARN-4280.012.patch, YARN-4280.013.patch, 
> YARN-4280.014.patch
>
>
> Consider the following scenario:
> There are 2 queues, A (25% of the total capacity) and B (75%); both can run at 
> total cluster capacity. There are 2 applications: appX runs on Queue A, always 
> asking for 1 GB containers (non-AM), and appY runs on Queue B asking for 2 GB 
> containers.
> The user limit is high enough for an application to reach 100% of the cluster 
> resources.
> appX is running at total cluster capacity, full with 1 GB containers, 
> releasing only one container at a time. appY comes in with a request for a 
> 2 GB container, but only 1 GB is free. Ideally, since appY is in the 
> underserved queue, it has higher priority and should reserve for its 2 GB 
> request. But since this request puts alloc+reserve above the total capacity 
> of the cluster, the reservation is not made. appX comes in with a 1 GB 
> request, and since 1 GB is still available, the request is allocated.
> This can continue indefinitely, causing priority inversion.
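For illustration, a minimal self-contained sketch of the loop described above; the queue shares, container sizes, and the "no reservation above cluster capacity" rule are simplifying assumptions, not CapacityScheduler code:

{code}
// Toy model of the postponement: appY's 2 GB ask never succeeds because each
// freed 1 GB is immediately handed back to appX, and no reservation is made.
public class PostponementSketch {
  public static void main(String[] args) {
    final int clusterGb = 100;
    int usedByAppX = 100;            // appX fills the cluster with 1 GB containers
    boolean appYAllocated = false;

    for (int cycle = 0; cycle < 1000 && !appYAllocated; cycle++) {
      usedByAppX -= 1;               // appX releases exactly one 1 GB container
      int freeGb = clusterGb - usedByAppX;

      if (freeGb >= 2) {
        appYAllocated = true;        // appY's 2 GB request finally fits
      } else {
        // Reserving 2 GB would push alloc+reserve past cluster capacity, so no
        // reservation is made; appX's next 1 GB request wins the freed space.
        usedByAppX += 1;
      }
    }
    System.out.println("appY allocated? " + appYAllocated);  // prints false
  }
}
{code}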






[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398600#comment-15398600
 ] 

Hadoop QA commented on YARN-5382:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 10 new + 214 unchanged - 1 fixed = 224 total (was 215) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 46s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 59s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820851/YARN-5382.07.patch |
| JIRA Issue | YARN-5382 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e00ada0eb0f3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8ebf2e9 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12555/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12555/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12555/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12555/console |
| Powered by | Apache Yetus 0.3.0   

[jira] [Commented] (YARN-5360) Decouple host user and Docker container user

2016-07-28 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398593#comment-15398593
 ] 

Sidharta Seethana commented on YARN-5360:
-

To clarify a few things here: yes, customized images are required in some 
cases (especially in secure mode) to make apps work for certain users. This is 
a limitation we have to work with for the moment given the Hadoop security 
model; it may not be reasonable or practical to drop it altogether except 
under controlled situations. Also, log aggregation does not work in secure mode 
if you drop "--user" (it works in non-secure mode, I think, but I'll have to 
check the code/tests again). Artifact deletion will not work if the artifacts 
are created as a different user in the Docker container (artifact cleanup is 
done as the 'run as' user).

In your first table above, the yarn/nobody case likely did not work because the 
sequenceiq image is based on CentOS (nobody uid=99) and the system you were 
testing on was not CentOS (Ubuntu? nobody uid=65534). We have tested Spark with 
other images on CentOS (I have test images on Docker Hub if you'd like to try 
them). I am pretty sure [~templedf] has successfully run Spark using the 
current implementation as well ([~templedf]: please confirm).

If the discussion here is now only about dropping "--user" in certain cases, 
that is captured in YARN-4266.




 

> Decouple host user and Docker container user
> 
>
> Key: YARN-5360
> URL: https://issues.apache.org/jira/browse/YARN-5360
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>
> There is *a dependency between the job-submitting user and the user in the 
> Docker image* in LCE currently. For instance, in order to run the Docker 
> container as the yarn user, we can choose to set 
> "yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user" to yarn 
> and leave 
> "yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users" at its 
> default (true). Then LCE will choose yarn (UID maybe 1001) as the user 
> running jobs.
> LCE will mount the generated launch_container.sh (owned by the running-job 
> user) and /etc/passwd (*currently the code mounts it to the container's 
> /etc/password, which I think is a mistake*) into the Docker container and 
> uses the "docker run --user=" option to get it done internally.
> Mounting /etc/passwd into the container is not a good choice, because it 
> overrides the original users defined in the Docker image. As far as I know, 
> since Docker v1.8 (or maybe earlier), the "docker run --user=" option accepts 
> a UID, and *when passing a UID, the user does not have to exist in the 
> container*. So we could use the UID instead of the user name to construct the 
> docker run command and eliminate the dependency on creating the same user in 
> the Docker image. This gives LCE the ability to launch any Docker container 
> safely regardless of what users are in it.
> But this is not enough to decouple the host user and the Docker container 
> user. The final solution we are searching for is focused on allowing users to 
> run their Docker images flexibly without involving YARN dependencies, while 
> making sure the container won't bring in security risks.
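For illustration, a minimal sketch of launching a container with a numeric UID rather than a user name; it uses a plain ProcessBuilder with an example image and UID, not the actual LCE/DockerLinuxContainerRuntime code:

{code}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Passes "--user=<UID>" so the user does not have to exist inside the image.
public class DockerUidLaunchSketch {
  public static void main(String[] args) throws IOException, InterruptedException {
    int uid = 1001;                      // example UID of the job-submitting user
    List<String> cmd = Arrays.asList(
        "docker", "run", "--rm",
        "--user=" + uid,                 // UID form: no /etc/passwd entry required
        "centos:7",                      // example image
        "id", "-u");                     // prints the effective UID inside the container
    Process p = new ProcessBuilder(cmd).inheritIO().start();
    System.out.println("exit code: " + p.waitFor());
  }
}
{code}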






[jira] [Commented] (YARN-5425) TestDirectoryCollection.testCreateDirectories failed

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398578#comment-15398578
 ] 

Hadoop QA commented on YARN-5425:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 47s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
24s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 36 unchanged - 1 fixed = 36 total (was 37) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 54s {color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.net.TestClusterTopology |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820693/YARN-5425.00.patch |
| JIRA Issue | YARN-5425 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a132cfb944c4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7086fc7 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12554/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12554/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12554/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12554/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> TestDirectoryCollection.testCreateDirectories failed
> 

[jira] [Updated] (YARN-4888) Changes in RM container allocation for identifying resource-requests explicitly

2016-07-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-4888:
-
Attachment: YARN-4888-v3.patch

Attaching v3 patch that fixes the test case failures and checkstyle issues.

> Changes in RM container allocation for identifying resource-requests 
> explicitly
> ---
>
> Key: YARN-4888
> URL: https://issues.apache.org/jira/browse/YARN-4888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4888-WIP.patch, YARN-4888-v0.patch, 
> YARN-4888-v2.patch, YARN-4888-v3.patch, YARN-4888.001.patch
>
>
> YARN-4879 puts forward the notion of identifying allocate requests 
> explicitly. This JIRA is to track the changes in RM app scheduling data 
> structures to accomplish it. Please refer to the design doc in the parent 
> JIRA for details.






[jira] [Commented] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-07-28 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398569#comment-15398569
 ] 

Rohith Sharma K S commented on YARN-5287:
-

The current change looks good to me. As I am not very comfortable with C code, 
I would like to hear other folks' opinions as well.
[~xgong], would you please help us review this patch?

> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Minor
> Attachments: YARN-5287-tmp.patch, YARN-5287.003.patch, 
> YARN-5287.004.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories (i.e., /hadoop/yarn/local/usercache/... by default) if the cluster 
> has been configured with a restrictive umask, e.g. umask 077. The job fails 
> for the following reason:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750
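For illustration, a minimal JDK-only sketch of the fix idea (set the directory permissions explicitly instead of relying on the process umask); the path is an example and this is not the container-executor change under review:

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Under umask 077 a newly created directory ends up 700; setting the
// permissions explicitly afterwards yields 750 regardless of the umask.
public class LocalDirPermissionSketch {
  public static void main(String[] args) throws IOException {
    Path appCache = Paths.get("/tmp/usercache-demo/appcache");   // example path
    Files.createDirectories(appCache);                           // honors the umask
    Set<PosixFilePermission> perms750 = PosixFilePermissions.fromString("rwxr-x---");
    Files.setPosixFilePermissions(appCache, perms750);           // force 750
    System.out.println(PosixFilePermissions.toString(
        Files.getPosixFilePermissions(appCache)));
  }
}
{code}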






[jira] [Commented] (YARN-5360) Decouple host user and Docker container user

2016-07-28 Thread Zhankun Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398563#comment-15398563
 ] 

Zhankun Tang commented on YARN-5360:


[~sidharta-s], 
The first testing result table shows us that an existing Docker image won't 
work if no correct user/UID is created in it. What I cannot comprehend here is 
that you said "dropping --user" breaks existing use cases. How did users adopt 
LCE with Docker? Did they add the same user/UID to the Docker image to work 
through it?

Yes, fixing the Docker running user alone won't solve the whole thing. And I 
agree with you that we shouldn't have different default behaviors. We should 
find a better solution.



> Decouple host user and Docker container user
> 
>
> Key: YARN-5360
> URL: https://issues.apache.org/jira/browse/YARN-5360
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>
> There is *a dependency between the job-submitting user and the user in the 
> Docker image* in LCE currently. For instance, in order to run the Docker 
> container as the yarn user, we can choose to set 
> "yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user" to yarn 
> and leave 
> "yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users" at its 
> default (true). Then LCE will choose yarn (UID maybe 1001) as the user 
> running jobs.
> LCE will mount the generated launch_container.sh (owned by the running-job 
> user) and /etc/passwd (*currently the code mounts it to the container's 
> /etc/password, which I think is a mistake*) into the Docker container and 
> uses the "docker run --user=" option to get it done internally.
> Mounting /etc/passwd into the container is not a good choice, because it 
> overrides the original users defined in the Docker image. As far as I know, 
> since Docker v1.8 (or maybe earlier), the "docker run --user=" option accepts 
> a UID, and *when passing a UID, the user does not have to exist in the 
> container*. So we could use the UID instead of the user name to construct the 
> docker run command and eliminate the dependency on creating the same user in 
> the Docker image. This gives LCE the ability to launch any Docker container 
> safely regardless of what users are in it.
> But this is not enough to decouple the host user and the Docker container 
> user. The final solution we are searching for is focused on allowing users to 
> run their Docker images flexibly without involving YARN dependencies, while 
> making sure the container won't bring in security risks.






[jira] [Comment Edited] (YARN-5375) invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures

2016-07-28 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397800#comment-15397800
 ] 

sandflee edited comment on YARN-5375 at 7/29/16 1:28 AM:
-

Updated the patch to fix test failures.
bq. 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
The scheduler event handler throws exceptions that cause the test to fail; 
disable Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY in drainDispatcher, as the 
scheduler event dispatcher does.
bq. org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes
MockRM is not started, so waitForState will block; just start the RM.



was (Author: sandflee):
update the patch to fix test failures.
bq. 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
scheduler event handle exceptions cause test failed, disable 
Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY in drainDispatcher
bq. org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes
MockRM is not started, so waitForState will be blocked, just start the rm.


> invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures
> --
>
> Key: YARN-5375
> URL: https://issues.apache.org/jira/browse/YARN-5375
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: sandflee
>Assignee: sandflee
> Attachments: YARN-5375.01.patch, YARN-5375.03.patch, 
> YARN-5375.04.patch, YARN-5375.05.patch
>
>
> We have seen many test failures where an RMApp/RMAppAttempt reaches some state 
> but some events are not yet processed in the RM event queue or the scheduler 
> event queue, causing the test to fail. It seems we could implicitly invoke 
> drainEvents (which should also drain scheduler events) in some MockRM methods 
> like waitForState.
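For discussion, a hypothetical simplification of that idea (waitForState implicitly drains both the RM and scheduler event queues); the real MockRM internals differ:

{code}
// Simplified stand-in for MockRM, not the real class: waitForState() drains
// the RM and scheduler event queues itself, so tests no longer race the
// dispatchers before checking the app state.
interface EventQueue {
  void drain() throws InterruptedException;
}

class MockRmSketch {
  private final EventQueue rmDispatcher;
  private final EventQueue schedulerDispatcher;

  MockRmSketch(EventQueue rm, EventQueue scheduler) {
    this.rmDispatcher = rm;
    this.schedulerDispatcher = scheduler;
  }

  void waitForState(String appId, String expectedState) throws InterruptedException {
    drainEvents();   // implicit drain before polling the state, as proposed above
    // ... then poll the RMApp/RMAppAttempt state as the existing helper does ...
  }

  void drainEvents() throws InterruptedException {
    rmDispatcher.drain();          // RM async dispatcher
    schedulerDispatcher.drain();   // scheduler event dispatcher as well
  }
}
{code}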






[jira] [Updated] (YARN-5382) RM does not audit log kill request for active applications

2016-07-28 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5382:
-
Attachment: YARN-5382.07.patch

The previous patch v06 was incomplete. Uploading a new patch v07 that includes 
the missing class.

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, 
> YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch, 
> YARN-5382.06.patch, YARN-5382.07.patch
>
>
> ClientRMService will audit a kill request, but only if it either fails to 
> issue the kill or the kill is sent to an already finished application. It does 
> not create a log entry when the application is active, which is arguably the 
> most important case to audit.
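For illustration, a hypothetical sketch of auditing the active-application path as well; auditLog(...) is a placeholder for the RM audit logger, and this is not the real ClientRMService code:

{code}
// The kill request is audited on failure and on already-finished applications,
// but previously not when the application is active; the sketch adds that case.
class KillRequestAuditSketch {
  void forceKillApplication(String user, String appId, boolean finished, boolean allowed) {
    if (!allowed) {
      auditLog(user, "Kill Application Request", "UNAUTHORIZED", appId);
      return;
    }
    if (finished) {
      auditLog(user, "Kill Application Request", "ALREADY_FINISHED", appId);
      return;
    }
    // Active application: the path that previously produced no audit entry.
    auditLog(user, "Kill Application Request", "SUCCESS", appId);
    // ... submit the kill event to the application ...
  }

  private void auditLog(String user, String operation, String outcome, String appId) {
    System.out.printf("AUDIT user=%s op=%s outcome=%s app=%s%n",
        user, operation, outcome, appId);
  }
}
{code}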






[jira] [Commented] (YARN-4888) Changes in RM container allocation for identifying resource-requests explicitly

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398543#comment-15398543
 ] 

Hadoop QA commented on YARN-4888:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 46s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 47s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 5 
new + 577 unchanged - 2 fixed = 582 total (was 579) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 21s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 47s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 85m 36s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
|   | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
|   | hadoop.yarn.server.resourcemanager.TestResourceManager |
|   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue |
|   | 
hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicy
 |
|   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestReservations |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerResizing |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| 

[jira] [Assigned] (YARN-5425) TestDirectoryCollection.testCreateDirectories failed

2016-07-28 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned YARN-5425:
---

Assignee: Akira Ajisaka  (was: Yufei Gu)

> TestDirectoryCollection.testCreateDirectories failed
> 
>
> Key: YARN-5425
> URL: https://issues.apache.org/jira/browse/YARN-5425
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Akira Ajisaka
> Attachments: YARN-5425.00.patch
>
>
> {code}
> java.lang.AssertionError: local dir parent not created with proper 
> permissions expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.TestDirectoryCollection.testCreateDirectories(TestDirectoryCollection.java:113)
> {code}






[jira] [Commented] (YARN-5425) TestDirectoryCollection.testCreateDirectories failed

2016-07-28 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398531#comment-15398531
 ] 

Akira Ajisaka commented on YARN-5425:
-

Thanks [~boky01] and [~yufeigu] for the comments. Submitting patch.

> TestDirectoryCollection.testCreateDirectories failed
> 
>
> Key: YARN-5425
> URL: https://issues.apache.org/jira/browse/YARN-5425
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.9.0
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5425.00.patch
>
>
> {code}
> java.lang.AssertionError: local dir parent not created with proper 
> permissions expected: but was:
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.TestDirectoryCollection.testCreateDirectories(TestDirectoryCollection.java:113)
> {code}






[jira] [Commented] (YARN-5435) [Regression] QueueCapacities not being updated for dynamic ReservationQueue

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398524#comment-15398524
 ] 

Hadoop QA commented on YARN-5435:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 46s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 46s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 56s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820838/YARN-5435.v2.patch |
| JIRA Issue | YARN-5435 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 46d0fb6ecebd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7086fc7 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12553/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12553/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [Regression] QueueCapacities not being updated for dynamic ReservationQueue
> ---
>
> Key: YARN-5435
> URL: https://issues.apache.org/jira/browse/YARN-5435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sean Po
>Assignee: Sean Po
>  Labels: regression
> 

[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398509#comment-15398509
 ] 

Allen Wittenauer commented on YARN-5121:


asf license issues are unrelated. unit test failures are unrelated. and the new 
compiler warning is trivial to fix on commit.

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, YARN-5121.06.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.






[jira] [Commented] (YARN-3876) get_executable() assumes everything is Linux

2016-07-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398508#comment-15398508
 ] 

Allen Wittenauer commented on YARN-3876:


FYI: I've got a patch to fix this and a few other things in YARN-5121.

> get_executable() assumes everything is Linux
> 
>
> Key: YARN-3876
> URL: https://issues.apache.org/jira/browse/YARN-3876
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.7.0
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Attachments: YARN-3876.001.patch, YARN-3876.002.patch, 
> YARN-3876.003.patch
>
>
> get_executable() in container-executor.c is non-portable and is hard-coded to 
> assume Linux's /proc. 






[jira] [Updated] (YARN-5435) [Regression] QueueCapacities not being updated for dynamic ReservationQueue

2016-07-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5435:
-
Labels: regression  (was: )

> [Regression] QueueCapacities not being updated for dynamic ReservationQueue
> ---
>
> Key: YARN-5435
> URL: https://issues.apache.org/jira/browse/YARN-5435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sean Po
>Assignee: Sean Po
>  Labels: regression
> Attachments: YARN-5435.v1.patch, YARN-5435.v2.patch
>
>
> YARN-1707 added dynamic queues (ReservationQueue) to CapacityScheduler. The 
> QueueCapacities data structure was added subsequently but is not being 
> updated correctly for ReservationQueue. This JIRA tracks the changes required 
> to update QueueCapacities of ReservationQueue correctly.






[jira] [Updated] (YARN-5435) [Regression] QueueCapacities not being updated for dynamic ReservationQueue

2016-07-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5435:
-
Summary: [Regression] QueueCapacities not being updated for dynamic 
ReservationQueue  (was: QueueCapacities not being updated for dynamic 
ReservationQueue)

> [Regression] QueueCapacities not being updated for dynamic ReservationQueue
> ---
>
> Key: YARN-5435
> URL: https://issues.apache.org/jira/browse/YARN-5435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5435.v1.patch, YARN-5435.v2.patch
>
>
> YARN-1707 added dynamic queues (ReservationQueue) to CapacityScheduler. The 
> QueueCapacities data structure was added subsequently but is not being 
> updated correctly for ReservationQueue. This JIRA tracks the changes required 
> to update QueueCapacities of ReservationQueue correctly.






[jira] [Updated] (YARN-5435) QueueCapacities not being updated for dynamic ReservationQueue

2016-07-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5435:
-
Description: YARN-1707 added dynamic queues (ReservationQueue) to 
CapacityScheduler. The QueueCapacities data structure was added subsequently 
but is not being updated correctly for ReservationQueue. This JIRA tracks the 
changes required to update QueueCapacities of ReservationQueue correctly.  
(was: YARN-1707 added dynamic queues ( to CapacityScheduler. The 
QueueCapacities data structure was added subsequently but is not being updated 
correctly for )

> QueueCapacities not being updated for dynamic ReservationQueue
> --
>
> Key: YARN-5435
> URL: https://issues.apache.org/jira/browse/YARN-5435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5435.v1.patch, YARN-5435.v2.patch
>
>
> YARN-1707 added dynamic queues (ReservationQueue) to CapacityScheduler. The 
> QueueCapacities data structure was added subsequently but is not being 
> updated correctly for ReservationQueue. This JIRA tracks the changes required 
> to update QueueCapacities of ReservationQueue correctly.






[jira] [Updated] (YARN-5435) QueueCapacities not being updated for dynamic ReservationQueue

2016-07-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5435:
-
Description: YARN-1707 added dynamic queues ( to CapacityScheduler. The 
QueueCapacities data structure was added subsequently but is not being updated 
correctly for   (was: YARN-1707 added dynamic queues to CapacityScheduler. The 
QueueCapacities data structure was added subsequently but is not being updated 
correctly for )

> QueueCapacities not being updated for dynamic ReservationQueue
> --
>
> Key: YARN-5435
> URL: https://issues.apache.org/jira/browse/YARN-5435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5435.v1.patch, YARN-5435.v2.patch
>
>
> YARN-1707 added dynamic queues ( to CapacityScheduler. The QueueCapacities 
> data structure was added subsequently but is not being updated correctly for 






[jira] [Updated] (YARN-5435) QueueCapacities not being updated for dynamic ReservationQueue

2016-07-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5435:
-
Description: YARN-1707 added dynamic queues to CapacityScheduler. The 
QueueCapacities data structure was added subsequently but is not being updated 
correctly for   (was: YARN-1707 added dynamic queues to CapacityScheduler. )

> QueueCapacities not being updated for dynamic ReservationQueue
> --
>
> Key: YARN-5435
> URL: https://issues.apache.org/jira/browse/YARN-5435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5435.v1.patch, YARN-5435.v2.patch
>
>
> YARN-1707 added dynamic queues to CapacityScheduler. The QueueCapacities data 
> structure was added subsequently but is not being updated correctly for 






[jira] [Updated] (YARN-5435) QueueCapacities not being updated for dynamic ReservationQueue

2016-07-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5435:
-
Description: YARN-1707 added dynamic queues to CapacityScheduler.   (was: 
After a reservation queue is allocated, and the plan is synchronized by the 
plan follower, we expect the user am resource limit to reflect this change if 
the appropriate configuration is set.

Instead, the user am resource limit is always the same. As a result, multiple 
AM cannot run in parallel for small reservations.

To reproduce this issue, create a reservation, and submit multiple applications 
against the new reservation.)

> QueueCapacities not being updated for dynamic ReservationQueue
> --
>
> Key: YARN-5435
> URL: https://issues.apache.org/jira/browse/YARN-5435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5435.v1.patch, YARN-5435.v2.patch
>
>
> YARN-1707 added dynamic queues to CapacityScheduler. 






[jira] [Updated] (YARN-5435) QueueCapacities not being updated for dynamic ReservationQueue

2016-07-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5435:
-
Summary: QueueCapacities not being updated for dynamic ReservationQueue  
(was:  [Regression]User AM resource limit does not get updated for 
ReservationQueue)

> QueueCapacities not being updated for dynamic ReservationQueue
> --
>
> Key: YARN-5435
> URL: https://issues.apache.org/jira/browse/YARN-5435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5435.v1.patch, YARN-5435.v2.patch
>
>
> After a reservation queue is allocated, and the plan is synchronized by the 
> plan follower, we expect the user am resource limit to reflect this change if 
> the appropriate configuration is set.
> Instead, the user am resource limit is always the same. As a result, multiple 
> AM cannot run in parallel for small reservations.
> To reproduce this issue, create a reservation, and submit multiple 
> applications against the new reservation.






[jira] [Commented] (YARN-5436) Race in AsyncDispatcher can cause random test failures in Tez (probably YARN also)

2016-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398491#comment-15398491
 ] 

Hudson commented on YARN-5436:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10175 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10175/])
YARN-5436. Race in AsyncDispatcher can cause random test failures in Tez 
(gtcarrera9: rev 7086fc72eebc41fd174d91839ed703c014aac920)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/event/AsyncDispatcher.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/event/DrainDispatcher.java


> Race in AsyncDispatcher can cause random test failures in Tez (probably YARN 
> also)
> --
>
> Key: YARN-5436
> URL: https://issues.apache.org/jira/browse/YARN-5436
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5436.1.patch, YARN-5436.2.patch, YARN-5436.3.patch, 
> YARN-5436.4.patch
>
>
> In YARN-2264, a race in DrainDispatcher was fixed. Unfortunately, the same race 
> also exists in AsyncDispatcher (this was found and ignored in YARN-3878 but 
> never documented...). In YARN-2991, another DrainDispatcher bug was fixed by 
> letting DrainDispatcher reuse an AsyncDispatcher method, because AsyncDispatcher 
> doesn't have that issue. However, this effectively undid the YARN-2264 fix, and 
> a similar race now reappears in Tez unit tests (and probably in YARN unit tests 
> as well).
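
As an illustration only, below is a minimal, self-contained sketch of this kind of 
drain race (hypothetical names, not the actual AsyncDispatcher/DrainDispatcher 
code): a drained check that looks only at the event queue can return true while 
the dispatch thread is still handling the event it just removed, which is what 
makes tests that wait on it flaky.

{code}
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of the race; not the real dispatcher classes.
public class DrainRaceSketch {
  private final Object lock = new Object();
  private final Queue<Runnable> eventQueue = new ArrayDeque<>();
  private boolean handling;            // true while an event is being processed

  public void dispatch(Runnable event) {
    synchronized (lock) {
      eventQueue.add(event);
    }
  }

  /** Dispatch loop, run by a single background thread. */
  public void dispatchLoop() throws InterruptedException {
    while (!Thread.currentThread().isInterrupted()) {
      Runnable event;
      synchronized (lock) {
        event = eventQueue.poll();
        handling = (event != null);    // mark in-flight before the queue looks empty
      }
      if (event == null) {
        Thread.sleep(1);
        continue;
      }
      event.run();                     // the queue is already empty at this point
      synchronized (lock) {
        handling = false;
      }
    }
  }

  /** Racy check: can return true while the last event is still being handled. */
  public boolean isDrainedNaive() {
    synchronized (lock) {
      return eventQueue.isEmpty();
    }
  }

  /** Fixed check: also requires the dispatch thread to be idle. */
  public boolean isDrained() {
    synchronized (lock) {
      return eventQueue.isEmpty() && !handling;
    }
  }
}
{code}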






[jira] [Updated] (YARN-5435) [Regression]User AM resource limit does not get updated for ReservationQueue

2016-07-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5435:
-
Summary:  [Regression]User AM resource limit does not get updated for 
ReservationQueue  (was:  User AM resource limit does not get updated for 
ReservationQueue after execution of plan-follower.)

>  [Regression]User AM resource limit does not get updated for ReservationQueue
> -
>
> Key: YARN-5435
> URL: https://issues.apache.org/jira/browse/YARN-5435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5435.v1.patch, YARN-5435.v2.patch
>
>
> After a reservation queue is allocated, and the plan is synchronized by the 
> plan follower, we expect the user am resource limit to reflect this change if 
> the appropriate configuration is set.
> Instead, the user am resource limit is always the same. As a result, multiple 
> AM cannot run in parallel for small reservations.
> To reproduce this issue, create a reservation, and submit multiple 
> applications against the new reservation.






[jira] [Commented] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-07-28 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398477#comment-15398477
 ] 

Konstantinos Karanasos commented on YARN-5113:
--

Most failing tests are unrelated.
The only related one is {{TestDistributedScheduling}}, which nevertheless passes 
when run locally.

> Refactoring and other clean-up for distributed scheduling
> -
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch, YARN-5113.002.patch, 
> YARN-5113.003.patch, YARN-5113.004.patch, YARN-5113.005.patch, 
> YARN-5113.006.patch, YARN-5113.007.patch, YARN-5113.008.patch
>
>
> This JIRA focuses on the refactoring of classes related to Distributed 
> Scheduling






[jira] [Commented] (YARN-5429) Fix @return related javadoc warnings in yarn-api

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398467#comment-15398467
 ] 

Hadoop QA commented on YARN-5429:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api: 
The patch generated 0 new + 100 unchanged - 1 fixed = 100 total (was 101) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 12s 
{color} | {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 13s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 49s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820834/YARN-5429.03.patch |
| JIRA Issue | YARN-5429 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fbe8ce1daf1a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d9aae22 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12551/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12551/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12551/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Fix @return related javadoc warnings in yarn-api
> 
>
> Key: YARN-5429
> URL: https://issues.apache.org/jira/browse/YARN-5429
>  

[jira] [Updated] (YARN-5435) User AM resource limit does not get updated for ReservationQueue after execution of plan-follower.

2016-07-28 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5435:
--
Attachment: YARN-5435.v2.patch

YARN-5435.v2.patch fixes the checkstyle issues and all of the test failures.

>  User AM resource limit does not get updated for ReservationQueue after 
> execution of plan-follower.
> ---
>
> Key: YARN-5435
> URL: https://issues.apache.org/jira/browse/YARN-5435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5435.v1.patch, YARN-5435.v2.patch
>
>
> After a reservation queue is allocated, and the plan is synchronized by the 
> plan follower, we expect the user am resource limit to reflect this change if 
> the appropriate configuration is set.
> Instead, the user am resource limit is always the same. As a result, multiple 
> AM cannot run in parallel for small reservations.
> To reproduce this issue, create a reservation, and submit multiple 
> applications against the new reservation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4888) Changes in RM container allocation for identifying resource-requests explicitly

2016-07-28 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398455#comment-15398455
 ] 

Subru Krishnan edited comment on YARN-4888 at 7/28/16 11:58 PM:


Clean patch (*v2*) with new tests for checking allocation request ids for both 
{{Capacity/FairScheduler}} post rebase with YARN-5392.

The patch is complete and covers e2e scenarios except for:
  1. Having multiple node/rack local requests for same *Priority* by using 
different *AllocationRequestIds*. This is essentially the remaining work for 
YARN-314.
  2. Recreating *AllocationRequestId* for recovered *Containers* if RM fails 
over. This might cause an issue only if RM fails over before notifying the AM 
about newly allocated container(s). I have created YARN-5447 to track this.



was (Author: subru):
Clean patch with new tests for checking allocation request ids for both 
{{Capacity/FairScheduler}} post rebase with YARN-5392.

The patch is complete and covers e2e scenarios except for:
  1. Having multiple node/rack local requests for same *Priority* by using 
different *AllocationRequestIds*. This is essentially the remaining work for 
YARN-314.
  2. Recreating *AllocationRequestId* for recovered *Containers* if RM fails 
over. This might cause an issue only if RM fails over before notifying the AM 
about newly allocated container(s). I have created YARN-5447 to track this.


> Changes in RM container allocation for identifying resource-requests 
> explicitly
> ---
>
> Key: YARN-4888
> URL: https://issues.apache.org/jira/browse/YARN-4888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4888-WIP.patch, YARN-4888-v0.patch, 
> YARN-4888-v2.patch, YARN-4888.001.patch
>
>
> YARN-4879 puts forward the notion of identifying allocate requests 
> explicitly. This JIRA is to track the changes in RM app scheduling data 
> structures to accomplish it. Please refer to the design doc in the parent 
> JIRA for details.






[jira] [Updated] (YARN-5447) Consider including allocationRequestId in NMContainerStatus to allow recovery in case of RM failover

2016-07-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5447:
-
Description: We have added a mapping of the allocated container to the 
original request through YARN-4887/YARN-4888. There is a corner case in which 
the mapping will be lost, i.e. if RM fails over before notifying the AM about 
newly allocated container(s). This JIRA tracks the changes required to include 
the allocationRequestId in NMContainerStatus to allow recovery in case of RM 
failover.  (was: We have added a mapping of the allocated container to the 
original request through YARN-4887/YARN-4888. This JIRA tracks the changes 
required to include the allocationRequestId in NMContainerStatus to allow 
recovery in case of RM failover.)
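
As a rough illustration of the proposed recovery path (hypothetical types only; 
NMContainerStatus does not carry this field yet), if the container status an NM 
reports on re-registration included the allocationRequestId, the RM could rebuild 
the container-to-request mapping after a failover instead of losing it:

{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the recovery step; not the real RM recovery code.
public class RecoverAllocationIdsSketch {

  /** Stand-in for NMContainerStatus extended with the proposed field. */
  static class ReportedContainerStatus {
    final String containerId;
    final long allocationRequestId;    // the field this JIRA proposes to include

    ReportedContainerStatus(String containerId, long allocationRequestId) {
      this.containerId = containerId;
      this.allocationRequestId = allocationRequestId;
    }
  }

  /** On NM re-registration, rebuild the mapping that would otherwise be lost. */
  static Map<String, Long> recoverMapping(List<ReportedContainerStatus> reported) {
    Map<String, Long> containerToRequest = new HashMap<>();
    for (ReportedContainerStatus status : reported) {
      containerToRequest.put(status.containerId, status.allocationRequestId);
    }
    return containerToRequest;
  }
}
{code}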

> Consider including allocationRequestId in NMContainerStatus to allow recovery 
> in case of RM failover
> 
>
> Key: YARN-5447
> URL: https://issues.apache.org/jira/browse/YARN-5447
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: applications, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
>
> We have added a mapping of the allocated container to the original request 
> through YARN-4887/YARN-4888. There is a corner case in which the mapping will 
> be lost, i.e. if RM fails over before notifying the AM about newly allocated 
> container(s). This JIRA tracks the changes required to include the 
> allocationRequestId in NMContainerStatus to allow recovery in case of RM 
> failover.






[jira] [Updated] (YARN-4888) Changes in RM container allocation for identifying resource-requests explicitly

2016-07-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-4888:
-
Summary: Changes in RM container allocation for identifying 
resource-requests explicitly  (was: Changes in RM AppSchedulingInfo for 
identifying resource-requests explicitly)

> Changes in RM container allocation for identifying resource-requests 
> explicitly
> ---
>
> Key: YARN-4888
> URL: https://issues.apache.org/jira/browse/YARN-4888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4888-WIP.patch, YARN-4888-v0.patch, 
> YARN-4888-v2.patch, YARN-4888.001.patch
>
>
> YARN-4879 puts forward the notion of identifying allocate requests 
> explicitly. This JIRA is to track the changes in RM app scheduling data 
> structures to accomplish it. Please refer to the design doc in the parent 
> JIRA for details.






[jira] [Updated] (YARN-4888) Changes in RM AppSchedulingInfo for identifying resource-requests explicitly

2016-07-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-4888:
-
Attachment: YARN-4888-v2.patch

Clean patch with new tests for checking allocation request ids for both 
{{Capacity/FairScheduler}} post rebase with YARN-5392.

The patch is complete and covers e2e scenarios except for:
  1. Having multiple node/rack local requests for same *Priority* by using 
different *AllocationRequestIds*. This is essentially the remaining work for 
YARN-314.
  2. Recreating *AllocationRequestId* for recovered *Containers* if RM fails 
over. This might cause an issue only if RM fails over before notifying the AM 
about newly allocated container(s). I have created YARN-5447 to track this.
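
Below is a hedged sketch of the end-to-end flow these patches enable. It assumes 
the accessors proposed here (a setAllocationRequestId on {{ResourceRequest}} and a 
matching getAllocationRequestId on the allocated {{Container}}) land with these 
names; the exact API in the committed patch may differ.

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.yarn.api.records.Container;
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

// Assumed API: the allocation-request-id accessors below are what this JIRA
// family proposes and may not match the committed signatures exactly.
public class AllocationRequestIdSketch {

  /** Build two otherwise identical requests the AM can later tell apart. */
  static Map<Long, ResourceRequest> buildRequests() {
    Map<Long, ResourceRequest> byId = new HashMap<>();
    for (long allocationRequestId : new long[] {1L, 2L}) {
      ResourceRequest request = ResourceRequest.newInstance(
          Priority.newInstance(1), ResourceRequest.ANY,
          Resource.newInstance(1024, 1), 1);
      request.setAllocationRequestId(allocationRequestId);
      byId.put(allocationRequestId, request);
    }
    return byId;
  }

  /** When a container is allocated, map it back to the request that asked for it. */
  static ResourceRequest originalRequest(Map<Long, ResourceRequest> byId,
      Container allocated) {
    return byId.get(allocated.getAllocationRequestId());
  }
}
{code}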


> Changes in RM AppSchedulingInfo for identifying resource-requests explicitly
> 
>
> Key: YARN-4888
> URL: https://issues.apache.org/jira/browse/YARN-4888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4888-WIP.patch, YARN-4888-v0.patch, 
> YARN-4888-v2.patch, YARN-4888.001.patch
>
>
> YARN-4879 puts forward the notion of identifying allocate requests 
> explicitly. This JIRA is to track the changes in RM app scheduling data 
> structures to accomplish it. Please refer to the design doc in the parent 
> JIRA for details.






[jira] [Created] (YARN-5447) Consider including allocationRequestId in NMContainerStatus to allow recovery in case of RM failover

2016-07-28 Thread Subru Krishnan (JIRA)
Subru Krishnan created YARN-5447:


 Summary: Consider including allocationRequestId in 
NMContainerStatus to allow recovery in case of RM failover
 Key: YARN-5447
 URL: https://issues.apache.org/jira/browse/YARN-5447
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Subru Krishnan
Assignee: Subru Krishnan


We have added a mapping of the allocated container to the original request 
through YARN-4887/YARN-4888. This JIRA tracks the changes required to include 
the allocationRequestId in NMContainerStatus to allow recovery in case of RM 
failover.






[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398453#comment-15398453
 ] 

Hadoop QA commented on YARN-5121:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 51s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 58s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 7m 27s {color} | 
{color:red} root generated 1 new + 8 unchanged - 2 fixed = 9 total (was 10) 
{color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 56s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 112m 2s {color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 176m 25s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestErasureCodeBenchmarkThroughput |
|   | hadoop.hdfs.server.namenode.TestNameNodeMetadataConsistency |
|   | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
|   | hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820790/YARN-5121.06.patch |
| JIRA Issue | YARN-5121 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux 489bb4ed891f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a3d0cba |
| Default Java | 1.8.0_101 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12547/artifact/patchprocess/diff-compile-cc-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12547/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12547/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12547/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/12547/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 . U: . |
| Console output | 

[jira] [Updated] (YARN-5436) Race in AsyncDispatcher can cause random test failures in Tez (probably YARN also)

2016-07-28 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5436:

Summary: Race in AsyncDispatcher can cause random test failures in Tez 
(probably YARN also)  (was: Race in AsyncDispatcher can cause random test 
failures in Tez(probably YARN also ))

> Race in AsyncDispatcher can cause random test failures in Tez (probably YARN 
> also)
> --
>
> Key: YARN-5436
> URL: https://issues.apache.org/jira/browse/YARN-5436
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
> Attachments: YARN-5436.1.patch, YARN-5436.2.patch, YARN-5436.3.patch, 
> YARN-5436.4.patch
>
>
> In YARN-2264, a race in DrainDispatcher was fixed. Unfortunately, the same race 
> also exists in AsyncDispatcher (this was found and ignored in YARN-3878 but 
> never documented...). In YARN-2991, another DrainDispatcher bug was fixed by 
> letting DrainDispatcher reuse an AsyncDispatcher method, because AsyncDispatcher 
> doesn't have that issue. However, this effectively undid the YARN-2264 fix, and 
> a similar race now reappears in Tez unit tests (and probably in YARN unit tests 
> as well).






[jira] [Commented] (YARN-5436) Race in AsyncDispatcher can cause random test failures in Tez(probably YARN also )

2016-07-28 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398440#comment-15398440
 ] 

Li Lu commented on YARN-5436:
-

Will commit this patch shortly. 

> Race in AsyncDispatcher can cause random test failures in Tez(probably YARN 
> also )
> --
>
> Key: YARN-5436
> URL: https://issues.apache.org/jira/browse/YARN-5436
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
> Attachments: YARN-5436.1.patch, YARN-5436.2.patch, YARN-5436.3.patch, 
> YARN-5436.4.patch
>
>
> In YARN-2264, a race in DrainDispatcher was fixed. Unfortunately, the same race 
> also exists in AsyncDispatcher (this was found and ignored in YARN-3878 but 
> never documented...). In YARN-2991, another DrainDispatcher bug was fixed by 
> letting DrainDispatcher reuse an AsyncDispatcher method, because AsyncDispatcher 
> doesn't have that issue. However, this effectively undid the YARN-2264 fix, and 
> a similar race now reappears in Tez unit tests (and probably in YARN unit tests 
> as well).






[jira] [Commented] (YARN-5203) Return ResourceRequest JAXB object in ResourceManager Cluster Applications REST API

2016-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398433#comment-15398433
 ] 

Hudson commented on YARN-5203:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10173 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10173/])
YARN-5203.Return ResourceRequest JAXB object in ResourceManager Cluster (subru: 
rev 4e756d72719ec3c6d64a1e3daccbc0b8e8de998c)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ResourceInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesApps.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockRM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ResourceRequestInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AppInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ExecutionTypeRequestInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMAppAttemptBlock.java


> Return ResourceRequest JAXB object in ResourceManager Cluster Applications 
> REST API
> ---
>
> Key: YARN-5203
> URL: https://issues.apache.org/jira/browse/YARN-5203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
>  Labels: api-breaking, bug, incompatible
> Fix For: 2.9.0
>
> Attachments: YARN-5203.v0.patch, YARN-5203.v1.patch, 
> YARN-5203.v2.patch, YARN-5203.v3.patch, YARN-5203.v4.patch, YARN-5203.v5.patch
>
>
> The ResourceManager Cluster Applications REST API returns {{ResourceRequest}} 
> as String rather than a JAXB object. This prevents downstream tools like 
> Federation Router (YARN-3659) that depend on the REST API to unmarshall the 
> {{AppInfo}}. This JIRA proposes updating {{AppInfo}} to return a JAXB version 
> of the {{ResourceRequest}}.
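
As a minimal illustration of the difference (hypothetical DAO names, not the 
actual AppInfo/ResourceRequestInfo classes), a String field marshals as an opaque 
blob that clients such as the Federation Router would have to parse by hand, 
whereas a nested JAXB type gives them structured elements to unmarshal:

{code}
import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical DAO sketch; not the real webapp dao classes.
@XmlRootElement(name = "app")
@XmlAccessorType(XmlAccessType.FIELD)
public class AppInfoSketch {

  // Before: callers receive ResourceRequest.toString() and must parse it themselves.
  @XmlElement
  private String resourceRequests;

  // After: callers can unmarshal the nested elements directly.
  @XmlElement(name = "resourceRequest")
  private List<ResourceRequestSketch> resourceRequestList;

  @XmlAccessorType(XmlAccessType.FIELD)
  public static class ResourceRequestSketch {
    @XmlElement private int priority;
    @XmlElement private String resourceName;
    @XmlElement private long memory;
    @XmlElement private int vCores;
    @XmlElement private int numContainers;
  }
}
{code}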






[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398430#comment-15398430
 ] 

Hadoop QA commented on YARN-5382:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 19s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 9 new + 214 unchanged - 1 fixed = 223 total (was 215) 
{color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 1 new + 963 unchanged - 0 fixed = 964 total (was 963) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 18s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 40s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820819/YARN-5382.06.patch |
| JIRA Issue | YARN-5382 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 59ee83361456 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ce3d68e |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-YARN-Build/12550/artifact/patchprocess/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/12550/artifact/patchprocess/patch-compile-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javac | 

[jira] [Updated] (YARN-5429) Fix @return related javadoc warnings in yarn-api

2016-07-28 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5429:
-
Attachment: YARN-5429.03.patch

> Fix @return related javadoc warnings in yarn-api
> 
>
> Key: YARN-5429
> URL: https://issues.apache.org/jira/browse/YARN-5429
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-5429.01.patch, YARN-5429.02.patch, 
> YARN-5429.03.patch
>
>
> As part of YARN-4977, filing a subtask to fix a subset of the javadoc 
> warnings in yarn-api.






[jira] [Updated] (YARN-5429) Fix @return related javadoc warnings in yarn-api

2016-07-28 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5429:
-
Attachment: YARN-5429.02.patch

Thanks [~templedf], uploading patch v3 with changes to classnames included with 
"{@link .. }" and updated the other javadoc if/else ones with commas. 
 

> Fix @return related javadoc warnings in yarn-api
> 
>
> Key: YARN-5429
> URL: https://issues.apache.org/jira/browse/YARN-5429
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-5429.01.patch, YARN-5429.02.patch
>
>
> As part of YARN-4977, filing a subtask to fix a subset of the javadoc 
> warnings in yarn-api.






[jira] [Comment Edited] (YARN-5429) Fix @return related javadoc warnings in yarn-api

2016-07-28 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398427#comment-15398427
 ] 

Vrushali C edited comment on YARN-5429 at 7/28/16 11:37 PM:


Thanks [~templedf], uploading patch v3 with changes to classnames included with 
{code}  {@link .. } {code} and updated the other javadoc if/else ones with 
commas. 
 


was (Author: vrushalic):
Thanks [~templedf], uploading patch v3 with changes to classnames included with 
"{@link .. }" and updated the other javadoc if/else ones with commas. 
 

> Fix @return related javadoc warnings in yarn-api
> 
>
> Key: YARN-5429
> URL: https://issues.apache.org/jira/browse/YARN-5429
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-5429.01.patch, YARN-5429.02.patch
>
>
> As part of YARN-4977, filing a subtask to fix a subset of the javadoc 
> warnings in yarn-api.






[jira] [Updated] (YARN-5429) Fix @return related javadoc warnings in yarn-api

2016-07-28 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5429:
-
Attachment: (was: YARN-5429.02.patch)

> Fix @return related javadoc warnings in yarn-api
> 
>
> Key: YARN-5429
> URL: https://issues.apache.org/jira/browse/YARN-5429
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
> Attachments: YARN-5429.01.patch, YARN-5429.02.patch
>
>
> As part of YARN-4977, filing a subtask to fix a subset of the javadoc 
> warnings in yarn-api.






[jira] [Issue Comment Deleted] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5121:
---
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 7s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
6s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 49s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 3m 49s {color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 49s {color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 42s {color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 170m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820784/YARN-5121.05.patch |
| JIRA Issue | YARN-5121 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux c327eee68124 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 328c855 |
| Default Java | 1.8.0_101 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/12546/artifact/patchprocess/patch-compile-root.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12546/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/12546/artifact/patchprocess/patch-compile-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12546/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12546/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12546/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/12546/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: 

[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398410#comment-15398410
 ] 

Hadoop QA commented on YARN-5121:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 7s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 
6s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 49s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 3m 49s {color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 49s {color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 42s {color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 170m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestFileChecksum |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820784/YARN-5121.05.patch |
| JIRA Issue | YARN-5121 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux c327eee68124 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 328c855 |
| Default Java | 1.8.0_101 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/12546/artifact/patchprocess/patch-compile-root.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12546/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/12546/artifact/patchprocess/patch-compile-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12546/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12546/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12546/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/12546/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: 

[jira] [Commented] (YARN-5369) Improve Yarn logs command to get container logs based on Node Id

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398406#comment-15398406
 ] 

Hadoop QA commented on YARN-5369:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 50s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 36s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 43s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 6 
new + 84 unchanged - 1 fixed = 90 total (was 85) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 46s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 39s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 25s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 42s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client |
|  |  Exception is caught when Exception is not thrown in 
org.apache.hadoop.yarn.client.cli.LogsCLI.fetchContainerLogsFromSpecificNM(ContainerLogsRequest,
 LogCLIHelpers)  At LogsCLI.java:is not thrown in 
org.apache.hadoop.yarn.client.cli.LogsCLI.fetchContainerLogsFromSpecificNM(ContainerLogsRequest,
 LogCLIHelpers)  At LogsCLI.java:[line 1201] |
| Failed junit tests | hadoop.yarn.client.api.impl.TestYarnClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820794/YARN-5369.3.patch |
| JIRA Issue | YARN-5369 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux f74441001a0b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Comment Edited] (YARN-5203) Return ResourceRequest JAXB object in ResourceManager Cluster Applications REST API

2016-07-28 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398386#comment-15398386
 ] 

Subru Krishnan edited comment on YARN-5203 at 7/28/16 11:13 PM:


I just committed this to trunk/branch-2.

Congrats [~ellenfkh] on your first patch :)!

Thanks to [~sunilg] for the thorough vetting and [~vvasudev] for his input.


was (Author: subru):
I just committed this to trunk/branch-2.

Congrats [~ellenfkh] on your first patch :)!

Thanks to [~sunilg] for the thorough vetting and [~vvasudev] for his input.

> Return ResourceRequest JAXB object in ResourceManager Cluster Applications 
> REST API
> ---
>
> Key: YARN-5203
> URL: https://issues.apache.org/jira/browse/YARN-5203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
>  Labels: api-breaking, bug, incompatible
> Fix For: 2.9.0
>
> Attachments: YARN-5203.v0.patch, YARN-5203.v1.patch, 
> YARN-5203.v2.patch, YARN-5203.v3.patch, YARN-5203.v4.patch, YARN-5203.v5.patch
>
>
> The ResourceManager Cluster Applications REST API returns {{ResourceRequest}} 
> as String rather than a JAXB object. This prevents downstream tools like 
> Federation Router (YARN-3659) that depend on the REST API to unmarshall the 
> {{AppInfo}}. This JIRA proposes updating {{AppInfo}} to return a JAXB version 
> of the {{ResourceRequest}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5203) Return ResourceRequest JAXB object in ResourceManager Cluster Applications REST API

2016-07-28 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398386#comment-15398386
 ] 

Subru Krishnan edited comment on YARN-5203 at 7/28/16 11:12 PM:


I just committed this to trunk/branch-2.

Congrats [~ellenfkh] on your first patch :)!

Thanks to [~sunilg] for the thorough vetting and [~vvasudev] for his input.


was (Author: subru):
I just committed this to trunk/branch-2.

Congrats [~ellenfkh] on your first patch :)!

Thanks to [~sunilg] for the thorough vetting and [~vvasudev] for his input.

> Return ResourceRequest JAXB object in ResourceManager Cluster Applications 
> REST API
> ---
>
> Key: YARN-5203
> URL: https://issues.apache.org/jira/browse/YARN-5203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
>  Labels: api-breaking, bug, incompatible
> Fix For: 2.9.0
>
> Attachments: YARN-5203.v0.patch, YARN-5203.v1.patch, 
> YARN-5203.v2.patch, YARN-5203.v3.patch, YARN-5203.v4.patch, YARN-5203.v5.patch
>
>
> The ResourceManager Cluster Applications REST API returns {{ResourceRequest}} 
> as String rather than a JAXB object. This prevents downstream tools like 
> Federation Router (YARN-3659) that depend on the REST API to unmarshall the 
> {{AppInfo}}. This JIRA proposes updating {{AppInfo}} to return a JAXB version 
> of the {{ResourceRequest}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5382) RM does not audit log kill request for active applications

2016-07-28 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5382:
-
Attachment: YARN-5382.06.patch

Uploading patch v6 (against trunk). 

- Modified the name of the new Event class to be RMAppKillByClientLogEvent
- Changed the IP address parameter to be of type InetAddress instead of String
- Updated the unit tests in TestRMAppTransitions and added more to the tests in 
TestRMAuditLogger to include these changes


> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, 
> YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch, 
> YARN-5382.06.patch
>
>
> ClientRMService will audit a kill request but only if it either fails to 
> issue the kill or if the kill is sent to an already finished application.  It 
> does not create a log entry when the application is active which is arguably 
> the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5438) TimelineClientImpl leaking FileSystem Instances causing Long running services like HiveServer2 daemon going OOM

2016-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398376#comment-15398376
 ] 

Hudson commented on YARN-5438:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10172 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10172/])
YARN-5438. TimelineClientImpl leaking FileSystem Instances causing Long (jlowe: 
rev a1890c32c52fed69ec09efad0fccf49ed8c2e21e)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/FileSystemTimelineWriter.java


> TimelineClientImpl leaking FileSystem Instances causing Long running services 
> like HiveServer2 daemon going OOM
> 
>
> Key: YARN-5438
> URL: https://issues.apache.org/jira/browse/YARN-5438
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.8.0, 2.7.3
>Reporter: Karam Singh
>Assignee: Rohith Sharma K S
> Fix For: 2.8.0
>
> Attachments: YARN-5438.0.patch
>
>
> TimelineClientImpl leaks FileSystem instances, causing long-running services 
> like the HiveServer2 daemon to go OOM.
> In org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl, 
> FileSystem.newInstance is invoked and never closed, so FileSystem instances 
> accumulate over time in a long-running client (such as HiveServer2) and 
> eventually cause it to OOM.
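
For context, a minimal sketch of the leak pattern described above and the obvious remedy; this is illustrative only and not the actual FileSystemTimelineWriter code.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TimelineFsLeakSketch {
  // Leaky: every call creates a fresh FileSystem that is never closed, so
  // instances accumulate in a long-running client such as HiveServer2.
  static long leakyLength(Configuration conf, Path p) throws IOException {
    FileSystem fs = FileSystem.newInstance(conf);
    return fs.getFileStatus(p).getLen();
  }

  // Remedy: close the instance when done (or create it once and reuse it).
  static long closedLength(Configuration conf, Path p) throws IOException {
    try (FileSystem fs = FileSystem.newInstance(conf)) {
      return fs.getFileStatus(p).getLen();
    }
  }
}
{code}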



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4097) Create POC timeline web UI with new YARN web UI framework

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398359#comment-15398359
 ] 

Hadoop QA commented on YARN-4097:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s {color} 
| {color:red} YARN-4097 does not apply to trunk. Rebase required? Wrong Branch? 
See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820815/YARN-4097-POC.patch |
| JIRA Issue | YARN-4097 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12549/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Create POC timeline web UI with new YARN web UI framework
> -
>
> Key: YARN-4097
> URL: https://issues.apache.org/jira/browse/YARN-4097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: YARN-5355
> Attachments: Screen Shot 2016-02-24 at 15.57.38.png, Screen Shot 
> 2016-02-24 at 15.57.53.png, Screen Shot 2016-02-24 at 15.58.08.png, Screen 
> Shot 2016-02-24 at 15.58.26.png, YARN-4097-POC.patch, YARN-4097-bugfix.patch
>
>
> As planned, we need to try out the new YARN web UI framework and implement 
> timeline v2 web UI on top of it. This JIRA proposes to build the basic active 
> flow and application lists of the timeline data. We can add more content 
> after we get used to this framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4097) Create POC timeline web UI with new YARN web UI framework

2016-07-28 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-4097:

Attachment: YARN-4097-POC.patch

Rebased the current POC patch against the latest YARN-3368 branch. This patch 
integrates [~varun_saxena]'s sunburst chart contributed in YARN-4097. 

> Create POC timeline web UI with new YARN web UI framework
> -
>
> Key: YARN-4097
> URL: https://issues.apache.org/jira/browse/YARN-4097
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Li Lu
>Assignee: Li Lu
>  Labels: YARN-5355
> Attachments: Screen Shot 2016-02-24 at 15.57.38.png, Screen Shot 
> 2016-02-24 at 15.57.53.png, Screen Shot 2016-02-24 at 15.58.08.png, Screen 
> Shot 2016-02-24 at 15.58.26.png, YARN-4097-POC.patch, YARN-4097-bugfix.patch
>
>
> As planned, we need to try out the new YARN web UI framework and implement 
> timeline v2 web UI on top of it. This JIRA proposes to build the basic active 
> flow and application lists of the timeline data. We can add more content 
> after we get used to this framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4280) CapacityScheduler reservations may not prevent indefinite postponement on a busy cluster

2016-07-28 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398328#comment-15398328
 ] 

Jason Lowe commented on YARN-4280:
--

Thanks for updating the patch!  The parent queue code is now significantly 
cleaner.

The new CSAssignment copy constructor is no longer used, which made me wonder 
if we missed a case.  Consider a scenario where a parent queue has at least two 
child queues. Trying to assign to one queue returns an amount to be blocked, so 
the parent limits are adjusted.  Then trying to subsequently assign to another 
child queue returns yet another blocked result, but we will end up returning 
the second child queue's blocked amount to the parent's parent, which ignores 
the amount needed by the first (and higher priority at the moment) child queue.


> CapacityScheduler reservations may not prevent indefinite postponement on a 
> busy cluster
> 
>
> Key: YARN-4280
> URL: https://issues.apache.org/jira/browse/YARN-4280
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler
>Affects Versions: 2.6.1, 2.8.0, 2.7.1
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
> Attachments: YARN-4280-branch-2.009.patch, 
> YARN-4280-branch-2.8.001.patch, YARN-4280-branch-2.8.002.patch, 
> YARN-4280-branch-2.8.003.patch, YARN-4280.001.patch, YARN-4280.002.patch, 
> YARN-4280.003.patch, YARN-4280.004.patch, YARN-4280.005.patch, 
> YARN-4280.006.patch, YARN-4280.007.patch, YARN-4280.008.patch, 
> YARN-4280.008_.patch, YARN-4280.009.patch, YARN-4280.010.patch, 
> YARN-4280.011.patch, YARN-4280.012.patch, YARN-4280.013.patch
>
>
> Consider the following scenario:
> There are 2 queues, A (25% of the total capacity) and B (75%), and both can run at 
> total cluster capacity. There are 2 applications: appX runs on queue A, 
> always asking for 1 GB containers (non-AM), and appY runs on queue B, asking for 2 
> GB containers.
> The user limit is high enough for an application to reach 100% of the 
> cluster resources. 
> appX is running at total cluster capacity, full of 1 GB containers, releasing 
> only one container at a time. appY comes in with a request for a 2 GB container 
> but only 1 GB is free. Ideally, since appY is in the underserved queue, it 
> has higher priority and should reserve for its 2 GB request. Since this 
> request puts alloc+reserve above the total capacity of the cluster, the 
> reservation is not made. appX comes in with a 1 GB request and, since 1 GB is 
> still available, the request is allocated. 
> This can continue indefinitely, causing priority inversion.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5203) Return ResourceRequest JAXB object in ResourceManager Cluster Applications REST API

2016-07-28 Thread Ellen Hui (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398258#comment-15398258
 ] 

Ellen Hui commented on YARN-5203:
-

All of these unit tests pass locally.

> Return ResourceRequest JAXB object in ResourceManager Cluster Applications 
> REST API
> ---
>
> Key: YARN-5203
> URL: https://issues.apache.org/jira/browse/YARN-5203
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
>  Labels: api-breaking, bug, incompatible
> Attachments: YARN-5203.v0.patch, YARN-5203.v1.patch, 
> YARN-5203.v2.patch, YARN-5203.v3.patch, YARN-5203.v4.patch, YARN-5203.v5.patch
>
>
> The ResourceManager Cluster Applications REST API returns {{ResourceRequest}} 
> as String rather than a JAXB object. This prevents downstream tools like 
> Federation Router (YARN-3659) that depend on the REST API to unmarshall the 
> {{AppInfo}}. This JIRA proposes updating {{AppInfo}} to return a JAXB version 
> of the {{ResourceRequest}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398239#comment-15398239
 ] 

Hadoop QA commented on YARN-5113:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 57s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 10s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 39s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 40s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 6 
new + 367 unchanged - 45 fixed = 373 total (was 412) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 0 new + 159 unchanged - 4 fixed = 159 total (was 163) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 0 new + 248 unchanged - 7 fixed = 248 total (was 255) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 15s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 25s 
{color} | {color:green} 

[jira] [Updated] (YARN-5446) Find a bug when clicking hadoop icon(in the left top corner) in cluster overview page

2016-07-28 Thread Chen Ge (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Ge updated YARN-5446:
--
Attachment: screenshot-1.png

> Find a bug when clicking hadoop icon(in the left top corner) in cluster 
> overview page
> -
>
> Key: YARN-5446
> URL: https://issues.apache.org/jira/browse/YARN-5446
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Chen Ge
> Attachments: screenshot-1.png
>
>
> Under the latest YARN-3368 branch, when the user clicks the Hadoop icon on the 
> cluster overview page, "Cluster Resource Usage By Applications" is not 
> displayed correctly.
> The following is the error observed in the browser console:
> {code}
> donut-chart.js:110 Uncaught TypeError: Cannot read property 'value' of 
> undefined(anonymous function) @ 
> donut-chart.js:110arguments.length.each.value.function.value.textContent @ 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5446) Find a bug when clicking hadoop icon(in the left top corner) in cluster overview page

2016-07-28 Thread Chen Ge (JIRA)
Chen Ge created YARN-5446:
-

 Summary: Find a bug when clicking hadoop icon(in the left top 
corner) in cluster overview page
 Key: YARN-5446
 URL: https://issues.apache.org/jira/browse/YARN-5446
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Chen Ge


Under the latest YARN-3368 branch, when the user clicks the Hadoop icon on the 
cluster overview page, "Cluster Resource Usage By Applications" is not displayed 
correctly.

The following is the error observed in the browser console:

{code}
donut-chart.js:110 Uncaught TypeError: Cannot read property 'value' of 
undefined(anonymous function) @ 
donut-chart.js:110arguments.length.each.value.function.value.textContent @ 
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5369) Improve Yarn logs command to get container logs based on Node Id

2016-07-28 Thread Xuan Gong (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuan Gong updated YARN-5369:

Attachment: YARN-5369.3.patch

> Improve Yarn logs command to get container logs based on Node Id
> 
>
> Key: YARN-5369
> URL: https://issues.apache.org/jira/browse/YARN-5369
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5369.1.patch, YARN-5369.2.patch, YARN-5369.3.patch
>
>
> It would be helpful if we could have yarn logs --applicationId appId --nodeAddress 
> ${nodeId} to get the logs of all containers that ran on a specific NM.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398183#comment-15398183
 ] 

Allen Wittenauer edited comment on YARN-5121 at 7/28/16 9:01 PM:
-

-06:
* ok, let's try this again.
* fixed some gcc warnings as well


was (Author: aw):
-06:
* ok, let's try this again.

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, YARN-5121.06.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5121:
---
Attachment: YARN-5121.06.patch

-06:
* ok, let's try this again.

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, YARN-5121.06.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-07-28 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos reassigned YARN-2995:


Assignee: Konstantinos Karanasos  (was: Marco Rabozzi)

> Enhance UI to show cluster resource utilization of various container types
> --
>
> Key: YARN-2995
> URL: https://issues.apache.org/jira/browse/YARN-2995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sriram Rao
>Assignee: Konstantinos Karanasos
>
> This JIRA proposes to extend the Resource manager UI to show how cluster 
> resources are being used to run *guaranteed start* and *queueable* 
> containers.  For example, a graph that shows over time, the fraction of  
> running containers that are *guaranteed start* and the fraction of running 
> containers that are *queueable*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398159#comment-15398159
 ] 

Allen Wittenauer commented on YARN-5121:


Now I'm seeing different compile errors on Linux. Argh. Pulling back -05.

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5121:
---
Attachment: (was: YARN-5121.05.patch)

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5121:
---
Attachment: YARN-5121.05.patch

-05:
* fix the compile error that for some reason didn't show up on my machine?

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch, YARN-5121.05.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398152#comment-15398152
 ] 

Hadoop QA commented on YARN-5121:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 15s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 42s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
34s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 42s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} cc {color} | {color:red} 3m 42s {color} | 
{color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 3m 42s {color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 47s {color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s 
{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 24s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.ipc.TestIPC |
| Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820770/YARN-5121.04.patch |
| JIRA Issue | YARN-5121 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  cc  |
| uname | Linux 300475205a0c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 26de4f0 |
| Default Java | 1.8.0_101 |
| compile | 
https://builds.apache.org/job/PreCommit-YARN-Build/12543/artifact/patchprocess/patch-compile-root.txt
 |
| cc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12543/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-YARN-Build/12543/artifact/patchprocess/patch-compile-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12543/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12543/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12543/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/12543/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: 

[jira] [Commented] (YARN-5445) Log aggregation configured to different namenode can fail fast

2016-07-28 Thread Chackaravarthy (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398141#comment-15398141
 ] 

Chackaravarthy commented on YARN-5445:
--

Environment : HDP-2.4 (hadoop-2.7.1)

The use case is as follows:

The cluster has 1,200 nodes and runs around 15k jobs per day on average. Keeping 
app logs in the same cluster therefore puts too much pressure on the NN because of 
the small-files problem: around 5 million files are created per day (normal load), 
which means roughly 10 million FS objects just to hold one day of logs. The 
requirement is to retain at least one week of logs, so we decided to move them to 
a different cluster or a different namespace (NN federation).

In these cases we want minimal impact on the jobs running in the cluster if the 
other cluster is completely down (even though it is configured with HA). But 
currently the client makes 15 attempts ({{dfs.client.failover.max.attempts}}) to 
connect to the NN before giving up, adding 2 to 2.5 minutes of latency to each 
container launch (per node manager) and thus hurting overall job completion time.

(Aware of YARN-2942 which is still in progress and MAPREDUCE-6415 is in 2.8.0)

Can we have a new config that is passed through as {{dfs.client.failover.max.attempts}} 
when creating the FileSystem instance in LogAggregationService, so that we can 
configure it to fail fast? Or is there any existing config to handle this case?
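
To make the ask concrete, here is a rough sketch (not existing code) of the kind of override being requested: a hypothetical NM property, whose name is invented here, is copied onto the DFS client setting before the log-aggregation FileSystem is created.

{code}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LogAggregationFsSketch {
  // Creates the remote log-aggregation FileSystem with a capped failover attempt count.
  public static FileSystem createRemoteFs(Configuration nmConf, Path remoteRootLogDir)
      throws IOException {
    Configuration fsConf = new Configuration(nmConf);
    // Hypothetical NM knob; the property name below is made up for this sketch.
    int maxAttempts = nmConf.getInt(
        "yarn.nodemanager.log-aggregation.failover.max-attempts", 2);
    fsConf.setInt("dfs.client.failover.max.attempts", maxAttempts);
    return FileSystem.newInstance(remoteRootLogDir.toUri(), fsConf);
  }
}
{code}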

> Log aggregation configured to different namenode can fail fast
> --
>
> Key: YARN-5445
> URL: https://issues.apache.org/jira/browse/YARN-5445
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Chackaravarthy
>
> Log aggregation is enabled and configured to write app logs to a different 
> cluster or a different namespace (NN federation). In these cases, we would like 
> some config on attempts or retries so we can fail fast if the other 
> cluster is completely down.
> Currently it takes the default {{dfs.client.failover.max.attempts}} of 15, and 
> hence adds a latency of 2 to 2.5 minutes to each container launch (per node 
> manager).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3662) Federation Membership State Store internal APIs

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398127#comment-15398127
 ] 

Hadoop QA commented on YARN-3662:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 39s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
15s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 28s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
42s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 2m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 37s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 9 
new + 26 unchanged - 2 fixed = 35 total (was 28) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 19s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 17s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820778/YARN-3662-YARN-2915-v6.patch
 |
| JIRA Issue | YARN-3662 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  cc  |
| uname | Linux acbe8fef2af4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / 85eda58 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12544/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results 

[jira] [Commented] (YARN-5327) API changes required to support recurring reservations in the YARN ReservationSystem

2016-07-28 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398122#comment-15398122
 ] 

Subru Krishnan commented on YARN-5327:
--

Thanks [~ajsangeetha] for the patch. It mostly looks good; a few minor comments:
  * We need to add more meat to the Javadocs for 
{{ReservationDefinition::get/setPeriodicity}}, such as how we distinguish 
periodic from non-periodic reservations, the implicit priority in the event of 
replanning, consistency of allocation, the good-till-cancelled aspect, etc.
  * Can you add an explicit default value for *period* in {{yarn_protos.proto}}?
  * Maybe it makes sense to add *period* to 
{{ReservationDefinitionPBImpl::toString}}.

> API changes required to support recurring reservations in the YARN 
> ReservationSystem
> 
>
> Key: YARN-5327
> URL: https://issues.apache.org/jira/browse/YARN-5327
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sangeetha Abdu Jyothi
> Attachments: YARN-5327.001.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes needed 
> in ApplicationClientProtocol to accomplish it. Please refer to the design doc 
> in the parent JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5366) Add support for toggling the removal of completed and failed docker containers

2016-07-28 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398121#comment-15398121
 ] 

Shane Kumpf commented on YARN-5366:
---

After doing additional testing here, adding --rm to docker run will not work: 
detached mode and the clean-up-container-on-exit option are not supported 
together. As a result, we'll need a bit more control over the lifecycle.

The high level lifecycle is:

received SIGKILL, SIGQUIT, SIGTERM?
  does container exist?
    is the container running?
      docker stop
    !keep exited container?
      docker rm

To achieve this, we'll need to introduce docker rm and docker inspect. The 
docker rm logic will also be removed from container executor and handled by 
signalContainer.

I'll get a patch up soon that implements the above.
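
For illustration, a minimal Java sketch of the lifecycle above, shelling out to the docker CLI; the class and helper names are hypothetical and this is not the actual patch. It relies on docker stop being a no-op for already-exited containers instead of checking the running state explicitly.

{code}
import java.io.IOException;

public class DockerCleanupSketch {

  // Runs "docker <args...>" and returns its exit code.
  private static int docker(String... args) throws IOException, InterruptedException {
    String[] cmd = new String[args.length + 1];
    cmd[0] = "docker";
    System.arraycopy(args, 0, cmd, 1, args.length);
    return new ProcessBuilder(cmd).inheritIO().start().waitFor();
  }

  // Invoked when the NM delivers SIGTERM/SIGKILL/SIGQUIT for the container.
  public static void signalContainer(String containerId, boolean keepExitedContainer)
      throws IOException, InterruptedException {
    // "docker inspect" exits non-zero if the container does not exist.
    if (docker("inspect", containerId) != 0) {
      return;
    }
    // Stop it if it is still running; stopping an exited container is a no-op.
    docker("stop", containerId);
    // Remove the exited container unless the user asked to keep it.
    if (!keepExitedContainer) {
      docker("rm", containerId);
    }
  }
}
{code}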

> Add support for toggling the removal of completed and failed docker containers
> --
>
> Key: YARN-5366
> URL: https://issues.apache.org/jira/browse/YARN-5366
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>
> Currently, completed and failed docker containers are removed by 
> container-executor. Add a job level environment variable to 
> DockerLinuxContainerRuntime to allow the user to toggle whether they want the 
> container deleted or not and remove the logic from container-executor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5445) Log aggregation configured to different namenode can fail fast

2016-07-28 Thread Chackaravarthy (JIRA)
Chackaravarthy created YARN-5445:


 Summary: Log aggregation configured to different namenode can fail 
fast
 Key: YARN-5445
 URL: https://issues.apache.org/jira/browse/YARN-5445
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Chackaravarthy


Log aggregation is enabled and configured to write app logs to a different cluster 
or a different namespace (NN federation). In these cases, we would like some config 
on attempts or retries so we can fail fast if the other cluster is completely down.

Currently it takes the default {{dfs.client.failover.max.attempts}} of 15, and hence 
adds a latency of 2 to 2.5 minutes to each container launch (per node manager).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5360) Decouple host user and Docker container user

2016-07-28 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398111#comment-15398111
 ] 

Sidharta Seethana commented on YARN-5360:
-

[~tangzhankun], dropping the '--user' works in this case because you are running the 
container processes as root; once the processes run as root, the restrictive 
permissions on launch_container.sh don't matter. An image that is set up to use 
a different arbitrary user will not work. To reiterate, I don't believe it 
makes sense to have different default behavior for secure and non-secure modes; we 
should find a model that works across both. 

> Decouple host user and Docker container user
> 
>
> Key: YARN-5360
> URL: https://issues.apache.org/jira/browse/YARN-5360
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
>
> There is currently *a dependency between the job-submitting user and the user in 
> the Docker image* in LCE. For instance, in order to run the Docker container 
> as the yarn user, we can choose to set 
> "yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user" to yarn 
> and leave 
> "yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users" at its 
> default (true). Then LCE will choose yarn (UID maybe 1001) as the user 
> running jobs.
> LCE will mount the generated launch_container.sh (owned by the running job 
> user) and /etc/passwd (*currently the code mounts it to the container's 
> /etc/password, which I think is a mistake*) into the Docker container and 
> uses the "docker run --user=" option to get it done internally.
> Mounting /etc/passwd into the container is not a good choice because it overrides 
> the original users defined in the Docker image. As far as I know, since Docker v1.8 
> (or maybe earlier), the Docker run command's "--user=" option accepts a UID, and 
> *when passing a UID, the user does not have to exist in the container*. So we 
> could use the UID instead of the user name to construct the Docker run command and 
> eliminate the need to create the same user in the Docker image. This 
> gives LCE the ability to launch any Docker container safely regardless of what 
> users are in it.
> But this is not enough to decouple the host user and the Docker container user. The 
> final solution we are searching for is focused on allowing users to run 
> their Docker images flexibly without involving YARN dependencies, while making 
> sure the container won't bring in security risks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5435) User AM resource limit for does not get updated for ReservationQueue after execution of plan-follower.

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398108#comment-15398108
 ] 

Hadoop QA commented on YARN-5435:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 7 unchanged - 0 fixed = 8 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 12s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.reservation.TestCapacitySchedulerPlanFollower
 |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerDynamicBehavior
 |
|   | hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |
|   | hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820765/YARN-5435.v1.patch |
| JIRA Issue | YARN-5435 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 177e9ff5b2ca 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 26de4f0 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12542/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12542/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit test logs |  

[jira] [Updated] (YARN-3662) Federation Membership State Store internal APIs

2016-07-28 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3662:
-
Attachment: YARN-3662-YARN-2915-v6.patch

Thanks [~vinodkv] for reviewing the patch. PFA v6 which addresses your feedback.

I had to leave the SC_ prefix in the enum as otherwise I get the following 
error during protobuf compilation:
bq. yarn_server_federation_protos.proto:33:3: "hadoop.yarn.NEW" is already 
defined in file "yarn_protos.proto".

I recall encountering this during YARN-1051; {{NodeStateProto}} likewise uses an 
NS_ prefix.
 

> Federation Membership State Store internal APIs
> ---
>
> Key: YARN-3662
> URL: https://issues.apache.org/jira/browse/YARN-3662
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-3662-YARN-2915-v1.1.patch, 
> YARN-3662-YARN-2915-v1.patch, YARN-3662-YARN-2915-v2.patch, 
> YARN-3662-YARN-2915-v3.01.patch, YARN-3662-YARN-2915-v3.patch, 
> YARN-3662-YARN-2915-v4.patch, YARN-3662-YARN-2915-v5.patch, 
> YARN-3662-YARN-2915-v6.patch
>
>
> The Federation Application State encapsulates the information about the 
> active RM of each sub-cluster that is participating in Federation. The 
> information includes the addresses for the ClientRM, ApplicationMaster and 
> Admin services, along with the sub-cluster _capability_, which is currently 
> defined by *ClusterMetricsInfo*. Please refer to the design doc in the parent 
> JIRA for further details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5113) Refactoring and other clean-up for distributed scheduling

2016-07-28 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-5113:
-
Attachment: YARN-5113.008.patch

Fixed one remaining checkstyle issue.
The previously failing test cases are unrelated and have already been fixed.
Only one is still failing (at least locally), namely 
{{TestDirectoryCollection::testCreateDirectories}}, which is also failing in 
trunk.

> Refactoring and other clean-up for distributed scheduling
> -
>
> Key: YARN-5113
> URL: https://issues.apache.org/jira/browse/YARN-5113
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Arun Suresh
>Assignee: Konstantinos Karanasos
> Attachments: YARN-5113.001.patch, YARN-5113.002.patch, 
> YARN-5113.003.patch, YARN-5113.004.patch, YARN-5113.005.patch, 
> YARN-5113.006.patch, YARN-5113.007.patch, YARN-5113.008.patch
>
>
> This JIRA focuses on the refactoring of classes related to Distributed 
> Scheduling



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Scheduler UI

2016-07-28 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398080#comment-15398080
 ] 

Naganarasimha G R commented on YARN-4624:
-

Hi [~sunilg] & [~wangda],
   I feel the NPE issue needs to be fixed before 2.8, and the other 
improvements (/fixes) can be tackled in another jira. 
I am fine with either approach, the v1 patch or the v3 patch. Maybe [~wangda] 
can share his opinion so that we can finalize the approach and commit it.


> NPE in PartitionQueueCapacitiesInfo while accessing Scheduler UI
> ---
>
> Key: YARN-4624
> URL: https://issues.apache.org/jira/browse/YARN-4624
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: SchedulerUIWithOutLabelMapping.png, YARN-2674-002.patch, 
> YARN-4624-003.patch, YARN-4624.patch
>
>
> Scenario:
> ===
> Configure node labels and add them to the cluster
> Start the cluster
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:293)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:447)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5121:
---
Attachment: YARN-5121.04.patch

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5121:
---
Attachment: (was: YARN-5121.04.patch)

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5121:
---
Attachment: (was: YARN-5121.01.patch)

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5121:
---
Attachment: YARN-5121.01.patch

-01:
* rebase
* avoid argv[0] and use OS specific facilities where available/known
* clean up some leftover debug bits

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15398030#comment-15398030
 ] 

Allen Wittenauer edited comment on YARN-5121 at 7/28/16 7:03 PM:
-

-04:
* rebase
* avoid argv[0] and use OS specific facilities where available/known
* clean up some leftover debug bits


was (Author: aw):
-01:
* rebase
* avoid argv[0] and use OS specific facilities where available/known
* clean up some leftover debug bits

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5121) fix some container-executor portability issues

2016-07-28 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated YARN-5121:
---
Attachment: YARN-5121.04.patch

> fix some container-executor portability issues
> --
>
> Key: YARN-5121
> URL: https://issues.apache.org/jira/browse/YARN-5121
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: YARN-5121.00.patch, YARN-5121.01.patch, 
> YARN-5121.02.patch, YARN-5121.03.patch, YARN-5121.04.patch
>
>
> container-executor has some issues that are preventing it from even compiling 
> on the OS X jenkins instance.  Let's fix those.  While we're there, let's 
> also try to take care of some of the other portability problems that have 
> crept in over the years, since it used to work great on Solaris but now 
> doesn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5435) User AM resource limit does not get updated for ReservationQueue after execution of plan-follower.

2016-07-28 Thread Sean Po (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Po updated YARN-5435:
--
Attachment: YARN-5435.v1.patch

>  User AM resource limit does not get updated for ReservationQueue after 
> execution of plan-follower.
> ---
>
> Key: YARN-5435
> URL: https://issues.apache.org/jira/browse/YARN-5435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sean Po
>Assignee: Sean Po
> Attachments: YARN-5435.v1.patch
>
>
> After a reservation queue is allocated and the plan is synchronized by the 
> plan follower, we expect the user AM resource limit to reflect this change if 
> the appropriate configuration is set.
> Instead, the user AM resource limit always stays the same. As a result, 
> multiple AMs cannot run in parallel for small reservations.
> To reproduce this issue, create a reservation and submit multiple 
> applications against the new reservation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5444) TestLinuxContainerExecutorWithMocks failed due to unstable assumption.

2016-07-28 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397973#comment-15397973
 ] 

Yufei Gu commented on YARN-5444:


The failed test is unrelated. 

> TestLinuxContainerExecutorWithMocks failed due to unstable assumption.
> --
>
> Key: YARN-5444
> URL: https://issues.apache.org/jira/browse/YARN-5444
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5444.001.patch
>
>
> Test cases {{testLaunchCommandWithoutPriority}} and {{testStartLocalizer}} are 
> based on the assumption that Yarn configuration files won't be loaded, which 
> is not true in some situations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5444) TestLinuxContainerExecutorWithMocks failed due to unstable assumption.

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397963#comment-15397963
 ] 

Hadoop QA commented on YARN-5444:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 0 new + 30 unchanged - 2 fixed = 30 total (was 32) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 1s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 41s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820748/YARN-5444.001.patch |
| JIRA Issue | YARN-5444 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9405175695e4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 26de4f0 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12541/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12541/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12541/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12541/console |
| Powered by | 

[jira] [Commented] (YARN-5137) Make DiskChecker pluggable in NodeManager

2016-07-28 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397906#comment-15397906
 ] 

Yufei Gu commented on YARN-5137:


Filed YARN-5444 for the fix in LinuxContainerExecutor and 
TestLinuxContainerExecutorWithMocks; this jira will depend on it. 

> Make DiskChecker pluggable in NodeManager
> -
>
> Key: YARN-5137
> URL: https://issues.apache.org/jira/browse/YARN-5137
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Ray Chiang
>Assignee: Yufei Gu
>  Labels: supportability
> Attachments: YARN-5137.001.patch, YARN-5137.002.patch, 
> YARN-5137.003.patch, YARN-5137.004.patch, YARN-5137.005.patch
>
>
> It would be nice to have the option for a DiskChecker that has more 
> sophisticated checking capabilities.  In order to do this, we would first 
> need DiskChecker to be pluggable.
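
As a rough illustration of what "pluggable" could look like, a minimal sketch; 
the interface, class names, and the config key are assumptions, not part of any 
patch:

{code:java}
// Sketch: a pluggable disk checker chosen via a (hypothetical) NM config key.
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.ReflectionUtils;

interface PluggableDiskChecker {
  void checkDir(File dir) throws IOException;   // throw if the dir is unhealthy
}

class DefaultDiskChecker implements PluggableDiskChecker {
  @Override
  public void checkDir(File dir) throws IOException {
    if (!dir.canRead() || !dir.canWrite()) {
      throw new IOException("Directory is not readable/writable: " + dir);
    }
  }
}

class DiskCheckerLoader {
  // "yarn.nodemanager.disk-checker.class" is an illustrative key, not a real one.
  static PluggableDiskChecker load(Configuration conf) {
    Class<? extends PluggableDiskChecker> clazz = conf.getClass(
        "yarn.nodemanager.disk-checker.class",
        DefaultDiskChecker.class, PluggableDiskChecker.class);
    return ReflectionUtils.newInstance(clazz, conf);
  }
}
{code}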



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5444) TestLinuxContainerExecutorWithMocks failed due to unstable assumption.

2016-07-28 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5444:
---
Attachment: YARN-5444.001.patch

> TestLinuxContainerExecutorWithMocks failed due to unstable assumption.
> --
>
> Key: YARN-5444
> URL: https://issues.apache.org/jira/browse/YARN-5444
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: YARN-5444.001.patch
>
>
> Test cases {{testLaunchCommandWithoutPriority}} and {{testStartLocalizer}} are 
> based on the assumption that Yarn configuration files won't be loaded, which 
> is not true in some situations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5375) invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397896#comment-15397896
 ] 

Hadoop QA commented on YARN-5375:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 17s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 38s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn: The patch generated 1 
new + 270 unchanged - 3 fixed = 271 total (was 273) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 32s 
{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 36m 29s 
{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 14s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820730/YARN-5375.05.patch |
| JIRA Issue | YARN-5375 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9e46bec91aed 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7f3c306 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12539/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12539/testReport/ |
| modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: hadoop-yarn-project/hadoop-yarn |
| Console output | 

[jira] [Updated] (YARN-5444) TestLinuxContainerExecutorWithMocks failed due to unstable assumption.

2016-07-28 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5444:
---
Description: Test case {{testLaunchCommandWithoutPriority}} and 
{{testStartLocalizer}} are based on the assumption that Yarn configuration 
files won't be loaded, which is not true in some situations.   (was: Test case 
{{testLaunchCommandWithoutPriority}} and {{testStartLocalizer}} is based on the 
assumption that Yarn configuration files won't be loaded, which is not true in 
some situations. )

> TestLinuxContainerExecutorWithMocks failed due to unstable assumption.
> --
>
> Key: YARN-5444
> URL: https://issues.apache.org/jira/browse/YARN-5444
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> Test cases {{testLaunchCommandWithoutPriority}} and {{testStartLocalizer}} are 
> based on the assumption that Yarn configuration files won't be loaded, which 
> is not true in some situations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5444) TestLinuxContainerExecutorWithMocks failed due to unstable assumption.

2016-07-28 Thread Yufei Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yufei Gu updated YARN-5444:
---
Description: Test case {{testLaunchCommandWithoutPriority}} and 
{{testStartLocalizer}} is based on the assumption that Yarn configuration files 
won't be loaded, which is not true in some situations. 

> TestLinuxContainerExecutorWithMocks failed due to unstable assumption.
> --
>
> Key: YARN-5444
> URL: https://issues.apache.org/jira/browse/YARN-5444
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>
> Test case {{testLaunchCommandWithoutPriority}} and {{testStartLocalizer}} is 
> based on the assumption that Yarn configuration files won't be loaded, which 
> is not true in some situations. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5440) Use AHSClient in YarnClient when TimelineServer is running

2016-07-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397877#comment-15397877
 ] 

Hudson commented on YARN-5440:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10169 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10169/])
YARN-5440. Use AHSClient in YarnClient when TimelineServer is running. 
(gtcarrera9: rev 26de4f0de789f58736d1dc383125cffb54debdd0)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/api/impl/YarnClientImpl.java


> Use AHSClient in YarnClient when TimelineServer is running
> --
>
> Key: YARN-5440
> URL: https://issues.apache.org/jira/browse/YARN-5440
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5440.1.patch
>
>
> In YarnClient, whether the AHSClient is used depends on whether 
> yarn.timeline-service.generic-application-history.enabled is set. But the 
> AHSClientService is enabled by default when we start the TimelineServer, 
> which means we are able to get history information for 
> applications/applicationAttempts/containers via ahsClient whenever the 
> TimelineServer is running. So we do not have to rely on this deprecated 
> configuration to get history information through YarnClient.
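
For illustration, a minimal sketch of a client-side history lookup once the AHS 
fallback no longer depends on the deprecated flag (an assumption for discussion, 
not the patch itself):

{code:java}
// Sketch: ask YarnClient for a report of a (possibly finished) application;
// with the TimelineServer running, the AHS fallback supplies the history.
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class HistoryLookupSketch {
  public static ApplicationReport lookup(ApplicationId appId) throws Exception {
    YarnClient client = YarnClient.createYarnClient();
    client.init(new YarnConfiguration());
    client.start();
    try {
      // If the RM no longer knows the application, the client falls back to
      // the AHS/Timeline server for the report.
      return client.getApplicationReport(appId);
    } finally {
      client.stop();
    }
  }
}
{code}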



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5443) Add support for docker inspect

2016-07-28 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397874#comment-15397874
 ] 

Shane Kumpf commented on YARN-5443:
---

The checkstyle failures are due to the package names exceeding 80 characters 
and the unit test failure is unrelated to this patch. I believe this is ready 
for review.

> Add support for docker inspect
> --
>
> Key: YARN-5443
> URL: https://issues.apache.org/jira/browse/YARN-5443
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5443.001.patch
>
>
> Similar to the DockerStopCommand and DockerRunCommand, it would be desirable 
> to have a DockerInspectCommand. The initial use is for retrieving a 
> container's status, but many other uses are possible (IP information, volume 
> information, etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5444) TestLinuxContainerExecutorWithMocks failed due to unstable assumption.

2016-07-28 Thread Yufei Gu (JIRA)
Yufei Gu created YARN-5444:
--

 Summary: TestLinuxContainerExecutorWithMocks failed due to 
unstable assumption.
 Key: YARN-5444
 URL: https://issues.apache.org/jira/browse/YARN-5444
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Yufei Gu
Assignee: Yufei Gu






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5443) Add support for docker inspect

2016-07-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397872#comment-15397872
 ] 

Hadoop QA commented on YARN-5443:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 13s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 13m 7s {color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 6s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12820732/YARN-5443.001.patch |
| JIRA Issue | YARN-5443 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 150417fa5dc9 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 7f3c306 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12540/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12540/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-YARN-Build/12540/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12540/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 

[jira] [Commented] (YARN-5436) Race in AsyncDispatcher can cause random test failures in Tez(probably YARN also )

2016-07-28 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397858#comment-15397858
 ] 

Rohith Sharma K S commented on YARN-5436:
-

Thanks for the clarification; in particular, *the race can still happen without 
invoking dispatcher.serviceStop()* is the main reason for the test failures. This 
would solve the YARN test failures as well. 

+1 LGTM

> Race in AsyncDispatcher can cause random test failures in Tez(probably YARN 
> also )
> --
>
> Key: YARN-5436
> URL: https://issues.apache.org/jira/browse/YARN-5436
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Zhiyuan Yang
>Assignee: Zhiyuan Yang
> Attachments: YARN-5436.1.patch, YARN-5436.2.patch, YARN-5436.3.patch, 
> YARN-5436.4.patch
>
>
> In YARN-2264, a race in DrainDispatcher was fixed. Unfortunately, it also 
> exists in AsyncDispatcher (this was found and ignored in YARN-3878 but never 
> documented...). In YARN-2991, another DrainDispatcher bug was fixed by 
> letting DrainDispatcher reuse some AsyncDispatcher methods, because 
> AsyncDispatcher doesn't have such an issue. However, this shadows YARN-2264, 
> and now a similar race reappears in Tez unit tests (and probably in YARN unit 
> tests as well).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5440) Use AHSClient in YarnClient when TimelineServer is running

2016-07-28 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397849#comment-15397849
 ] 

Li Lu commented on YARN-5440:
-

Will commit shortly. 

> Use AHSClient in YarnClient when TimelineServer is running
> --
>
> Key: YARN-5440
> URL: https://issues.apache.org/jira/browse/YARN-5440
> Project: Hadoop YARN
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5440.1.patch
>
>
> In YarnClient, whether the AHSClient is used depends on whether 
> yarn.timeline-service.generic-application-history.enabled is set. But the 
> AHSClientService is enabled by default when we start the TimelineServer, 
> which means we are able to get history information for 
> applications/applicationAttempts/containers via ahsClient whenever the 
> TimelineServer is running. So we do not have to rely on this deprecated 
> configuration to get history information through YarnClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5443) Add support for docker inspect

2016-07-28 Thread Shane Kumpf (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shane Kumpf updated YARN-5443:
--
Attachment: YARN-5443.001.patch

> Add support for docker inspect
> --
>
> Key: YARN-5443
> URL: https://issues.apache.org/jira/browse/YARN-5443
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5443.001.patch
>
>
> Similar to the DockerStopCommand and DockerRunCommand, it would be desirable 
> to have a DockerInspectCommand. The initial use is for retrieving a 
> container's status, but many other uses are possible (IP information, volume 
> information, etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5443) Add support for docker inspect

2016-07-28 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397816#comment-15397816
 ] 

Shane Kumpf commented on YARN-5443:
---

Attaching a patch that adds the docker inspect command. Note that docker 
inspect needs the filter before the container name, so the constructor differs 
slightly from docker run and docker stop.
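
For context, a minimal sketch of the argument ordering being described (the 
format string and container name are illustrative, not taken from the patch):

{code:java}
// Sketch: for "docker inspect", the --format filter precedes the container name.
import java.util.Arrays;
import java.util.List;

public class DockerInspectSketch {
  public static List<String> statusCommand(String containerName) {
    // e.g. docker inspect --format={{.State.Status}} <container>
    return Arrays.asList("docker", "inspect",
        "--format={{.State.Status}}", containerName);
  }
}
{code}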

> Add support for docker inspect
> --
>
> Key: YARN-5443
> URL: https://issues.apache.org/jira/browse/YARN-5443
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>
> Similar to the DockerStopCommand and DockerRunCommand, it would be desirable 
> to have a DockerInspectCommand. The initial use is for retrieving a 
> container's status, but many other uses are possible (IP information, volume 
> information, etc).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5375) invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures

2016-07-28 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15397800#comment-15397800
 ] 

sandflee commented on YARN-5375:


Updated the patch to fix the test failures.
bq. 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
Scheduler event-handling exceptions caused the test to fail; disable 
Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY in DrainDispatcher.
bq. org.apache.hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesNodes
MockRM is not started, so waitForState would block; just start the RM.
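
A minimal sketch of the first fix (an illustration, not the patch itself):

{code:java}
// Sketch: keep a scheduler event-handling exception from terminating the test
// JVM while DrainDispatcher is draining events.
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.event.Dispatcher;

public class DrainFriendlyConfSketch {
  static YarnConfiguration drainFriendlyConf() {
    YarnConfiguration conf = new YarnConfiguration();
    // Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY == "yarn.dispatcher.exit-on-error"
    conf.setBoolean(Dispatcher.DISPATCHER_EXIT_ON_ERROR_KEY, false);
    return conf;
  }
}
{code}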


> invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures
> --
>
> Key: YARN-5375
> URL: https://issues.apache.org/jira/browse/YARN-5375
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: sandflee
>Assignee: sandflee
> Attachments: YARN-5375.01.patch, YARN-5375.03.patch, 
> YARN-5375.04.patch, YARN-5375.05.patch
>
>
> Seen many test failures where RMApp/RMAppAttempt reaches some state but some 
> events are not yet processed in the RM event queue or scheduler event queue, 
> causing test failures. It seems we could implicitly invoke drainEvents (which 
> should also drain scheduler events) in some MockRM methods like waitForState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5375) invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures

2016-07-28 Thread sandflee (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

sandflee updated YARN-5375:
---
Attachment: YARN-5375.05.patch

> invoke MockRM#drainEvents implicitly in MockRM methods to reduce test failures
> --
>
> Key: YARN-5375
> URL: https://issues.apache.org/jira/browse/YARN-5375
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: sandflee
>Assignee: sandflee
> Attachments: YARN-5375.01.patch, YARN-5375.03.patch, 
> YARN-5375.04.patch, YARN-5375.05.patch
>
>
> Seen many test failures where RMApp/RMAppAttempt reaches some state but some 
> events are not yet processed in the RM event queue or scheduler event queue, 
> causing test failures. It seems we could implicitly invoke drainEvents (which 
> should also drain scheduler events) in some MockRM methods like waitForState.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


