[jira] [Commented] (YARN-5697) Use CliParser to parse options in RMAdminCLI

2016-11-02 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15627986#comment-15627986
 ] 

Naganarasimha G R commented on YARN-5697:
-

Hi [~Tao Jie],
can you please take a look at the test case failures?

> Use CliParser to parse options in RMAdminCLI
> 
>
> Key: YARN-5697
> URL: https://issues.apache.org/jira/browse/YARN-5697
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
> Attachments: YARN-5697.001.patch, YARN-5697.002.patch, 
> YARN-5697.003.patch, YARN-5697.004.patch, YARN-5697.005-branch-2.8.patch, 
> YARN-5697.005.patch
>
>
> As discussed in YARN-4855, it is better to use CliParser rather than args to 
> parse command line options in RMAdminCli.
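For context, a minimal sketch of the CliParser style in question, using Apache Commons CLI as many Hadoop CLIs do; the options below are illustrative, not RMAdminCLI's real set:

{code}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.GnuParser;
import org.apache.commons.cli.Options;

public class RMAdminCliParseSketch {
  public static void main(String[] args) throws Exception {
    Options opts = new Options();
    // Illustrative options only; RMAdminCLI defines its own set.
    opts.addOption("refreshQueues", false, "reload queue configuration");
    opts.addOption("addToClusterNodeLabels", true, "add node labels");

    // Parser-based handling instead of walking args[] by hand.
    CommandLine cli = new GnuParser().parse(opts, args);
    if (cli.hasOption("refreshQueues")) {
      System.out.println("refreshQueues requested");
    }
    if (cli.hasOption("addToClusterNodeLabels")) {
      System.out.println("labels: "
          + cli.getOptionValue("addToClusterNodeLabels"));
    }
  }
}
{code}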






[jira] [Commented] (YARN-2009) CapacityScheduler: Add intra-queue preemption for app priority support

2016-11-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628030#comment-15628030
 ] 

Sunil G commented on YARN-2009:
---

Thank you [~jianhe].
We also have similar code in the existing inter-queue preemption path, in 
{{FifoCandidatesSelector.selectCandidates}}. I think we can change both to a 
read-only lock, and we can handle that in another ticket since it affects the 
existing preemption code as well. Thoughts?
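A minimal sketch of the read-lock idea, assuming the selector can share a 
{{ReentrantReadWriteLock}} with the scheduler (names here are illustrative):

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class CandidateSelectorSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  // Candidate selection only reads scheduler state, so a read lock is
  // enough; concurrent readers proceed while writers still exclude.
  void selectCandidates() {
    lock.readLock().lock();
    try {
      // scan queues/apps and mark preemption candidates ...
    } finally {
      lock.readLock().unlock();
    }
  }
}
{code}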



> CapacityScheduler: Add intra-queue preemption for app priority support
> --
>
> Key: YARN-2009
> URL: https://issues.apache.org/jira/browse/YARN-2009
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Devaraj K
>Assignee: Sunil G
>  Labels: oct16-medium
> Fix For: 2.9.0
>
> Attachments: YARN-2009.0001.patch, YARN-2009.0002.patch, 
> YARN-2009.0003.patch, YARN-2009.0004.patch, YARN-2009.0005.patch, 
> YARN-2009.0006.patch, YARN-2009.0007.patch, YARN-2009.0008.patch, 
> YARN-2009.0009.patch, YARN-2009.0010.patch, YARN-2009.0011.patch, 
> YARN-2009.0012.patch, YARN-2009.0013.patch, YARN-2009.0014.patch, 
> YARN-2009.0015.patch, YARN-2009.0016.patch
>
>
> While preempting containers based on the queue ideal assignment, we may need 
> to consider preempting the low priority application containers first.






[jira] [Updated] (YARN-4862) Handle duplicate completed containers in RMNodeImpl

2016-11-02 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-4862:

Attachment: YARN-4862-006.patch

bq. I'm also curious why a sleep was added instead of something like a 
drainEvents call.
I used drainEvents in the attached patch.

I retained sending 2 container statuses in the node heartbeat for verification. 
There are 2 scenarios that can occur in reality when the NM reports container 
status to the RM:
# The application a container status belongs to is tracked by the RM. Here, 
RMNodeImpl triggers an event to the scheduler with the completed containers. 
# The application a container status belongs to is *NOT* tracked by the RM. 
Here, RMNodeImpl triggers an event to the scheduler with only one container as 
completed; all remaining containers belonging to this application are skipped. 

In earlier patches, the test case was sending container status for the 2nd 
scenario. In the latest patch, I have modified the test code for the 1st 
scenario. 

I still think we can optimize this so that if an application is not tracked by 
the RM, RMNodeImpl need not report even one completed container to the 
scheduler at all, as sketched below. I am open to handling it in this JIRA or 
creating a new JIRA to optimize this scenario. 
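A minimal sketch of that guard plus the duplicate safeguard this JIRA adds 
(simplified stand-ins, not the real RMNodeImpl types):

{code}
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class CompletedContainerFilter {
  // Container ids already forwarded to the scheduler.
  private final Set<String> reported = new HashSet<>();

  List<String> filter(List<String> containerIds,
      Map<String, String> containerToApp, Set<String> trackedApps) {
    List<String> toScheduler = new ArrayList<>();
    for (String containerId : containerIds) {
      // Untracked app: the scheduler has nothing to clean up, so skip.
      if (!trackedApps.contains(containerToApp.get(containerId))) {
        continue;
      }
      // Set.add returns false for duplicates, so each container is
      // forwarded at most once (avoids the YARN-4852 heap growth).
      if (reported.add(containerId)) {
        toScheduler.add(containerId);
      }
    }
    return toScheduler;
  }
}
{code}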

> Handle duplicate completed containers in RMNodeImpl
> ---
>
> Key: YARN-4862
> URL: https://issues.apache.org/jira/browse/YARN-4862
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-4862.patch, 0002-YARN-4862.patch, 
> 0003-YARN-4862.patch, YARN-4862-004.patch, YARN-4862-005.patch, 
> YARN-4862-006.patch
>
>
> As per 
> [comment|https://issues.apache.org/jira/browse/YARN-4852?focusedCommentId=15209689&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15209689]
>  from [~sharadag], there should be a safeguard against duplicated container 
> status in RMNodeImpl before creating UpdatedContainerInfo. 
> Otherwise, in a heavily loaded cluster where event processing is gradually 
> slowing, if any duplicated containers are sent to the RM (possibly a bug in 
> the NM as well), RMNodeImpl will always create UpdatedContainerInfo for the 
> duplicated containers. This increases heap memory usage and causes problems 
> like YARN-4852.
> This is an optimization for issues of the YARN-4852 kind.






[jira] [Commented] (YARN-5802) Application priority updates add pending apps to running ordering policy

2016-11-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628073#comment-15628073
 ] 

Sunil G commented on YARN-5802:
---

Thanks [~bibinchundatt]. One last pass over the patch; a few minor nits:
TestClientRMService.java:
1. testUpdatePriorityAndKillWithEmptyResource -> 
testUpdatePriorityAndKillAppWithZeroNodes (or ZeroClusterResource)

TestApplicationPriority.java:
2. testUpdatePriorityAndRemoveAttempt -> 
testUpdatePriorityOnPendingAppAndKillAttempt
3. The code {{CSQueue defaultQueue = findQueue(rootQueue, "root.default");}} 
could be replaced with {{LeafQueue q = (LeafQueue) cs.getQueue("default");}} 
given that the scheduler is CS (see the sketch after this list). If so, we 
could remove {{findQueue}}.
4. In {{killAppAndVerifyOrderingPolicy}}, *appsPending* and *activeApps* are 
calculated before calling {{updateApplicationPriority}}. Please retrieve these 
values after the update and kill events. 
5. In {{killAppAndVerifyOrderingPolicy}}, *activeApps* directly gets the 
*schedulableEntities* object. Please use *getApplications* from LeafQueue, as 
is done for pending.
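The sketch for nit 3 (assuming a MockRM-based test with the CapacityScheduler 
configured):

{code}
import org.apache.hadoop.yarn.server.resourcemanager.MockRM;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler;
import org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue;

class QueueLookupSketch {
  // Instead of walking the queue tree with a local findQueue helper,
  // ask the CapacityScheduler for the queue by its short name.
  static LeafQueue defaultQueue(MockRM rm) {
    CapacityScheduler cs = (CapacityScheduler) rm.getResourceScheduler();
    return (LeafQueue) cs.getQueue("default");
  }
}
{code}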

> Application priority updates add pending apps to running ordering policy
> 
>
> Key: YARN-5802
> URL: https://issues.apache.org/jira/browse/YARN-5802
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5802.0001.patch, YARN-5802.0002.patch, 
> YARN-5802.0003.patch, YARN-5802.0004.patch, YARN-5802.0005.patch
>
>
> {{LeafQueue#updateApplicationPriority}}
> {code}
>  getOrderingPolicy().removeSchedulableEntity(attempt);
>   // Update new priority in SchedulerApplication
>   attempt.setPriority(newAppPriority);
>   getOrderingPolicy().addSchedulableEntity(attempt);
> {code}
> We should re-add the attempt to the ordering policy only when it was present 
> in the first call. Otherwise, removal of the application attempt will try to 
> iterate over a killed application that is still present in the pending 
> ordering policy, which can cause the RM to crash.
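A minimal sketch of the guarded re-add described above (assuming 
{{removeSchedulableEntity}} reports whether the entity was present):

{code}
// Sketch only, not the exact patch: re-add the attempt to the running
// ordering policy only if it was actually there before the update.
boolean wasRunning = getOrderingPolicy().removeSchedulableEntity(attempt);
// Update new priority in SchedulerApplication
attempt.setPriority(newAppPriority);
if (wasRunning) {
  getOrderingPolicy().addSchedulableEntity(attempt);
}
{code}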






[jira] [Commented] (YARN-3491) PublicLocalizer#addResource is too slow.

2016-11-02 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628106#comment-15628106
 ] 

Brahma Reddy Battula commented on YARN-3491:


I feel this should go into branch-2.7 as well; thoughts?

> PublicLocalizer#addResource is too slow.
> 
>
> Key: YARN-3491
> URL: https://issues.apache.org/jira/browse/YARN-3491
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 2.7.0
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Critical
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: YARN-3491.000.patch, YARN-3491.001.patch, 
> YARN-3491.002.patch, YARN-3491.003.patch, YARN-3491.004.patch
>
>
> Based on the profiling, the bottleneck in PublicLocalizer#addResource is 
> getInitializedLocalDirs. getInitializedLocalDirs calls checkLocalDir, and
> checkLocalDir is very slow, taking about 10+ ms.
> The total delay will be approximately number of local dirs * 10+ ms.
> This delay is added for each public resource localization.
> Because PublicLocalizer#addResource is slow, the thread pool can't be fully 
> utilized. Instead of doing public resource localization in 
> parallel (multithreading), public resource localization is serialized most of 
> the time.
> Also, PublicLocalizer#addResource runs in the Dispatcher thread, 
> so the Dispatcher thread will be blocked by PublicLocalizer#addResource for 
> a long time.
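A minimal sketch of the caching idea behind such a fix (illustrative, not the 
actual patch): remember which local dirs already passed the expensive check so 
it runs once per dir rather than once per resource.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class LocalDirInitCache {
  private final Map<String, Boolean> checked = new ConcurrentHashMap<>();

  // computeIfAbsent caches the ~10 ms checkLocalDir result, so
  // addResource no longer pays numDirs * 10+ ms per public resource.
  boolean isInitialized(String dir) {
    return checked.computeIfAbsent(dir, LocalDirInitCache::checkLocalDir);
  }

  private static boolean checkLocalDir(String dir) {
    // stand-in for the real permission/ownership verification
    return new java.io.File(dir).canWrite();
  }
}
{code}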






[jira] [Commented] (YARN-5697) Use CliParser to parse options in RMAdminCLI

2016-11-02 Thread Tao Jie (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628123#comment-15628123
 ] 

Tao Jie commented on YARN-5697:
---

Hi [~Naganarasimha],
I checked the test log, and it seems that the test case failures are due to 
the test environment:
{quote}
testNonExistentUser(org.apache.hadoop.yarn.client.TestGetGroups)  Time elapsed: 
0.004 sec  <<< ERROR!
java.net.UnknownHostException: Invalid host name: local host is: (unknown); 
destination host is: "7ed7e992eec3":8033; java.net.UnknownHostException; For 
more details see:  http://wiki.apache.org/hadoop/UnknownHost
{quote}
I also ran the failed test cases in my local environment, and all of them pass.

> Use CliParser to parse options in RMAdminCLI
> 
>
> Key: YARN-5697
> URL: https://issues.apache.org/jira/browse/YARN-5697
> Project: Hadoop YARN
>  Issue Type: Improvement
>Affects Versions: 2.8.0
>Reporter: Tao Jie
>Assignee: Tao Jie
> Attachments: YARN-5697.001.patch, YARN-5697.002.patch, 
> YARN-5697.003.patch, YARN-5697.004.patch, YARN-5697.005-branch-2.8.patch, 
> YARN-5697.005.patch
>
>
> As discussed in YARN-4855, it is better to use CliParser rather than args to 
> parse command line options in RMAdminCli.






[jira] [Commented] (YARN-5552) Add Builder methods for common yarn API records

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628174#comment-15628174
 ] 

Hadoop QA commented on YARN-5552:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-project/hadoop-yarn: The patch generated 
0 new + 110 unchanged - 8 fixed = 110 total (was 118) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
54s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
17s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api 
generated 56 new + 123 unchanged - 0 fixed = 179 total (was 123) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
16s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 
24s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m  
3s{color} | {color:green} hadoop-yarn-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5552 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836513/YARN-5552.008.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 16a865124b74 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cb5c

[jira] [Commented] (YARN-4862) Handle duplicate completed containers in RMNodeImpl

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628183#comment-15628183
 ] 

Hadoop QA commented on YARN-4862:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 200 unchanged - 1 fixed = 201 total (was 201) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 53s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 19s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.resourcemanager.TestRMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-4862 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836518/YARN-4862-006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux df2fae1822c5 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cb5cc0d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13749/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13749/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13749/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Bui

[jira] [Updated] (YARN-5802) Application priority updates add pending apps to running ordering policy

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5802:
---
Attachment: YARN-5802.0006.patch

> Application priority updates add pending apps to running ordering policy
> 
>
> Key: YARN-5802
> URL: https://issues.apache.org/jira/browse/YARN-5802
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5802.0001.patch, YARN-5802.0002.patch, 
> YARN-5802.0003.patch, YARN-5802.0004.patch, YARN-5802.0005.patch, 
> YARN-5802.0006.patch
>
>
> {{LeafQueue#updateApplicationPriority}}
> {code}
>  getOrderingPolicy().removeSchedulableEntity(attempt);
>   // Update new priority in SchedulerApplication
>   attempt.setPriority(newAppPriority);
>   getOrderingPolicy().addSchedulableEntity(attempt);
> {code}
> We should re-add the attempt to the ordering policy only when it was present 
> in the first call. Otherwise, removal of the application attempt will try to 
> iterate over a killed application that is still present in the pending 
> ordering policy, which can cause the RM to crash.






[jira] [Commented] (YARN-5802) Application priority updates add pending apps to running ordering policy

2016-11-02 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628578#comment-15628578
 ] 

Bibin A Chundatt commented on YARN-5802:


Thank you [~sunilg] for the review comments.
Updated the patch handling all review comments.

> Application priority updates add pending apps to running ordering policy
> 
>
> Key: YARN-5802
> URL: https://issues.apache.org/jira/browse/YARN-5802
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5802.0001.patch, YARN-5802.0002.patch, 
> YARN-5802.0003.patch, YARN-5802.0004.patch, YARN-5802.0005.patch, 
> YARN-5802.0006.patch
>
>
> {{LeafQueue#updateApplicationPriority}}
> {code}
>  getOrderingPolicy().removeSchedulableEntity(attempt);
>   // Update new priority in SchedulerApplication
>   attempt.setPriority(newAppPriority);
>   getOrderingPolicy().addSchedulableEntity(attempt);
> {code}
> We should re-add the attempt to the ordering policy only when it was present 
> in the first call. Otherwise, removal of the application attempt will try to 
> iterate over a killed application that is still present in the pending 
> ordering policy, which can cause the RM to crash.






[jira] [Commented] (YARN-5276) print more info when event queue is blocked

2016-11-02 Thread sandflee (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628654#comment-15628654
 ] 

sandflee commented on YARN-5276:


Thanks [~miklos.szeg...@cloudera.com] for your detailed reply; it seems there 
is not much need to add a UT :(

> print more info when event queue is blocked
> ---
>
> Key: YARN-5276
> URL: https://issues.apache.org/jira/browse/YARN-5276
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager, resourcemanager
>Reporter: sandflee
>Assignee: sandflee
>  Labels: oct16-easy
> Attachments: YARN-5276.01.patch, YARN-5276.02.patch, 
> YARN-5276.03.patch, YARN-5276.04.patch
>
>
> We now see logs like "Size of event-queue is 498000, Size of event-queue is 
> 499000", and it is difficult to know which events flooded the queue. 
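A minimal sketch of the kind of diagnostic this is after (illustrative, not the 
patch): keep per-event-type counts so the log can name the flooding event.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

class EventQueueStats {
  private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

  void onEnqueue(Object event) {
    counts.computeIfAbsent(event.getClass().getSimpleName(),
        k -> new LongAdder()).increment();
  }

  void onDequeue(Object event) {
    counts.computeIfAbsent(event.getClass().getSimpleName(),
        k -> new LongAdder()).decrement();
  }

  // Logged next to "Size of event-queue is ..." to show the flooder.
  String summary() {
    return counts.toString();
  }
}
{code}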






[jira] [Commented] (YARN-5802) Application priority updates add pending apps to running ordering policy

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628725#comment-15628725
 ] 

Hadoop QA commented on YARN-5802:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 19s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 125 unchanged - 0 fixed = 126 total (was 125) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m  0s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5802 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836537/YARN-5802.0006.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 85f9c2b378b3 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cb5cc0d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13750/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13750/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13750/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.ap

[jira] [Updated] (YARN-5811) ConfigurationProvider must implement Closeable interface

2016-11-02 Thread Denis Bolshakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denis Bolshakov updated YARN-5811:
--
  Labels: newbie  (was: )
Priority: Minor  (was: Major)

> ConfigurationProvider must implement Closeable interface
> 
>
> Key: YARN-5811
> URL: https://issues.apache.org/jira/browse/YARN-5811
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Denis Bolshakov
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-5811.1.patch, YARN-5811.3.patch, YARN-5811.5.patch, 
> YARN-5811.6.patch
>
>
> ConfigurationProvider declares a close method; it would be nice if the class 
> implemented the Closeable interface, allowing use of `try with resources`
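A minimal sketch of what that enables, using an illustrative stand-in for the 
real org.apache.hadoop.yarn.conf.ConfigurationProvider (whose close signature 
may differ):

{code}
import java.io.Closeable;
import java.io.IOException;

abstract class ConfigurationProvider implements Closeable {
  @Override
  public abstract void close() throws IOException;

  static void demo(ConfigurationProvider provider) throws IOException {
    // Closeable makes try-with-resources possible: close() runs
    // automatically, even when an exception is thrown.
    try (ConfigurationProvider p = provider) {
      // use p to load configuration resources ...
    }
  }
}
{code}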






[jira] [Commented] (YARN-5802) Application priority updates add pending apps to running ordering policy

2016-11-02 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628786#comment-15628786
 ] 

Bibin A Chundatt commented on YARN-5802:


The test case failure looks random: the NodeManager event had not yet reached 
the scheduler after registration. It is not related to the attached patch.


> Application priority updates add pending apps to running ordering policy
> 
>
> Key: YARN-5802
> URL: https://issues.apache.org/jira/browse/YARN-5802
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5802.0001.patch, YARN-5802.0002.patch, 
> YARN-5802.0003.patch, YARN-5802.0004.patch, YARN-5802.0005.patch, 
> YARN-5802.0006.patch
>
>
> {{LeafQueue#updateApplicationPriority}}
> {code}
>  getOrderingPolicy().removeSchedulableEntity(attempt);
>   // Update new priority in SchedulerApplication
>   attempt.setPriority(newAppPriority);
>   getOrderingPolicy().addSchedulableEntity(attempt);
> {code}
> We should add again to ordering policy only when  attempt available in first 
> case.Else during application attempt removal will try to iterate on killed 
> application still available in pending Ordering policy.Which can cause RM to 
> crash.






[jira] [Commented] (YARN-2882) Add an OPPORTUNISTIC ExecutionType

2016-11-02 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628792#comment-15628792
 ] 

Steve Loughran commented on YARN-2882:
--

I think I've already expressed my unhappiness about breaking tagged-as-stable 
classes/APIs, and how "it wasn't meant to be subclassed for mock testing" 
isn't the kind of response I'd like to have seen.

+1 for the patch, and also +1 for making this the policy: "if we add new 
methods to a public class, we'll make them non-abstract but fail as 
unsupported, for the benefit of those subclasses (especially mock test ones) 
which may exist".
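A minimal sketch of that policy (illustrative):

{code}
// New methods on a stable public class get a throwing default body
// instead of being abstract, so pre-existing subclasses (including
// mocks) keep compiling; only callers of the new method fail, loudly.
public abstract class StableApi {
  public abstract void existingMethod();

  // Added in a later release: deliberately non-abstract.
  public void newMethod() {
    throw new UnsupportedOperationException(
        "newMethod is not implemented by " + getClass().getName());
  }
}
{code}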

> Add an OPPORTUNISTIC ExecutionType
> --
>
> Key: YARN-2882
> URL: https://issues.apache.org/jira/browse/YARN-2882
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager
>Reporter: Konstantinos Karanasos
>Assignee: Konstantinos Karanasos
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: YARN-2882-yarn-2877.001.patch, 
> YARN-2882-yarn-2877.002.patch, YARN-2882-yarn-2877.003.patch, 
> YARN-2882-yarn-2877.004.patch, YARN-2882.005.patch, yarn-2882.patch
>
>
> This JIRA introduces the notion of container types.
> We propose two initial types of containers: guaranteed-start and queueable 
> containers.
> Guaranteed-start containers are the existing containers, which are allocated 
> by the central RM and started immediately once allocated.
> Queueable is a new type of container, which allows containers to be queued in 
> the NM, so their execution may be arbitrarily delayed.






[jira] [Created] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Bibin A Chundatt (JIRA)
Bibin A Chundatt created YARN-5815:
--

 Summary: Random failure 
TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
 Key: YARN-5815
 URL: https://issues.apache.org/jira/browse/YARN-5815
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Bibin A Chundatt
Assignee: Bibin A Chundatt


{noformat}
java.lang.AssertionError: expected:<2> but was:<0>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
{noformat}






[jira] [Updated] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-02 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated YARN-5611:

Attachment: YARN-5611.0005.patch

Updated patch with the following changes from the previous patch:
# Removed the RMAppImpl state transition and made a direct call to RMAppImpl. 
# Made the updateTimeout API transactional. 
# Fixed test case failures. 
# Addressed the review comment, i.e. removed the application attribute class 
and stored the data in ApplicationStateData only.

Pending tasks:
# UpdateResponse is empty in the current patch. As we discussed, we need to 
send back the updated timeout value in the response. I will add this in the 
next patches.
# The checkstyle issue will be handled.

> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.v0.patch
>
>
> YARN-4205 monitors the lifetime of an application if required. 
> Add a client API to update the lifetime of an application. 






[jira] [Updated] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5815:
---
Attachment: YARN-5815.0001.patch

> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}






[jira] [Commented] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628944#comment-15628944
 ] 

Rohith Sharma K S commented on YARN-5815:
-

It looks like this test has been failing randomly since YARN-5773.

> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}






[jira] [Commented] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628987#comment-15628987
 ] 

Bibin A Chundatt commented on YARN-5815:


Thank you [~rohithsharma] for looking into the issue.
Earlier, even before NM registration, the test was expecting one app to be 
activated. When the cluster resource is zero, the number of active applications 
will be zero in the current implementation. After the NM is registered, the 
number of active apps will be 2 and pending will be 1. The test case missed 
that the NODE_ADDED event may not yet have been processed by the scheduler; a 
sketch of the fix follows below.
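A minimal sketch of the test-side fix implied by that, assuming a MockRM 
{{rm}} as in the test (the exact patch may differ):

{code}
// Make sure the scheduler has processed NODE_ADDED before asserting
// on active/pending application counts.
MockNM nm = rm.registerNode("127.0.0.1:1234", 8 * 1024);
rm.drainEvents();        // NODE_ADDED reaches the scheduler
nm.nodeHeartbeat(true);  // scheduling now sees non-zero cluster resource
// safe to assert here: 2 active apps, 1 pending
{code}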

> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}






[jira] [Commented] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628988#comment-15628988
 ] 

Sunil G commented on YARN-5815:
---

[~bibinchundatt] 
Could you please share the details of this? 

> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}






[jira] [Commented] (YARN-5802) Application priority updates add pending apps to running ordering policy

2016-11-02 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15628992#comment-15628992
 ] 

Sunil G commented on YARN-5802:
---

The test case failure is tracked via YARN-5815. The latest patch looks fine to 
me. +1
I will commit it tomorrow if there are no objections.

> Application priority updates add pending apps to running ordering policy
> 
>
> Key: YARN-5802
> URL: https://issues.apache.org/jira/browse/YARN-5802
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>Priority: Critical
> Attachments: YARN-5802.0001.patch, YARN-5802.0002.patch, 
> YARN-5802.0003.patch, YARN-5802.0004.patch, YARN-5802.0005.patch, 
> YARN-5802.0006.patch
>
>
> {{LeafQueue#updateApplicationPriority}}
> {code}
>  getOrderingPolicy().removeSchedulableEntity(attempt);
>   // Update new priority in SchedulerApplication
>   attempt.setPriority(newAppPriority);
>   getOrderingPolicy().addSchedulableEntity(attempt);
> {code}
> We should re-add the attempt to the ordering policy only when it was present 
> in the first call. Otherwise, removal of the application attempt will try to 
> iterate over a killed application that is still present in the pending 
> ordering policy, which can cause the RM to crash.






[jira] [Created] (YARN-5816) TestDelegationTokenRenewer#testCancelWithMultipleAppSubmissions is still flakey

2016-11-02 Thread Daniel Templeton (JIRA)
Daniel Templeton created YARN-5816:
--

 Summary: 
TestDelegationTokenRenewer#testCancelWithMultipleAppSubmissions is still flakey
 Key: YARN-5816
 URL: https://issues.apache.org/jira/browse/YARN-5816
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager, test
Reporter: Daniel Templeton
Priority: Minor


Even after YARN-5057, 
TestDelegationTokenRenewer#testCancelWithMultipleAppSubmissions is still flakey:

{noformat}
Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.796 sec <<< 
FAILURE! - in 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer
testCancelWithMultipleAppSubmissions(org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer)
  Time elapsed: 2.307 sec  <<< FAILURE!
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer.testCancelWithMultipleAppSubmissions(TestDelegationTokenRenewer.java:1260)
{noformat}

Note that it's the same error as YARN-5057, but on a different line.






[jira] [Commented] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-11-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629027#comment-15629027
 ] 

Daniel Templeton commented on YARN-5694:


One of the unit test failures is YARN-5043.  The other looks a little like 
YARN-5057, except that YARN-5057 was already fixed.  I was able to reproduce 
the test failure without my patch applied, so I filed YARN-5816 for it.

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-medium
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.007.patch, 
> YARN-5694.branch-2.7.001.patch, YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.
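A minimal sketch of the intended condition under that reading (field names are 
illustrative, not the actual patch):

{code}
// Start the verifier whenever embedded leader election is not already
// fencing the store; embedded failover can only be on when HA is on,
// so this also covers the non-HA case.
if (!embeddedFailoverEnabled) {
  verifyActiveStatusThread = new VerifyActiveStatusThread();
  verifyActiveStatusThread.start();
}
{code}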






[jira] [Resolved] (YARN-5721) NPE at AMRMClientImpl.getMatchingRequests

2016-11-02 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton resolved YARN-5721.

Resolution: Duplicate

Thanks for the patch [~szape]! [~haibochen] already beat you to it in 
YARN-5753, though.

> NPE at AMRMClientImpl.getMatchingRequests
> -
>
> Key: YARN-5721
> URL: https://issues.apache.org/jira/browse/YARN-5721
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
> Environment: Tested on Windows 10, in Dockerized Linux containers & 
> Ubuntu 16.04 with Java 7, Java 8.
>Reporter: Zoltán Zvara
>Priority: Blocker
>
> The following NPE was thrown using Spark 2.1.0-SNAPSHOT (as the client) after 
> changing the Hadoop dependency to the latest version (at the time the error 
> was generated).
> {{2016-10-10 11:33:53,392 ERROR yarn.ApplicationMaster: Uncaught exception: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.getMatchingRequests(AMRMClientImpl.java:668)
> at 
> org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.getMatchingRequests(AMRMClientImpl.java:651)
>   at 
> org.apache.spark.deploy.yarn.YarnAllocator.getPendingAtLocation(YarnAllocator.scala:210)
>   at 
> org.apache.spark.deploy.yarn.YarnAllocator.getPendingAllocate(YarnAllocator.scala:203)
>   at 
> org.apache.spark.deploy.yarn.YarnAllocator.updateResourceRequests(YarnAllocator.scala:318)
>   at 
> org.apache.spark.deploy.yarn.YarnAllocator.allocateResources(YarnAllocator.scala:278)
>   at 
> org.apache.spark.deploy.yarn.ApplicationMaster.registerAM(ApplicationMaster.scala:350)
>   at 
> org.apache.spark.deploy.yarn.ApplicationMaster.runExecutorLauncher(ApplicationMaster.scala:418)
>   at 
> org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:250)}}
> We've also pulled the latest code (1 hour ago) from the repository and ran a 
> test for {{getMatchingRequests}}; the same NPE was encountered.
> {{getMatchingRequests}} should never throw an NPE, even if it is called 
> right after the client has been started.






[jira] [Commented] (YARN-5811) ConfigurationProvider must implement Closeable interface

2016-11-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629067#comment-15629067
 ] 

Daniel Templeton commented on YARN-5811:


[~bolshakov.de...@gmail.com], thanks for updating the patch.

The inner method pattern is a common one in YARN.  I don't disagree that it 
seems superfluous, but I would rather not rip it out here in an unrelated 
patch.  If you feel strongly about taking it out, let's open another JIRA to 
discuss it.

> ConfigurationProvider must implement Closeable interface
> 
>
> Key: YARN-5811
> URL: https://issues.apache.org/jira/browse/YARN-5811
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: yarn
>Reporter: Denis Bolshakov
>Priority: Minor
>  Labels: newbie
> Attachments: YARN-5811.1.patch, YARN-5811.3.patch, YARN-5811.5.patch, 
> YARN-5811.6.patch
>
>
> ConfigurationProvider declares a close method; it would be nice if the class 
> implemented the Closeable interface, allowing use of `try with resources`






[jira] [Commented] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629082#comment-15629082
 ] 

Hadoop QA commented on YARN-5815:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 35m 
43s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5815 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836563/YARN-5815.0001.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1c4bbbc81882 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / cb5cc0d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13752/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13752/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}

[jira] [Commented] (YARN-5368) memory leak at timeline server

2016-11-02 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629088#comment-15629088
 ] 

Brahma Reddy Battula commented on YARN-5368:


[~jlowe] and [~jeagles], thanks for your inputs.

bq. Do you have gc logging enabled for the nodemanager JVM? It would be 
interesting to know if it was trying to run one or more GC cycles during that 
time. If it wasn't GC cycles then I'm not sure how increased off-heap memory 
would directly contribute to slower resource localization unless the machine 
was near or at the point where it started swapping.

GC looks normal, and after applying YARN-3491, ResourceLocalization is also 
normal. For the RES memory increase, I still need to try the options [~jeagles] 
suggested.

However, I suspect {{db.compactRange(null, null);}}: the file count does not 
decrease after it runs. If we close and reopen the db, then the files do get 
deleted (though not from memory).
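
For illustration, a minimal sketch of that behaviour (not the actual 
timeline-store code), using the org.iq80.leveldb API exposed by leveldbjni; 
the db path is a placeholder:

{code}
import java.io.File;
import java.io.IOException;

import org.fusesource.leveldbjni.JniDBFactory;
import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;

public class CompactVsReopen {
  public static void main(String[] args) throws IOException {
    File dbPath = new File("/tmp/timeline-leveldb"); // placeholder path
    Options options = new Options().createIfMissing(true);

    DB db = JniDBFactory.factory.open(dbPath, options);
    // Compacting the full key range rewrites the data, but the obsolete
    // .sst files were observed to remain on disk afterwards.
    db.compactRange(null, null);

    // Only after a close/reopen cycle were the obsolete files deleted.
    db.close();
    db = JniDBFactory.factory.open(dbPath, options);
    db.close();
  }
}
{code}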


> memory leak at timeline server
> --
>
> Key: YARN-5368
> URL: https://issues.apache.org/jira/browse/YARN-5368
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineserver
>Affects Versions: 2.7.1
> Environment: HDP2.4
> CentOS 6.7
> jdk1.8.0_72
>Reporter: Wataru Yukawa
>
> memory usage of timeline server machine increases gradually.
> https://gyazo.com/952dad96c77ae053bae2e4d8c8ab0572
> please check since April.
> According to my investigation, timeline server used about 25GB.
> top command result
> {code}
> 90577 yarn  20   0 28.4g  25g  12m S  0.0 40.1   5162:53 
> /usr/java/jdk1.8.0_72/bin/java -Dproc_timelineserver -Xmx1024m 
> -Dhdp.version=2.4.0.0-169 -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn 
> -Dyarn.log.dir=/var/log/hadoop-yarn/yarn ...
> {code}
> ps command result
> {code}
> $ ps ww 90577
>  90577 ?Sl   5162:53 /usr/java/jdk1.8.0_72/bin/java 
> -Dproc_timelineserver -Xmx1024m -Dhdp.version=2.4.0.0-169 
> -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn 
> -Dyarn.log.dir=/var/log/hadoop-yarn/yarn 
> -Dhadoop.log.file=yarn-yarn-timelineserver-myhost.log 
> -Dyarn.log.file=yarn-yarn-timelineserver-myhost.log -Dyarn.home.dir= 
> -Dyarn.id.str=yarn -Dhadoop.root.logger=INFO,EWMA,RFA 
> -Dyarn.root.logger=INFO,EWMA,RFA 
> -Djava.library.path=:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
>  -Dyarn.policy.file=hadoop-policy.xml 
> -Djava.io.tmpdir=/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir 
> -Dhadoop.log.dir=/var/log/hadoop-yarn/yarn 
> -Dyarn.log.dir=/var/log/hadoop-yarn/yarn 
> -Dhadoop.log.file=yarn-yarn-timelineserver-myhost.log 
> -Dyarn.log.file=yarn-yarn-timelineserver-myhost.log 
> -Dyarn.home.dir=/usr/hdp/current/hadoop-yarn-timelineserver 
> -Dhadoop.home.dir=/usr/hdp/2.4.0.0-169/hadoop 
> -Dhadoop.root.logger=INFO,EWMA,RFA -Dyarn.root.logger=INFO,EWMA,RFA 
> -Djava.library.path=:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir:/usr/hdp/2.4.0.0-169/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.0.0-169/hadoop/lib/native:/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir
>  -classpath 
> /usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/lib/*:/usr/hdp/2.4.0.0-169/hadoop/.//*:/usr/hdp/2.4.0.0-169/hadoop-hdfs/./:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/*:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//*:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/*:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//*:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/*:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//*::/usr/hdp/2.4.0.0-169/tez/*:/usr/hdp/2.4.0.0-169/tez/lib/*:/usr/hdp/2.4.0.0-169/tez/conf:/usr/hdp/2.4.0.0-169/tez/*:/usr/hdp/2.4.0.0-169/tez/lib/*:/usr/hdp/2.4.0.0-169/tez/conf:/usr/hdp/current/hadoop-yarn-timelineserver/.//*:/usr/hdp/current/hadoop-yarn-timelineserver/lib/*:/usr/hdp/2.4.0.0-169/hadoop/conf/timelineserver-config/log4j.properties
>  
> org.apache.hadoop.yarn.server.applicationhistoryservice.ApplicationHistoryServer
> {code}
>  
> Alghough I set -Xmx1024m, actual memory usage is 25GB.
> After I restart timeline server, memory usage of timeline server machine 
> decreases.
> https://gyazo.com/130600c17a7d41df8606727a859ae7e3
> Now timelineserver uses less than 1GB memory.
> top command result
> {code}
>  6163 yarn  20   0 3959m 783m  46m S  0.3  1.2   3:37.60 
> /usr/java/jdk1.8.0_72/bin/java -Dproc_timelineserver -Xmx1024m 
> -Dhdp.version=2.4.0.0-169 ...
> {code}
> I suspect memory leak at timeline server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (YARN-4862) Handle duplicate completed containers in RMNodeImpl

2016-11-02 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629078#comment-15629078
 ] 

Jason Lowe commented on YARN-4862:
--

I don't think we need to worry too much about optimizing the case where the RM 
doesn't know about the application corresponding to an NM container status, 
since that should be a relatively rare event.

Test failure is unrelated and tracked by YARN-5548.

+1 for the latest patch.  I'll commit this later today if there are no 
objections.

> Handle duplicate completed containers in RMNodeImpl
> ---
>
> Key: YARN-4862
> URL: https://issues.apache.org/jira/browse/YARN-4862
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
> Attachments: 0001-YARN-4862.patch, 0002-YARN-4862.patch, 
> 0003-YARN-4862.patch, YARN-4862-004.patch, YARN-4862-005.patch, 
> YARN-4862-006.patch
>
>
> As per 
> [comment|https://issues.apache.org/jira/browse/YARN-4852?focusedCommentId=15209689&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15209689]
>  from [~sharadag], there should be safe guard for duplicated container status 
> in RMNodeImpl before creating UpdatedContainerInfo. 
> Or else in heavily loaded cluster where event processing is gradually slow, 
> if any duplicated container are sent to RM(may be bug in NM also), there is 
> significant impact that RMNodImpl always create UpdatedContainerInfo for 
> duplicated containers. This result in increase in the heap memory and causes 
> problem like YARN-4852.
> This is an optimization for issue kind YARN-4852



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5815:
---
Attachment: YARN-5815.0002.patch

> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch, YARN-5815.0002.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629115#comment-15629115
 ] 

Varun Saxena commented on YARN-5815:


We are draining the dispatcher just before the new assertions, but that does 
not guarantee the scheduler event has been processed, because scheduler events 
go into another queue. IIUC, sandflee was attempting to solve this in 
YARN-5375, because currently there is no mechanism to wait until a scheduler 
event is processed; this leads to random failures for several test cases. We 
can only wait on an event expected to be sent to the RMApp or attempt, which 
won't be the case here.

As of now, to fix this, we can simply loop over the condition and sleep until 
a certain timeout, as in the sketch below.
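
A minimal sketch of that loop, with a hypothetical {{isConditionMet()}} 
standing in for the assertion that currently fails:

{code}
// Poll the condition with a short sleep instead of asserting immediately;
// isConditionMet() is a placeholder for e.g. checking the number of
// activated applications after RM restart.
long deadline = System.currentTimeMillis() + 10000; // overall 10s timeout
while (!isConditionMet() && System.currentTimeMillis() < deadline) {
  Thread.sleep(50); // short sleep so the event is noticed quickly
}
Assert.assertTrue("Scheduler event was not processed in time",
    isConditionMet());
{code}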

> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch, YARN-5815.0002.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629155#comment-15629155
 ] 

Varun Saxena commented on YARN-5815:


[~bibinchundatt], thanks for the patch.
# Shouldn't we have another loop checking for the conditions that are failing, 
instead of moving the existing loop down?
# Also, the sleep in the loop should be set to a value lower than 500 ms.

> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch, YARN-5815.0002.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-5815:
---
Attachment: YARN-5815.0003.patch

> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch, YARN-5815.0002.patch, 
> YARN-5815.0003.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629193#comment-15629193
 ] 

Bibin A Chundatt commented on YARN-5815:


[~varun_saxena]
The sleep is reduced to 50 ms, retried up to 50 times.
{quote}
Shouldn't we have another loop checking for the conditions that are failing, 
instead of moving the existing loop down?
{quote}
I had found that issue as well; the attached patch handles both comments.

> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch, YARN-5815.0002.patch, 
> YARN-5815.0003.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629194#comment-15629194
 ] 

Rohith Sharma K S commented on YARN-4498:
-

It looks like an issue in branch-2.8. [~bibinchundatt], could you check why 
null is coming? Also, can you upload a patch if you know the reason?
cc: [~Naganarasimha]

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> apps.xml
>
>
> Currently nodelabel stats per application is not available through REST like 
> currently used labels by all live containers, total stats of containers per 
> label for app etc..
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629233#comment-15629233
 ] 

Billie Rinaldi commented on YARN-5808:
--

We haven't modified yarn.cmd for slider or services-api yet. Let's open a 
separate ticket.

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629277#comment-15629277
 ] 

Hadoop QA commented on YARN-5815:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 31 unchanged - 1 fixed = 31 total (was 32) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 38m 41s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 48s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestResourceTrackerService |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5815 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836584/YARN-5815.0002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2ae6f2d8e5aa 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0dc2a6a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13753/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13753/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13753/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (YARN-5587) Add support for resource profiles

2016-11-02 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629384#comment-15629384
 ] 

Varun Vasudev commented on YARN-5587:
-

{quote}
I'm not quite sure about how to apply override capacity, will it override all 
the fields or only non-zero fields: For example, assume small profile is <4G, 1 
vcore, 1 disk>, and override is <8G, 2 vcore>, what should be the final 
resource? <8G, 2 vcore, 0 disk> or <8G, 2 vcore, 1 disk>?
{quote}
The final resource will be <8G, 2 vcore, 1 disk>. The override is meant to let 
you override individual resource types. In practice, this will mean that users 
can override memory and vcores since those are the only resource types AMs 
currently expose to the user.
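
A minimal sketch of those semantics, not the actual 
{{ProfileCapability#toResource}} implementation; resource types are modelled 
here as a plain name-to-value map purely for illustration:

{code}
import java.util.HashMap;
import java.util.Map;

public class OverrideSemantics {
  // Only non-zero override values replace the corresponding profile values.
  static Map<String, Long> applyOverride(Map<String, Long> profile,
      Map<String, Long> override) {
    Map<String, Long> result = new HashMap<>(profile);
    for (Map.Entry<String, Long> e : override.entrySet()) {
      if (e.getValue() != null && e.getValue() > 0) {
        result.put(e.getKey(), e.getValue());
      }
    }
    return result;
  }

  public static void main(String[] args) {
    Map<String, Long> small = new HashMap<>();
    small.put("memory-mb", 4096L);
    small.put("vcores", 1L);
    small.put("disks", 1L);

    Map<String, Long> override = new HashMap<>();
    override.put("memory-mb", 8192L);
    override.put("vcores", 2L);

    // Yields memory-mb=8192, vcores=2, disks=1: the disk value survives.
    System.out.println(applyOverride(small, override));
  }
}
{code}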

{quote}
From ProfileCapability#toResource, it is the latter one. Probably it is better 
to add javadocs to ProfileCapability#getProfileCapabilityOverride. I cannot 
find clear explanations in the Javadoc of ProfileCapability.
{quote}
Fair point - I'll fix the docs.

bq. My last question is, after this patch goes in, which patches are required 
for the feature to be end-to-end complete and testable?

After this patch, only changes to the distributed shell AM and MR AM are 
required to test the feature end to end.

If the current approach is ok by you, I'll clean up the patch and upload a new 
one.


> Add support for resource profiles
> -
>
> Key: YARN-5587
> URL: https://issues.apache.org/jira/browse/YARN-5587
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Varun Vasudev
>Assignee: Varun Vasudev
>  Labels: oct16-hard
> Attachments: YARN-5587-YARN-3926.001.patch, 
> YARN-5587-YARN-3926.002.patch, YARN-5587-YARN-3926.003.patch, 
> YARN-5587-YARN-3926.004.patch, YARN-5587-YARN-3926.005.patch, 
> YARN-5587-YARN-3926.006.patch, YARN-5587-YARN-3926.007.patch, 
> YARN-5587-YARN-3926.008.patch
>
>
> Add support for resource profiles on the RM side to allow users to use 
> shorthands to specify resource requirements.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5258) Document Use of Docker with LinuxContainerExecutor

2016-11-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629472#comment-15629472
 ] 

Daniel Templeton commented on YARN-5258:


Can I get some review love, [~shaneku...@gmail.com], [~varun_saxena], 
[~tangzhankun], [~sidharta-s]?  Let's get this in now that the 
{{DockerContainerExecutor}} is gone in trunk and deprecated in branch-2 
(YARN-5388).

> Document Use of Docker with LinuxContainerExecutor
> --
>
> Key: YARN-5258
> URL: https://issues.apache.org/jira/browse/YARN-5258
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-easy
> Attachments: YARN-5258.001.patch, YARN-5258.002.patch
>
>
> There aren't currently any docs that explain how to configure Docker and all 
> of its various options aside from reading all of the JIRAs.  We need to 
> document the configuration, use, and troubleshooting, along with helpful 
> examples.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5815) Random failure TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629479#comment-15629479
 ] 

Hadoop QA commented on YARN-5815:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 0 new + 31 unchanged - 1 fixed = 31 total (was 32) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 
34s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5815 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836588/YARN-5815.0003.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 6025450c7c93 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0dc2a6a |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13754/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13754/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Random failure 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> -
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug

[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629515#comment-15629515
 ] 

Gour Saha commented on YARN-5808:
-

Ok, makes sense.

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629545#comment-15629545
 ] 

Bibin A Chundatt edited comment on YARN-4498 at 11/2/16 4:46 PM:
-

Thank you [~rohithsharma]
I had discussed this offline with [~Naganarasimha] and was planning to handle 
the same.
Reopening the issue to handle the null shown in the JSON. For an empty 
ArrayList, 2.8 returns null in the JSON; IIUC, in JSON null is also considered 
empty. To keep trunk and 2.8 in sync, we will avoid emitting resourceInfo for 
finished apps.


was (Author: bibinchundatt):
Thank you [~rohithsharma]
Will [~Naganarasimha] had discussed this offline and was planning to handle the 
same.
Reopening the issue to handle null shown in json . For empty arrayList in case 
of 2.8 its returning null json. IIUC in case of json null is also considered 
empty. For making both trunk and 2.8 in sync will avoid resourceInfo for 
finished apps.

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> apps.xml
>
>
> Currently nodelabel stats per application is not available through REST like 
> currently used labels by all live containers, total stats of containers per 
> label for app etc..
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt reopened YARN-4498:


Thank you [~rohithsharma]
I had discussed this offline with [~Naganarasimha] and was planning to handle 
the same.
Reopening the issue to handle the null shown in the JSON. For an empty 
ArrayList, 2.8 returns null in the JSON; IIUC, in JSON null is also considered 
empty. To keep trunk and 2.8 in sync, we will avoid emitting resourceInfo for 
finished apps.
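
For illustration, a minimal sketch of the intended fix, with hypothetical 
names (the real web-service DAO code differs):

{code}
// Hypothetical sketch: populate the per-label resource stats only for apps
// that are still running, so finished apps omit the field entirely instead
// of branch-2.8 emitting null and trunk emitting an empty list.
if (!isFinished(app)) { // isFinished()/buildResourceInfo() are placeholders
  appInfo.setResourceInfo(buildResourceInfo(app));
}
{code}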

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> apps.xml
>
>
> Currently nodelabel stats per application is not available through REST like 
> currently used labels by all live containers, total stats of containers per 
> label for app etc..
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4498:
---
Attachment: YARN-4498.branch-2.8.addendum.001.patch
YARN-4498.trunk.addendum.001.patch

Attaching addendum patches for both trunk and branch-2.8.

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> YARN-4498.branch-2.8.addendum.001.patch, YARN-4498.trunk.addendum.001.patch, 
> apps.xml
>
>
> Currently nodelabel stats per application is not available through REST like 
> currently used labels by all live containers, total stats of containers per 
> label for app etc..
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5817) Make yarn.cmd changes required for slider and servicesapi

2016-11-02 Thread Gour Saha (JIRA)
Gour Saha created YARN-5817:
---

 Summary: Make yarn.cmd changes required for slider and servicesapi
 Key: YARN-5817
 URL: https://issues.apache.org/jira/browse/YARN-5817
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Gour Saha
 Fix For: yarn-native-services


As per YARN-5808 and other changes made to the yarn script, there are probably 
some corresponding changes required in 
_hadoop-yarn-project/hadoop-yarn/bin/yarn.cmd_. We need to identify and make 
those changes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Gour Saha (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629582#comment-15629582
 ] 

Gour Saha commented on YARN-5808:
-

Changed state to "submit patch". Will wait for the QA report. I opened 
YARN-5817 for the yarn.cmd change.

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5618) Support for Intra queue preemption framework

2016-11-02 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-5618.
---
Resolution: Fixed

Closing this JIRA as Done, since this change went in along with YARN-2009.

> Support for Intra queue preemption framework
> 
>
> Key: YARN-5618
> URL: https://issues.apache.org/jira/browse/YARN-5618
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Sunil G
>Assignee: Sunil G
>
> Currently inter-queue preemption framework covers the basics (configs and 
> scheduling monitor interval etc). This new framework will come as new 
> CandidateSelector policy. Priority and user-limit will be a part of this 
> framework.
> This is a tracking jira for the framework impl alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629632#comment-15629632
 ] 

Hadoop QA commented on YARN-5808:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
14s{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  3m 
14s{color} | {color:red} hadoop-yarn in yarn-native-services failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  3m  
9s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
13s{color} | {color:green} The patch generated 0 new + 90 unchanged - 1 fixed = 
90 total (was 91) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
12s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
36s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
21s{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5808 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836404/YARN-5808-yarn-native-services.001.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  mvnsite  unit  |
| uname | Linux f9719405415e 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 87f09be |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13755/artifact/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn.txt
 |
| shellcheck | v0.4.4 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13755/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13755/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/13755/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13755/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5618) Support for Intra queue preemption framework

2016-11-02 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G resolved YARN-5618.
---
Resolution: Done

> Support for Intra queue preemption framework
> 
>
> Key: YARN-5618
> URL: https://issues.apache.org/jira/browse/YARN-5618
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Sunil G
>Assignee: Sunil G
>
> Currently inter-queue preemption framework covers the basics (configs and 
> scheduling monitor interval etc). This new framework will come as new 
> CandidateSelector policy. Priority and user-limit will be a part of this 
> framework.
> This is a tracking jira for the framework impl alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Reopened] (YARN-5618) Support for Intra queue preemption framework

2016-11-02 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G reopened YARN-5618:
---

Changing resolution status.

> Support for Intra queue preemption framework
> 
>
> Key: YARN-5618
> URL: https://issues.apache.org/jira/browse/YARN-5618
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacityscheduler
>Reporter: Sunil G
>Assignee: Sunil G
>
> Currently inter-queue preemption framework covers the basics (configs and 
> scheduling monitor interval etc). This new framework will come as new 
> CandidateSelector policy. Priority and user-limit will be a part of this 
> framework.
> This is a tracking jira for the framework impl alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4498:
---
Attachment: (was: YARN-4498.branch-2.8.addendum.001.patch)

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> YARN-4498.trunk.addendum.001.patch, apps.xml
>
>
> Currently nodelabel stats per application is not available through REST like 
> currently used labels by all live containers, total stats of containers per 
> label for app etc..
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4498:
---
Attachment: (was: YARN-4498.trunk.addendum.001.patch)

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> YARN-4498.trunk.addendum.001.patch, apps.xml
>
>
> Currently nodelabel stats per application is not available through REST like 
> currently used labels by all live containers, total stats of containers per 
> label for app etc..
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4498:
---
Attachment: YARN-4498.trunk.addendum.001.patch

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> YARN-4498.trunk.addendum.001.patch, apps.xml
>
>
> Currently nodelabel stats per application is not available through REST like 
> currently used labels by all live containers, total stats of containers per 
> label for app etc..
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629646#comment-15629646
 ] 

Hadoop QA commented on YARN-5611:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 41s{color} | {color:orange} root: The patch generated 24 new + 700 unchanged 
- 3 fixed = 724 total (was 703) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 11 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
4s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api 
generated 2 new + 123 unchanged - 0 fixed = 125 total (was 123) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch 
failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
23s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
51s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 35m 30s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}104m 
52s{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}231m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Fa

[jira] [Updated] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Bibin A Chundatt (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bibin A Chundatt updated YARN-4498:
---
Attachment: YARN-4498.branch-2.8.addendum.001.patch

> Application level node labels stats to be available in REST
> ---
>
> Key: YARN-4498
> URL: https://issues.apache.org/jira/browse/YARN-4498
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: api, client, resourcemanager
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
>  Labels: oct16-medium
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: 0001-YARN-4498.patch, YARN-4498.0002.patch, 
> YARN-4498.0003.patch, YARN-4498.0004.patch, YARN-4498.branch-2.8.0001.patch, 
> YARN-4498.branch-2.8.addendum.001.patch, YARN-4498.trunk.addendum.001.patch, 
> apps.xml
>
>
> Currently nodelabel stats per application is not available through REST like 
> currently used labels by all live containers, total stats of containers per 
> label for app etc..
> CLI and web UI scenarios will be handled separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5818) Support the Docker Live Restore feature

2016-11-02 Thread Shane Kumpf (JIRA)
Shane Kumpf created YARN-5818:
-

 Summary: Support the Docker Live Restore feature
 Key: YARN-5818
 URL: https://issues.apache.org/jira/browse/YARN-5818
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: yarn
Reporter: Shane Kumpf


Docker 1.12.x introduced the docker [Live 
Restore|https://docs.docker.com/engine/admin/live-restore/] feature which 
allows docker containers to survive docker daemon restarts/upgrades. Support 
for this feature should be added to YARN to allow docker changes and upgrades 
to be less impactful to existing containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629692#comment-15629692
 ] 

Allen Wittenauer commented on YARN-5808:


Also, that path needs to get cygwin'd.

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5818) Support the Docker Live Restore feature

2016-11-02 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629691#comment-15629691
 ] 

Shane Kumpf commented on YARN-5818:
---

Did some initial testing here and unfortunately, given that docker is a 
client/server model, when the docker daemon is down for a restart/upgrade, 
client operations fail with an EOF exception. Our use of {{docker wait}} for 
retrieving the container's exit code breaks down because the client operations 
fail during the restart/upgrade.
{code}
An error occurred trying to connect: Post 
http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/c11692777816e44049d610c4ad358a24eefbff707cdbd85c24df3d153c80401e/wait:
 EOF
{code}

The docker community believes this is working as intended and does not plan to 
fix this behavior. It appears we will have to handle retries in c-e.
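
To make the retry idea concrete, here is a minimal Java sketch of a bounded 
retry loop around {{docker wait}}; the retry count, backoff, and helper class 
are illustrative assumptions, not the actual container-executor change (which 
would be in C):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class DockerWaitRetrySketch {
  // Assumed retry parameters; real values would need to be configurable.
  private static final int MAX_RETRIES = 5;
  private static final long BACKOFF_MS = 2000L;

  /**
   * Runs "docker wait <containerId>" and returns the container's exit code,
   * retrying while the docker daemon is down for a restart/upgrade.
   */
  public static int waitForContainer(String containerId) throws Exception {
    for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
      Process p = new ProcessBuilder("docker", "wait", containerId).start();
      try (BufferedReader out = new BufferedReader(
          new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
        String line = out.readLine();
        if (p.waitFor() == 0 && line != null) {
          // docker wait prints the container's exit code on success.
          return Integer.parseInt(line.trim());
        }
      }
      // The client call fails (EOF) while the daemon restarts; back off.
      Thread.sleep(BACKOFF_MS * attempt);
    }
    throw new IllegalStateException("docker wait failed after retries");
  }
}
{code}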

> Support the Docker Live Restore feature
> ---
>
> Key: YARN-5818
> URL: https://issues.apache.org/jira/browse/YARN-5818
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>
> Docker 1.12.x introduced the docker [Live 
> Restore|https://docs.docker.com/engine/admin/live-restore/] feature which 
> allows docker containers to survive docker daemon restarts/upgrades. Support 
> for this feature should be added to YARN to allow docker changes and upgrades 
> to be less impactful to existing containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629688#comment-15629688
 ] 

Allen Wittenauer commented on YARN-5808:


Is there a reason this patch is directly appending instead of using 
hadoop_add_param'ing the slider.libdir?

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-11-02 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629737#comment-15629737
 ] 

Jian He commented on YARN-5694:
---

Sorry, I don't quite get it. Why is the active thread needed to run in non-HA 
mode? 
Even in HA mode, it may not be needed, because the curator library can 
automatically detect whether the RM is still active and send a notification. 
IIUC, there is no need for a separate thread to detect that.
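
For context, a minimal sketch of the Curator-based notification being referred 
to; the connect string, latch path, and listener bodies are illustrative 
assumptions:

{code}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.framework.recipes.leader.LeaderLatchListener;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class LeaderNotificationSketch {
  public static void main(String[] args) throws Exception {
    CuratorFramework client = CuratorFrameworkFactory.newClient(
        "localhost:2181", new ExponentialBackoffRetry(1000, 3));
    client.start();
    // Assumed latch path; the RM would use its own election znode.
    LeaderLatch latch = new LeaderLatch(client, "/rm-leader-sketch");
    latch.addListener(new LeaderLatchListener() {
      @Override
      public void isLeader() {
        System.out.println("became active");
      }
      @Override
      public void notLeader() {
        // Push-style notification: no polling thread required.
        System.out.println("lost leadership");
      }
    });
    latch.start();
  }
}
{code}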

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-medium
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.007.patch, 
> YARN-5694.branch-2.7.001.patch, YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-11-02 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629737#comment-15629737
 ] 

Jian He edited comment on YARN-5694 at 11/2/16 5:32 PM:


Sorry, I don't quite get it. Why is the active thread needed to run in non-HA 
mode? 
Even in HA mode, it may not be needed, because the curator leader election 
library can automatically detect whether the RM is still active and send a 
notification. IIUC, there is no need for a separate thread to detect that.


was (Author: jianhe):
sorry, I don't quite get it. why is the active thread needed to run in non-HA 
mode ? 
Even in HA mode, it may not be needed, because the curator library can 
automatically detect whether the RM is still active and send notification.  
IIUC, no need a separate thread to detect that 

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-medium
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.007.patch, 
> YARN-5694.branch-2.7.001.patch, YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629790#comment-15629790
 ] 

Billie Rinaldi commented on YARN-5808:
--

Nope, I'm just learning how the new scripts work. Love it, btw.

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629793#comment-15629793
 ] 

Billie Rinaldi commented on YARN-5808:
--

Does this mean I should use hadoop_translate_cygwin_path?

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5694) ZKRMStateStore should always start its verification thread to prevent accidental state store corruption

2016-11-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15629870#comment-15629870
 ] 

Daniel Templeton commented on YARN-5694:


The concern is that the leader election and state store can be configured to 
use different ZK instances in HA mode.  In that case, the state store still has 
to protect itself.  In non-HA, it may still be possible for a second RM to 
start using the same cluster ID and same ZK instance, which would corrupt the 
state store.  By having the state store be always vigilant, we protect 
ourselves from state store corruption in all cases.
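
As a rough illustration of "always vigilant", a minimal sketch of a polling 
verification loop; the poll interval and the {{isActiveState()}} / 
{{notifyStoreOperationFailed()}} helpers are stand-ins, not the actual 
{{ZKRMStateStore}} internals:

{code}
/** Minimal sketch of a state-store verification thread. */
class VerifyActiveStatusSketch extends Thread {
  private static final long POLL_INTERVAL_MS = 1000L; // assumed interval
  private volatile boolean running = true;

  @Override
  public void run() {
    while (running) {
      try {
        // Confirm this RM still owns the store's fencing node before
        // allowing further writes; otherwise fence ourselves off.
        if (!isActiveState()) {
          notifyStoreOperationFailed();
          return;
        }
        Thread.sleep(POLL_INTERVAL_MS);
      } catch (InterruptedException e) {
        return;
      }
    }
  }

  // Stubs standing in for the real ZooKeeper checks.
  private boolean isActiveState() { return true; }
  private void notifyStoreOperationFailed() { }
}
{code}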

> ZKRMStateStore should always start its verification thread to prevent 
> accidental state store corruption
> ---
>
> Key: YARN-5694
> URL: https://issues.apache.org/jira/browse/YARN-5694
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
>  Labels: oct16-medium
> Attachments: YARN-5694.001.patch, YARN-5694.002.patch, 
> YARN-5694.003.patch, YARN-5694.004.patch, YARN-5694.004.patch, 
> YARN-5694.005.patch, YARN-5694.006.patch, YARN-5694.007.patch, 
> YARN-5694.branch-2.7.001.patch, YARN-5694.branch-2.7.002.patch
>
>
> There are two cases.  In branch-2.7, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is always started, even when 
> using embedded or Curator failover.  In branch-2.8, the 
> {{ZKRMStateStore.VerifyActiveStatusThread}} is only started when HA is 
> disabled, which makes no sense.  Based on the JIRA that introduced that 
> change (YARN-4559), I believe the intent was to start it only when embedded 
> failover is disabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Billie Rinaldi (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Billie Rinaldi updated YARN-5808:
-
Attachment: YARN-5808-yarn-native-services.002.patch

Attaching a new patch addressing [~aw]'s comments. Thanks for the review, 
Allen!

> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch, 
> YARN-5808-yarn-native-services.002.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5819) Verify preemption works between applications in the same leaf queue

2016-11-02 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-5819:
--

 Summary: Verify preemption works between applications in the same 
leaf queue
 Key: YARN-5819
 URL: https://issues.apache.org/jira/browse/YARN-5819
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: fairscheduler
Affects Versions: 2.9.0
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


JIRA to track the unit test(s) verifying preemption between applications in 
the same queue. Note that this can only be fairshare preemption.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5808) Add gc log options to the yarn daemon script when starting services-api

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630062#comment-15630062
 ] 

Hadoop QA commented on YARN-5808:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
56s{color} | {color:green} yarn-native-services passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  3m 
11s{color} | {color:red} hadoop-yarn in yarn-native-services failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  3m  
6s{color} | {color:red} hadoop-yarn in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
12s{color} | {color:green} The patch generated 0 new + 90 unchanged - 1 fixed = 
90 total (was 91) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
11s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
36s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
20s{color} | {color:red} The patch generated 11 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5808 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836633/YARN-5808-yarn-native-services.002.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  mvnsite  unit  |
| uname | Linux f8e177842387 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | yarn-native-services / 87f09be |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13757/artifact/patchprocess/branch-mvnsite-hadoop-yarn-project_hadoop-yarn.txt
 |
| shellcheck | v0.4.4 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-YARN-Build/13757/artifact/patchprocess/patch-mvnsite-hadoop-yarn-project_hadoop-yarn.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13757/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-YARN-Build/13757/artifact/patchprocess/patch-asflicense-problems.txt
 |
| modules | C: hadoop-yarn-project/hadoop-yarn U: 
hadoop-yarn-project/hadoop-yarn |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13757/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add gc log options to the yarn daemon script when starting services-api
> ---
>
> Key: YARN-5808
> URL: https://issues.apache.org/jira/browse/YARN-5808
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Gour Saha
>Assignee: Billie Rinaldi
> Fix For: yarn-native-services
>
> Attachments: YARN-5808-yarn-native-services.001.patch, 
> YARN-5808-yarn-native-services.002.patch
>
>
> We need to add the gc log options as below when starting services-api using 
> the yarn-daemon.sh script -
> {code}
> -XX:+PrintGC -Xloggc:$YARN_LOG_DIR/services-api-gc.log -XX:+PrintGCDetails 
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5815) Random failure of TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5815:
---
Summary: Random failure of 
TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart  
(was: Random failure 
TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart)

> Random failure of 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> 
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch, YARN-5815.0002.patch, 
> YARN-5815.0003.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5336) Put in some limit for accepting key-values in hbase writer

2016-11-02 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630081#comment-15630081
 ] 

Vrushali C commented on YARN-5336:
--

Hi [~haibochen] 
Wanted to check in, are you actively working on this?  If not, I actually am 
looking at a related thing and wanted to put up a patch for this.

thanks
Vrushali

> Put in some limit for accepting key-values in hbase writer
> --
>
> Key: YARN-5336
> URL: https://issues.apache.org/jira/browse/YARN-5336
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Haibo Chen
>  Labels: YARN-5355
>
> As recommended by [~jrottinghuis] , need to add in some limit (default and 
> configurable) for accepting key values to be written to the backend.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5815) Random failure of TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630080#comment-15630080
 ] 

Varun Saxena commented on YARN-5815:


Thanks [~bibinchundatt] for the latest patch.
LGTM. Will commit it shortly.

Apologies for missing it during review of YARN-5773.

> Random failure of 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> 
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Attachments: YARN-5815.0001.patch, YARN-5815.0002.patch, 
> YARN-5815.0003.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5336) Put in some limit for accepting key-values in hbase writer

2016-11-02 Thread Haibo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haibo Chen updated YARN-5336:
-
Assignee: Vrushali C  (was: Haibo Chen)

> Put in some limit for accepting key-values in hbase writer
> --
>
> Key: YARN-5336
> URL: https://issues.apache.org/jira/browse/YARN-5336
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> As recommended by [~jrottinghuis] , need to add in some limit (default and 
> configurable) for accepting key values to be written to the backend.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5336) Put in some limit for accepting key-values in hbase writer

2016-11-02 Thread Haibo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630097#comment-15630097
 ] 

Haibo Chen commented on YARN-5336:
--

Hey, [~vrushalic]. I have not gotten a chance to work on this. Assigning it to 
you.

> Put in some limit for accepting key-values in hbase writer
> --
>
> Key: YARN-5336
> URL: https://issues.apache.org/jira/browse/YARN-5336
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Haibo Chen
>  Labels: YARN-5355
>
> As recommended by [~jrottinghuis] , need to add in some limit (default and 
> configurable) for accepting key values to be written to the backend.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5815) Random failure of TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630131#comment-15630131
 ] 

Hudson commented on YARN-5815:
--

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10754 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10754/])
YARN-5815. Random failure of (varunsaxena: rev 
377919010b687dbf95f62082201cf91f5a7a2318)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestApplicationPriority.java


> Random failure of 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> 
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5815.0001.patch, YARN-5815.0002.patch, 
> YARN-5815.0003.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5815) Random failure of TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart

2016-11-02 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630133#comment-15630133
 ] 

Varun Saxena commented on YARN-5815:


Committed to trunk, branch-2 and branch-2.8
Thanks [~bibinchundatt] for your contribution and [~rohithsharma] and [~sunilg] 
for reviews.

> Random failure of 
> TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart
> 
>
> Key: YARN-5815
> URL: https://issues.apache.org/jira/browse/YARN-5815
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Bibin A Chundatt
>Assignee: Bibin A Chundatt
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: YARN-5815.0001.patch, YARN-5815.0002.patch, 
> YARN-5815.0003.patch
>
>
> {noformat}
> java.lang.AssertionError: expected:<2> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationPriority.testOrderOfActivatingThePriorityApplicationOnRMRestart(TestApplicationPriority.java:707)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4498) Application level node labels stats to be available in REST

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630348#comment-15630348
 ] 

Hadoop QA commented on YARN-4498:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 7s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
38s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} branch-2.8 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} branch-2.8 passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} branch-2.8 passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 15s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 44 unchanged - 1 fixed = 45 total (was 45) {color} 
|
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 45s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed with JDK 
v1.7.0_111. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m  5s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_101 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
| JDK v1.7.0_111 Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestClientRMTokens |
|   | hadoop.yarn.server.resourcemanager.TestAMAuthorization |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5af2af1 |
| JIRA Issue | YARN-4498 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836609/YARN-4

[jira] [Created] (YARN-5820) yarn node CLI help should be clearer

2016-11-02 Thread Grant Sohn (JIRA)
Grant Sohn created YARN-5820:


 Summary: yarn node CLI help should be clearer
 Key: YARN-5820
 URL: https://issues.apache.org/jira/browse/YARN-5820
 Project: Hadoop YARN
  Issue Type: Bug
  Components: client
Affects Versions: 2.6.0
Reporter: Grant Sohn
Priority: Trivial


Current message is:
{noformat}
usage: node
 -all   Works with -list to list all nodes.
 -list  List all running nodes. Supports optional use of
-states to filter nodes based on node state, all -all
to list all nodes.
 -statesWorks with -list to filter nodes based on input
comma-separated list of node states.
 -statusPrints the status report of the node.
{noformat}

It should be either this:
{noformat}
usage: yarn node [-list [-states |-all] | -status ]

 -all   Works with -list to list all nodes.
 -list  List all running nodes. Supports optional use of
-states to filter nodes based on node state, all -all
to list all nodes.
 -statesWorks with -list to filter nodes based on input
comma-separated list of node states.
 -statusPrints the status report of the node.
{noformat}

or that.
{noformat}
usage: yarn node -list [-states |-all] 
   yarn node -status 

 -all   Works with -list to list all nodes.
 -list  List all running nodes. Supports optional use of
-states to filter nodes based on node state, all -all
to list all nodes.
 -statesWorks with -list to filter nodes based on input
comma-separated list of node states.
 -statusPrints the status report of the node.
{noformat}

The latter is the least ambiguous.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5783) Unit tests to verify the identification of starved applications

2016-11-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630385#comment-15630385
 ] 

Daniel Templeton commented on YARN-5783:


Thanks, [~kasha].  I think we're close.

* Does {{totalAppsEverAdded()}} need to be public, or would default 
(package-private) visibility do?  Same for {{numStarvedApps()}}.
* Can the {{resourceManager.stop()}} call in {{TestFSAppStarvation.tearDown()}} 
throw an exception that would prevent the deletion of the {{ALLOC_FILE}}? (See 
the sketch below.)
* Is it worth adding a test to make sure that the same app can be starved 
multiple times in a row?
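
On the second point, a try/finally arrangement would guarantee the cleanup 
regardless of how {{stop()}} exits; a minimal sketch (the field names and file 
location are assumptions):

{code}
import java.io.File;

import org.apache.hadoop.yarn.server.resourcemanager.ResourceManager;
import org.junit.After;

public class TearDownSketch {
  // Assumed test fixtures mirroring the test under review.
  private static final File ALLOC_FILE = new File("target", "test-alloc.xml");
  private ResourceManager resourceManager;

  @After
  public void tearDown() {
    try {
      if (resourceManager != null) {
        resourceManager.stop(); // may throw at runtime
      }
    } finally {
      ALLOC_FILE.delete(); // runs even if stop() fails
    }
  }
}
{code}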


> Unit tests to verify the identification of starved applications
> ---
>
> Key: YARN-5783
> URL: https://issues.apache.org/jira/browse/YARN-5783
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Affects Versions: 2.8.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>  Labels: oct16-medium
> Attachments: yarn-5783.YARN-4752.1.patch, 
> yarn-5783.YARN-4752.2.patch, yarn-5783.YARN-4752.3.patch, 
> yarn-5783.YARN-4752.4.patch
>
>
> JIRA to track unit tests to verify the identification of starved 
> applications. An application should be marked starved only when:
> # Cluster allocation is over the configured threshold for preemption.
> # Preemption is enabled for a queue and any of the following:
> ## The queue is under its minshare for longer than minsharePreemptionTimeout
> ## One of the queue’s applications is under its fairshare for longer than 
> fairsharePreemptionTimeout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5336) Put in some limit for accepting key-values in hbase writer

2016-11-02 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630472#comment-15630472
 ] 

Vrushali C commented on YARN-5336:
--

Some other interesting points to keep in mind:

As per https://hbase.apache.org/book.html#table_schema_rules_of_thumb , we 
should aim to have cells no larger than 10 MB, or 50 MB if we use mob. 
Otherwise, consider storing your cell data in HDFS and store a pointer to the 
data in HBase.

Aim to have regions sized between 10 and 50 GB.

Aim to have cells no larger than 10 MB, or 50 MB if you use mob. Otherwise, 
consider storing your cell data in HDFS and store a pointer to the data in 
HBase.

A typical schema has between 1 and 3 column families per table. HBase tables 
should not be designed to mimic RDBMS tables.

Around 50-100 regions is a good number for a table with 1 or 2 column families. 
Remember that a region is a contiguous segment of a column family.

Keep your column family names as short as possible. The column family names are 
stored for every value (ignoring prefix encoding). They should not be 
self-documenting and descriptive like in a typical RDBMS.

About Medium sized objects (https://hbase.apache.org/book.html#hbase_mob)

While HBase can technically handle binary objects with cells that are larger 
than 100 KB in size, HBase’s normal read and write paths are optimized for 
values smaller than 100KB in size. When HBase deals with large numbers of 
objects over this threshold, referred to here as medium objects, or MOBs, 
performance is degraded due to write amplification caused by splits and 
compactions. When using MOBs, ideally your objects will be between 100KB and 
10MB. HBase FIX_VERSION_NUMBER adds support for better managing large numbers 
of MOBs while maintaining performance, consistency, and low operational 
overhead. MOB support is provided by the work done in HBASE-11339. To take 
advantage of MOB, you need to use HFile version 3. Optionally, configure the 
MOB file reader’s cache settings for each RegionServer (see Configuring the MOB 
Cache), then configure specific columns to hold MOB data. Client code does not 
need to change to take advantage of HBase MOB support. The feature is 
transparent to the client.
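
To make the limit concrete, a minimal sketch of a size guard in front of an 
HBase write; the 100 KB default and the reject-instead-of-truncate behavior 
are assumptions to be settled in the patch:

{code}
import org.apache.hadoop.hbase.client.Put;

public final class ValueSizeGuardSketch {
  // Assumed default, aligned with HBase's ~100 KB normal-path guidance.
  static final int DEFAULT_MAX_VALUE_BYTES = 100 * 1024;

  /**
   * Adds the cell only if the value is under the limit; returns whether it
   * was added.
   */
  static boolean addIfWithinLimit(Put put, byte[] family, byte[] qualifier,
      byte[] value, int maxValueBytes) {
    if (value != null && value.length > maxValueBytes) {
      // Caller can log and drop, or store the value in HDFS and keep a
      // pointer in HBase instead, per the schema rules of thumb above.
      return false;
    }
    put.addColumn(family, qualifier, value);
    return true;
  }
}
{code}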



> Put in some limit for accepting key-values in hbase writer
> --
>
> Key: YARN-5336
> URL: https://issues.apache.org/jira/browse/YARN-5336
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> As recommended by [~jrottinghuis] , need to add in some limit (default and 
> configurable) for accepting key values to be written to the backend.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (YARN-5336) Put in some limit for accepting key-values in hbase writer

2016-11-02 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630472#comment-15630472
 ] 

Vrushali C edited comment on YARN-5336 at 11/2/16 8:57 PM:
---

Some other interesting points to keep in mind:

As per https://hbase.apache.org/book.html#table_schema_rules_of_thumb , we 
should aim to have cells no larger than 10 MB, or 50 MB if we use mob. 
Otherwise, consider storing your cell data in HDFS and store a pointer to the 
data in HBase.

Aim to have regions sized between 10 and 50 GB.

Aim to have cells no larger than 10 MB, or 50 MB if you use mob. Otherwise, 
consider storing your cell data in HDFS and store a pointer to the data in 
HBase.

A typical schema has between 1 and 3 column families per table. HBase tables 
should not be designed to mimic RDBMS tables. Around 50-100 regions is a good 
number for a table with 1 or 2 column families. Remember that a region is a 
contiguous segment of a column family.

Keep your column family names as short as possible. The column family names are 
stored for every value (ignoring prefix encoding). They should not be 
self-documenting and descriptive like in a typical RDBMS.

About Medium sized objects (https://hbase.apache.org/book.html#hbase_mob)

While HBase can technically handle binary objects with cells that are larger 
than 100 KB in size, HBase’s normal read and write paths are optimized for 
values smaller than 100KB in size. When HBase deals with large numbers of 
objects over this threshold, referred to here as medium objects, or MOBs, 
performance is degraded due to write amplification caused by splits and 
compactions. When using MOBs, ideally your objects will be between 100KB and 
10MB. HBase FIX_VERSION_NUMBER adds support for better managing large numbers 
of MOBs while maintaining performance, consistency, and low operational 
overhead. MOB support is provided by the work done in HBASE-11339. To take 
advantage of MOB, you need to use HFile version 3. Optionally, configure the 
MOB file reader’s cache settings for each RegionServer (see Configuring the MOB 
Cache), then configure specific columns to hold MOB data. Client code does not 
need to change to take advantage of HBase MOB support. The feature is 
transparent to the client.







> Put in some limit for accepting key-values in hbase writer
> --
>
> Key: YARN-5336
> URL: https://issues.apache.org/jira/browse/YARN-5336
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelineserver
>Reporter: Vrushali C
>Assignee: Vrushali C
>  Labels: YARN-5355
>
> As recommended by [~jrottinghuis] , need to add in some limit (default an

[jira] [Commented] (YARN-5774) MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set yarn.scheduler.minimum-allocation-mb to 0.

2016-11-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630486#comment-15630486
 ] 

Daniel Templeton commented on YARN-5774:


Thanks, [~yufeigu].  In addition to [~miklos.szeg...@cloudera.com]'s comments, 
I have a couple of minor points:

* {{AbstractYarnScheduler.normalizeRequest(List<...> ask...)}} should be 
{{normalizeRequests()}} to avoid confusion.
* While you're in there, you may as well correct the typo (hte/the) in the 
javadoc for {{ResourceCalculator.normalize()}}
* To add onto [~miklos.szeg...@cloudera.com]'s comments, 
{{ResourceCalculator.normalize()}} should check memory and CPU independently 
(see the sketch below). Also, I think you can leave out the 0 check in 
{{SchedulerUtils.normalizeRequest()}} since it's redundant.
* Is throwing an exception the right thing to do if the min allocation is 0?  
Looks to me like that exception may be pretty hard to diagnose.
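
A minimal sketch of what normalizing memory and CPU independently could look 
like; the rounding helper and parameter names are illustrative assumptions, 
not the actual {{ResourceCalculator}} change:

{code}
public final class NormalizeSketch {
  /** Round up to a positive increment; an increment of 0 leaves value as-is. */
  static long roundUp(long value, long increment) {
    if (increment <= 0) {
      return value;
    }
    return ((value + increment - 1) / increment) * increment;
  }

  /** Normalize memory (MB) and vcores independently of each other. */
  static long[] normalize(long reqMem, int reqVcores,
      long minMem, int minVcores, long incMem, int incVcores,
      long maxMem, int maxVcores) {
    long mem = Math.min(maxMem, Math.max(minMem, roundUp(reqMem, incMem)));
    long vcores = Math.min(maxVcores,
        Math.max(minVcores, roundUp(reqVcores, incVcores)));
    return new long[] {mem, vcores};
  }
}
{code}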

> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler if set 
> yarn.scheduler.minimum-allocation-mb to 0.
> 
>
> Key: YARN-5774
> URL: https://issues.apache.org/jira/browse/YARN-5774
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Yufei Gu
>Assignee: Yufei Gu
>  Labels: oct16-easy
> Attachments: YARN-5774.001.patch, YARN-5774.002.patch, 
> YARN-5774.003.patch
>
>
> MR Job stuck in ACCEPTED status without any progress in Fair Scheduler 
> because there is no resource request for the AM. This happened when you 
> configure {{yarn.scheduler.minimum-allocation-mb}} to zero.
> The problem is in the code used by both Capacity Scheduler and Fair 
> Scheduler. {{scheduler.increment-allocation-mb}} is a concept in FS, but not 
> CS. So the common code in class RMAppManager passes the 
> {{yarn.scheduler.minimum-allocation-mb}} as incremental one because there is 
> no incremental one for CS when it tried to normalize the resource requests.
> {code}
>  SchedulerUtils.normalizeRequest(amReq, scheduler.getResourceCalculator(),
>   scheduler.getClusterResource(),
>   scheduler.getMinimumResourceCapability(),
>   scheduler.getMaximumResourceCapability(),
>   scheduler.getMinimumResourceCapability());  --> incrementResource 
> should be passed here.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5797) Add metrics to the node manager for cleaning the PUBLIC and PRIVATE caches

2016-11-02 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630595#comment-15630595
 ] 

Chris Trezzo commented on YARN-5797:


Note that the patch exposes the following metrics about the cache cleanup:
# cacheSizeBeforeClean - the local cache size (public and private) before 
cleanup, in bytes
# totalBytesDeleted - total number of bytes deleted from the public and 
private local cache
# publicBytesDeleted - number of bytes deleted from the public local cache
# privateBytesDeleted - number of bytes deleted from the private local cache

{{LocalCacheCleanerStats}} also exposes the individual amounts deleted (in 
bytes) from each user private cache. I wasn't quite sure of a good way to 
expose this via metrics, so I left it out of the current patch.
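
For readers unfamiliar with the metrics layer, a minimal metrics2-style sketch 
of how such counters are typically declared and updated; the class name, 
source name, and update method are illustrative, not the patch itself:

{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableCounterLong;
import org.apache.hadoop.metrics2.lib.MutableGaugeLong;

@Metrics(about = "Sketch of local cache cleanup metrics", context = "yarn")
public class CacheCleanupMetricsSketch {
  @Metric("Local cache size (public and private) before cleanup, in bytes")
  MutableGaugeLong cacheSizeBeforeClean;
  @Metric("Total bytes deleted from the public and private local cache")
  MutableCounterLong totalBytesDeleted;
  @Metric("Bytes deleted from the public local cache")
  MutableCounterLong publicBytesDeleted;
  @Metric("Bytes deleted from the private local cache")
  MutableCounterLong privateBytesDeleted;

  static CacheCleanupMetricsSketch create() {
    // Registering the source instantiates the annotated mutable metrics.
    return DefaultMetricsSystem.instance().register(
        "CacheCleanupSketch", null, new CacheCleanupMetricsSketch());
  }

  void recordCleanup(long beforeBytes, long publicDeleted,
      long privateDeleted) {
    cacheSizeBeforeClean.set(beforeBytes);
    publicBytesDeleted.incr(publicDeleted);
    privateBytesDeleted.incr(privateDeleted);
    totalBytesDeleted.incr(publicDeleted + privateDeleted);
  }
}
{code}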

> Add metrics to the node manager for cleaning the PUBLIC and PRIVATE caches
> --
>
> Key: YARN-5797
> URL: https://issues.apache.org/jira/browse/YARN-5797
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Chris Trezzo
>Assignee: Chris Trezzo
> Attachments: YARN-5797-trunk-v1.patch
>
>
> Add new metrics to the node manager around the local cache sizes and how much 
> is being cleaned from them on a regular bases. For example, we can expose 
> information contained in the {{LocalCacheCleanerStats}} class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-11-02 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-2995:
-
Attachment: all-nodes.png

Attaching new screenshot after some final fixes.

> Enhance UI to show cluster resource utilization of various container types
> --
>
> Key: YARN-2995
> URL: https://issues.apache.org/jira/browse/YARN-2995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sriram Rao
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2995.001.patch, YARN-2995.002.patch, 
> YARN-2995.003.patch, all-nodes.png, all-nodes.png, opp-container.png
>
>
> This JIRA proposes to extend the Resource manager UI to show how cluster 
> resources are being used to run *guaranteed start* and *queueable* 
> containers.  For example, a graph that shows over time, the fraction of  
> running containers that are *guaranteed start* and the fraction of running 
> containers that are *queueable*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-11-02 Thread Konstantinos Karanasos (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantinos Karanasos updated YARN-2995:
-
Attachment: YARN-2995.004.patch

Adding new version of the patch.
Rebased against trunk, fixed some more issues, and addressed the unit test 
failures.

Note that there is a javadoc issue regarding using '_' as an identifier 
(related to Java 8). I did not fix that, because it is actually used in 
multiple classes in the Web UI, and I followed the same style as in the rest 
of the code. I assume this should be fixed in all places at some point.

> Enhance UI to show cluster resource utilization of various container types
> --
>
> Key: YARN-2995
> URL: https://issues.apache.org/jira/browse/YARN-2995
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Sriram Rao
>Assignee: Konstantinos Karanasos
> Attachments: YARN-2995.001.patch, YARN-2995.002.patch, 
> YARN-2995.003.patch, YARN-2995.004.patch, all-nodes.png, all-nodes.png, 
> opp-container.png
>
>
> This JIRA proposes to extend the Resource manager UI to show how cluster 
> resources are being used to run *guaranteed start* and *queueable* 
> containers.  For example, a graph that shows over time, the fraction of  
> running containers that are *guaranteed start* and the fraction of running 
> containers that are *queueable*. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5780) [YARN-5079] Allowing YARN native services to post data to timeline service V.2

2016-11-02 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5780:

Summary: [YARN-5079] Allowing YARN native services to post data to timeline 
service V.2  (was: [YARN native service] Allowing YARN native services to post 
data to timeline service V.2)

> [YARN-5079] Allowing YARN native services to post data to timeline service V.2
> --
>
> Key: YARN-5780
> URL: https://issues.apache.org/jira/browse/YARN-5780
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Li Lu
>Assignee: Vrushali C
> Attachments: YARN-5780.poc.patch
>
>
> The basic end-to-end workflow of timeline service v.2 has been merged into 
> trunk. In YARN native services, we would like to post some service-specific 
> data to timeline v.2. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5552) Add Builder methods for common yarn API records

2016-11-02 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630820#comment-15630820
 ] 

Wangda Tan commented on YARN-5552:
--

[~Tao Jie] could you check the javadoc warnings as well? Our policy is to make 
sure no new javadoc warnings are added by a committed patch. 

Thanks,

> Add Builder methods for common yarn API records
> ---
>
> Key: YARN-5552
> URL: https://issues.apache.org/jira/browse/YARN-5552
> Project: Hadoop YARN
>  Issue Type: Improvement
>Reporter: Arun Suresh
>Assignee: Tao Jie
> Attachments: YARN-5552.000.patch, YARN-5552.001.patch, 
> YARN-5552.002.patch, YARN-5552.003.patch, YARN-5552.004.patch, 
> YARN-5552.005.patch, YARN-5552.006.patch, YARN-5552.007.patch, 
> YARN-5552.008.patch
>
>
> Currently yarn API records such as ResourceRequest, AllocateRequest/Response 
> as well as AMRMClient.ContainerRequest have multiple constructors / 
> newInstance methods. This makes it very difficult to add new fields to these 
> records.
> It would probably be better if we had Builder classes for many of these 
> records, which would make evolution of these records a bit easier.
> (suggested by [~kasha])
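
For illustration, a hedged sketch of the Builder idiom being proposed, on a 
made-up record (the class and field names are invented, not the actual YARN 
API):

{code}
// Hypothetical record, shown only to illustrate why builders ease evolution:
// adding a field later means adding one builder method, not a new constructor
// or newInstance overload.
public final class ExampleResourceRequest {
  private final String resourceName;
  private final int numContainers;
  private final boolean relaxLocality;

  private ExampleResourceRequest(Builder b) {
    this.resourceName = b.resourceName;
    this.numContainers = b.numContainers;
    this.relaxLocality = b.relaxLocality;
  }

  public static Builder newBuilder() {
    return new Builder();
  }

  public static final class Builder {
    // Defaults keep old call sites working when new fields are introduced.
    private String resourceName = "*";
    private int numContainers = 1;
    private boolean relaxLocality = true;

    public Builder resourceName(String name) {
      this.resourceName = name;
      return this;
    }

    public Builder numContainers(int n) {
      this.numContainers = n;
      return this;
    }

    public Builder relaxLocality(boolean relax) {
      this.relaxLocality = relax;
      return this;
    }

    public ExampleResourceRequest build() {
      return new ExampleResourceRequest(this);
    }
  }
}
{code}

Callers would then write 
{{ExampleResourceRequest.newBuilder().numContainers(4).build()}}, and a new 
optional field never breaks existing call sites.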



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5821) Drop left-over preemption-related code and clean up method visibilities in the Schedulable hierarchy

2016-11-02 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created YARN-5821:
--

 Summary: Drop left-over preemption-related code and clean up 
method visibilities in the Schedulable hierarchy
 Key: YARN-5821
 URL: https://issues.apache.org/jira/browse/YARN-5821
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: fairscheduler
Reporter: Karthik Kambatla
Assignee: Karthik Kambatla


There is some code left-over from old preemption. We need to drop that.

Also, looks like the visibilities in the {{Schedulable}} hierarchy need to be 
revisited. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5821) Drop left-over preemption-related code and clean up method visibilities in the Schedulable hierarchy

2016-11-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5821:
---
Attachment: yarn-5821.YARN-4752.1.patch

Straightforward patch. 

> Drop left-over preemption-related code and clean up method visibilities in 
> the Schedulable hierarchy
> 
>
> Key: YARN-5821
> URL: https://issues.apache.org/jira/browse/YARN-5821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5821.YARN-4752.1.patch
>
>
> There is some code left-over from old preemption. We need to drop that.
> Also, looks like the visibilities in the {{Schedulable}} hierarchy need to be 
> revisited. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5821) Drop left-over preemption-related code and clean up method visibilities in the Schedulable hierarchy

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630888#comment-15630888
 ] 

Hadoop QA commented on YARN-5821:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} YARN-5821 does not apply to YARN-4752. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | YARN-5821 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836685/yarn-5821.YARN-4752.1.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/13759/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Drop left-over preemption-related code and clean up method visibilities in 
> the Schedulable hierarchy
> 
>
> Key: YARN-5821
> URL: https://issues.apache.org/jira/browse/YARN-5821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5821.YARN-4752.1.patch
>
>
> There is some code left-over from old preemption. We need to drop that.
> Also, looks like the visibilities in the {{Schedulable}} hierarchy need to be 
> revisited. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5600) Add a parameter to ContainerLaunchContext to emulate yarn.nodemanager.delete.debug-delay-sec on a per-application basis

2016-11-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630951#comment-15630951
 ] 

Daniel Templeton commented on YARN-5600:


Thanks for the patch, [~miklos.szeg...@cloudera.com].  Some comments:

* It seems to me that you're doing extra work to keep the delete time as a 
{{Date}}, not to mention adding potential time zone concerns.  Millis since the 
epoch may be simpler.
* Ignoring the {{IOException}} in 
{{ResourceLocalizationService.submitDirForDeletion()}} seems bad.  While you're 
in there, it might be good to do something more useful.
* In your javadoc, the param text should start with a lower case letter, e.g. 
{{DeletionService#deleteWithDelay()}}
* The {{DeletionService.scheduleFileDeletionTask()}} methods can and probably 
should be private.
* In your tests, instead of sleeping once and then asserting, poll for short 
periods in a loop to minimize the test time (see the sketch after this list).
* In {{TestContainerManager}} you have {code}-for (File f : new File[] { 
containerDir, containerSysDir }) {
+for (File f : new File[] {containerDir, containerSysDir }) {{code}  While 
you're at it, remove the trailing space too.
* In {{TestContainerManager.verifyContainerDir()}}, your 
if-if-else-else-if-else would be cleaner as if-elseif-elseif-else.  Also, the 
messages could be a little more descriptive so that someone reading it without 
the source code has some clue what's happening.  And I don't think we need the 
exclamation points. :)

Otherwise, the general approach looks fine.
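
For the sleep-in-a-loop point above, a minimal sketch using Hadoop's 
{{GenericTestUtils.waitFor}} (the test name and directory path are 
illustrative, not taken from the patch):

{code}
import java.io.File;

import org.apache.hadoop.test.GenericTestUtils;
import org.junit.Test;

public class TestDeletionPolling {
  // Poll every 100 ms and fail after 10 s, instead of one long sleep
  // followed by an assert; the directory is whatever artifact the test
  // expects the DeletionService to remove.
  @Test
  public void testContainerDirEventuallyDeleted() throws Exception {
    File containerDir = new File("target/nm-local-dir/some-container");
    GenericTestUtils.waitFor(() -> !containerDir.exists(), 100, 10000);
  }
}
{code}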

> Add a parameter to ContainerLaunchContext to emulate 
> yarn.nodemanager.delete.debug-delay-sec on a per-application basis
> ---
>
> Key: YARN-5600
> URL: https://issues.apache.org/jira/browse/YARN-5600
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: nodemanager
>Affects Versions: 3.0.0-alpha1
>Reporter: Daniel Templeton
>Assignee: Miklos Szegedi
>  Labels: oct16-medium
> Attachments: YARN-5600.000.patch, YARN-5600.001.patch, 
> YARN-5600.002.patch
>
>
> To make debugging application launch failures simpler, I'd like to add a 
> parameter to the CLC to allow an application owner to request delayed 
> deletion of the application's launch artifacts.
> This JIRA solves largely the same problem as YARN-5599, but for cases where 
> ATS is not in use, e.g. branch-2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5611) Provide an API to update lifetime of an application.

2016-11-02 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630958#comment-15630958
 ] 

Jian He commented on YARN-5611:
---

- can you add a comment describing the type of the timeout value?
{code}
  /**
   * Set the ApplicationTimeouts for the application in seconds.
   * All pre-existing Map entries are cleared before adding the new Map.
   * 
{code}
- Revert the RMAppEventType change
- For the rollback, there's no need to use the future object. 
{code}
// do roll back
future = SettableFuture.create();
app.updateApplicationTimeout(RMAppUpdateType.ROLLBACK, newExpireTime,
currentApplicationTimeouts, future);
// Roll back can fail only when application is in completing state.
try {
  Futures.get(future, YarnException.class);
} catch (YarnException e) {
  LOG.warn("Roll back failed for an application "
  + app.getApplicationId() + " with message" + e.getMessage());
}
{code}
- Fix the indentation of the second line:
{code}
  for (Map.Entry timeout : 
  app.applicationTimeouts.entrySet()) {
{code}


> Provide an API to update lifetime of an application.
> 
>
> Key: YARN-5611
> URL: https://issues.apache.org/jira/browse/YARN-5611
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Rohith Sharma K S
>Assignee: Rohith Sharma K S
>  Labels: oct16-hard
> Attachments: 0001-YARN-5611.patch, 0002-YARN-5611.patch, 
> 0003-YARN-5611.patch, YARN-5611.0004.patch, YARN-5611.0005.patch, 
> YARN-5611.v0.patch
>
>
> YARN-4205 added monitoring of an application's lifetime, when required. 
> Add a client API to update the lifetime of an application. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4329) Allow fetching exact reason as to why a submitted app is in ACCEPTED state in Fair Scheduler

2016-11-02 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15630980#comment-15630980
 ] 

Daniel Templeton commented on YARN-4329:


Latest patch looks good to me, but Jenkins doesn't seem to like it.

> Allow fetching exact reason as to why a submitted app is in ACCEPTED state in 
> Fair Scheduler
> 
>
> Key: YARN-4329
> URL: https://issues.apache.org/jira/browse/YARN-4329
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler, resourcemanager
>Reporter: Naganarasimha G R
>Assignee: Yufei Gu
> Attachments: Screen Shot 2016-10-18 at 3.13.59 PM.png, 
> YARN-4329.001.patch, YARN-4329.002.patch, YARN-4329.003.patch, 
> YARN-4329.004.patch
>
>
> Similar to YARN-3946, it would be useful to capture the possible reason why 
> an application is in the ACCEPTED state in the FairScheduler.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-2995) Enhance UI to show cluster resource utilization of various container types

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-2995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15631014#comment-15631014
 ] 

Hadoop QA commented on YARN-2995:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 35s{color} | {color:orange} root: The patch generated 1 new + 430 unchanged 
- 4 fixed = 431 total (was 434) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
23s{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager
 generated 1 new + 235 unchanged - 0 fixed = 236 total (was 235) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 50s{color} 
| {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 38m 
57s{color} | {color:green} hadoop-yarn-server-resourcemanager in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
26s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 26s{color} 
| {color:red} hadoop-sls in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-2995 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836674/YARN-2995.004.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvn

[jira] [Updated] (YARN-5821) Drop left-over preemption-related code and clean up method visibilities in the Schedulable hierarchy

2016-11-02 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated YARN-5821:
---
Attachment: yarn-5821.YARN-4752.1.patch

> Drop left-over preemption-related code and clean up method visibilities in 
> the Schedulable hierarchy
> 
>
> Key: YARN-5821
> URL: https://issues.apache.org/jira/browse/YARN-5821
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: fairscheduler
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: yarn-5821.YARN-4752.1.patch, yarn-5821.YARN-4752.1.patch
>
>
> There is some code left-over from old preemption. We need to drop that.
> Also, looks like the visibilities in the {{Schedulable}} hierarchy need to be 
> revisited. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-02 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated YARN-5739:

Attachment: YARN-5739-YARN-5355.001.patch

First draft of a list operation for entities belonging to the same 
application. 

> Provide timeline reader API to list available timeline entity types for one 
> application
> ---
>
> Key: YARN-5739
> URL: https://issues.apache.org/jira/browse/YARN-5739
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: timelinereader
>Reporter: Li Lu
>Assignee: Li Lu
> Attachments: YARN-5739-YARN-5355.001.patch
>
>
> Right now we only show a part of the available timeline entity data in the 
> new YARN UI. However, some data (especially library-specific data) cannot be 
> queried through the web UI. It would be appealing for the UI to provide an 
> "entity browser" for each YARN application. Actually, simply dumping out the 
> available timeline entities (with proper pagination, of course) would be 
> pretty helpful for UI users. 
> On the timeline side, we're not far from this goal. Right now I believe the 
> only thing missing is listing all available entity types within one 
> application. The challenge is that we're not storing this data per 
> application, but given that this kind of call is relatively rare (compared 
> to writes and updates) we can perform some scanning at read time. 
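
For illustration, a rough sketch of what such a read-time scan over the HBase 
entity table could look like. The row-key layout, prefix bytes, and parser 
below are assumptions made for the example, not the actual YARN-5355 schema:

{code}
import java.io.IOException;
import java.util.Set;
import java.util.TreeSet;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative only: assumes entity-table row keys start with an
// application-scoped prefix and embed the entity type.
public class EntityTypeLister {

  public static Set<String> listEntityTypes(Table entityTable,
      byte[] appRowKeyPrefix) throws IOException {
    Scan scan = new Scan();
    scan.setRowPrefixFilter(appRowKeyPrefix);
    scan.setFilter(new FirstKeyOnlyFilter()); // row keys only, skip payload
    Set<String> types = new TreeSet<>();
    try (ResultScanner rs = entityTable.getScanner(scan)) {
      for (Result r : rs) {
        types.add(parseEntityType(r.getRow()));
      }
    }
    return types;
  }

  // Hypothetical parser for an assumed "...!entityType!entityId" layout.
  private static String parseEntityType(byte[] rowKey) {
    String[] parts = Bytes.toString(rowKey).split("!");
    return parts[parts.length - 2];
  }
}
{code}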



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5739) Provide timeline reader API to list available timeline entity types for one application

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15631098#comment-15631098
 ] 

Hadoop QA commented on YARN-5739:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
31s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
46s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
58s{color} | {color:green} YARN-5355 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} YARN-5355 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 4 new + 
29 unchanged - 1 fixed = 33 total (was 30) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
47s{color} | {color:green} hadoop-yarn-server-timelineservice in the patch 
passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m 
34s{color} | {color:green} hadoop-yarn-server-timelineservice-hbase-tests in 
the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5739 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836696/YARN-5739-YARN-5355.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 20f869487147 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-5355 / 513dcf6 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13760/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13760/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice
 
had

[jira] [Commented] (YARN-5821) Drop left-over preemption-related code and clean up method visibilities in the Schedulable hierarchy

2016-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15631141#comment-15631141
 ] 

Hadoop QA commented on YARN-5821:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
10s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
40s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
17s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
2s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} YARN-4752 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 20s{color} | {color:orange} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 30 unchanged - 16 fixed = 32 total (was 46) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager
 generated 0 new + 929 unchanged - 5 fixed = 929 total (was 934) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 59s{color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 54m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.applicationsmanager.TestAMRestart |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Issue | YARN-5821 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12836695/yarn-5821.YARN-4752.1.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 886b7c686d01 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-4752 / 5ad5085 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/13761/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/13761/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/13761/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-

[jira] [Created] (YARN-5822) Log ContainerRuntime initialization error in LinuxContainerExecutor

2016-11-02 Thread Sidharta Seethana (JIRA)
Sidharta Seethana created YARN-5822:
---

 Summary: Log ContainerRuntime initialization error in 
LinuxContainerExecutor 
 Key: YARN-5822
 URL: https://issues.apache.org/jira/browse/YARN-5822
 Project: Hadoop YARN
  Issue Type: Task
  Components: nodemanager
Reporter: Sidharta Seethana
Assignee: Sidharta Seethana
Priority: Trivial


LinuxContainerExecutor does not log information about ContainerRuntime 
initialization failures. This makes it hard to identify the root cause of a 
NodeManager start failure. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5391) PolicyManager to tie together Router/AMRM Federation policies

2016-11-02 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15631152#comment-15631152
 ] 

Carlo Curino commented on YARN-5391:


Thanks [~subru] for the prompt review and commit.

> PolicyManager to tie together Router/AMRM Federation policies
> -
>
> Key: YARN-5391
> URL: https://issues.apache.org/jira/browse/YARN-5391
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Affects Versions: YARN-2915
>Reporter: Carlo Curino
>Assignee: Carlo Curino
>  Labels: oct16-hard
> Fix For: YARN-2915
>
> Attachments: YARN-5391-YARN-2915.04.patch, 
> YARN-5391-YARN-2915.05.patch, YARN-5391-YARN-2915.06.patch, 
> YARN-5391-YARN-2915.07.patch, YARN-5391.01.patch, YARN-5391.02.patch, 
> YARN-5391.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5435) [Regression] QueueCapacities not being updated for dynamic ReservationQueue

2016-11-02 Thread Carlo Curino (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15631161#comment-15631161
 ] 

Carlo Curino commented on YARN-5435:


[~seanpo03] can you address the unit test issues?

> [Regression] QueueCapacities not being updated for dynamic ReservationQueue
> ---
>
> Key: YARN-5435
> URL: https://issues.apache.org/jira/browse/YARN-5435
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.8.0
>Reporter: Sean Po
>Assignee: Sean Po
>  Labels: oct16-easy, regression
> Attachments: YARN-5435.v003.patch, YARN-5435.v004.patch, 
> YARN-5435.v1.patch, YARN-5435.v2.patch
>
>
> YARN-1707 added dynamic queues (ReservationQueue) to CapacityScheduler. The 
> QueueCapacities data structure was added subsequently but is not being 
> updated correctly for ReservationQueue. This JIRA tracks the changes required 
> to update QueueCapacities of ReservationQueue correctly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5822) Log ContainerRuntime initialization error in LinuxContainerExecutor

2016-11-02 Thread Sidharta Seethana (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sidharta Seethana updated YARN-5822:

Attachment: YARN-5822.001.patch

Uploading a quick patch to log the container runtime initialization failure. 
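
For context, the change roughly amounts to something like the following inside 
{{LinuxContainerExecutor#init}} (field and exception names are assumed here; 
this is a sketch, not the attached patch verbatim):

{code}
try {
  linuxContainerRuntime.initialize(conf);
} catch (ContainerExecutionException e) {
  // Previously the underlying cause was lost; logging it makes NodeManager
  // startup failures diagnosable from the NM log.
  LOG.error("Failed to initialize linux container runtime(s)!", e);
  throw new IOException("Linux container runtime initialization failed", e);
}
{code}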



> Log ContainerRuntime initialization error in LinuxContainerExecutor 
> 
>
> Key: YARN-5822
> URL: https://issues.apache.org/jira/browse/YARN-5822
> Project: Hadoop YARN
>  Issue Type: Task
>  Components: nodemanager
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>Priority: Trivial
> Attachments: YARN-5822.001.patch
>
>
> LinuxContainerExecutor does not log information about ContainerRuntime 
> initialization failures. This makes it hard to identify the root cause of a 
> NodeManager start failure. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org


