[jira] [Commented] (YARN-5333) Some recovered apps are put into default queue when RM HA is enabled

2016-08-05 Thread Jun Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410467#comment-15410467
 ] 

Jun Gong commented on YARN-5333:


Thanks [~rohithsharma], [~jianhe] and [~sunilg].

> Some recovered apps are put into default queue when RM HA is enabled
> -
>
> Key: YARN-5333
> URL: https://issues.apache.org/jira/browse/YARN-5333
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jun Gong
>Assignee: Jun Gong
> Fix For: 2.9.0
>
> Attachments: YARN-5333.01.patch, YARN-5333.02.patch, 
> YARN-5333.03.patch, YARN-5333.04.patch, YARN-5333.05.patch, 
> YARN-5333.06.patch, YARN-5333.07.patch, YARN-5333.08.patch, 
> YARN-5333.09.patch, YARN-5333.10.patch
>
>
> RM HA is enabled and FairScheduler is in use, with 
> {{yarn.scheduler.fair.allow-undeclared-pools}} set to false and 
> {{yarn.scheduler.fair.user-as-default-queue}} set to false.
> Reproduce steps:
> 1. Start two RMs.
> 2. After the RMs are running, edit {{etc/hadoop/fair-scheduler.xml}} on both 
> RMs and add some queues.
> 3. Submit some apps to the newly added queues.
> 4. Stop the active RM; the standby RM will then transition to active and 
> recover the apps.
> However, the new active RM may put recovered apps into the default queue 
> because it has not yet loaded the new {{fair-scheduler.xml}}. We need to call 
> {{initScheduler}} before starting the active services, or move 
> {{refreshAll()}} ahead of {{rm.transitionToActive()}}. *This seems important 
> for the other schedulers as well*.
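
A minimal sketch of the proposed ordering on the failover path (method names are taken from the description above; the enclosing class and signatures are simplified, so treat this as an illustration rather than the patch itself):

{code}
// Sketch only (not the committed patch): on failover, refresh configuration
// before starting the active services, so FairScheduler has loaded the new
// fair-scheduler.xml by the time applications are recovered.
void becomeActive() throws Exception {
  refreshAll();              // reload configs, including fair-scheduler.xml
  rm.transitionToActive();   // start active services; app recovery runs here
}
{code}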






[jira] [Commented] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Scheduler UI

2016-08-05 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410432#comment-15410432
 ] 

Brahma Reddy Battula commented on YARN-4624:


[~naganarasimha...@apache.org] thanks a lot for the review and commit, and 
thanks to [~sunilg] and others.

> NPE in PartitionQueueCapacitiesInfo while accessing Scheduler UI
> ---
>
> Key: YARN-4624
> URL: https://issues.apache.org/jira/browse/YARN-4624
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: SchedulerUIWithOutLabelMapping.png, YARN-2674-002.patch, 
> YARN-4624-003.patch, YARN-4624.4.patch, YARN-4624.patch
>
>
> Scenario:
> ===
> Configure node labels and add them to the cluster
> Start the cluster
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:293)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:447)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> {noformat}
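
The top frame points at {{PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage}}; when there is no label mapping, the AM-limit field is presumably never populated. A hypothetical guard of the usual shape (the field name and default value are assumptions, not the committed fix):

{code}
// Hypothetical sketch: avoid the NPE from auto-unboxing a null Float when
// the partition was configured without a queue-capacities entry.
public float getMaxAMLimitPercentage() {
  return (maxAMLimitPercentage == null) ? 0f : maxAMLimitPercentage;
}
{code}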






[jira] [Commented] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Scheduler UI

2016-08-05 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410416#comment-15410416
 ] 

Naganarasimha G R commented on YARN-4624:
-

Thanks for the contributions [~brahma] & [~sunilg], and for the reviews from 
[~sunilg], [~devaraj.k], [~rohithsharma] & [~bibinchundatt].
Committed to 2.8, branch-2 & trunk.

> NPE in PartitionQueueCapacitiesInfo while accessing Scheduler UI
> ---
>
> Key: YARN-4624
> URL: https://issues.apache.org/jira/browse/YARN-4624
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: SchedulerUIWithOutLabelMapping.png, YARN-2674-002.patch, 
> YARN-4624-003.patch, YARN-4624.4.patch, YARN-4624.patch
>
>
> Scenario:
> ===
> Configure node labels and add them to the cluster
> Start the cluster
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:293)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:447)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> {noformat}






[jira] [Updated] (YARN-5408) Compose Federation membership/application/policy APIs into an uber FederationStateStore API

2016-08-05 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5408:
-
Description: This is a simple composition of the three APIs defined in 
YARN-3664, YARN-5307, and YARN-3662. This is for convenience, so that we provide 
a single API for both implementations and consumers.  (was: This is a simple 
composition of the three APIs defined in YARN-3664, YARN-5307, YARN-3662)

> Compose Federation membership/application/policy APIs into an uber 
> FederationStateStore API
> ---
>
> Key: YARN-5408
> URL: https://issues.apache.org/jira/browse/YARN-5408
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
>
> This is a simple composition of the three APIs defined in YARN-3664, 
> YARN-5307, and YARN-3662. This is for convenience, so that we provide a single 
> API for both implementations and consumers.
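
A plausible shape for the composed API (the sub-interface names are assumed from the sub-task JIRAs and this thread's wording, not taken from the committed code):

{code}
// Sketch only: the uber store as a pure composition of the three sub-APIs
// from YARN-3662 (membership, name assumed), YARN-5307 (application), and
// YARN-3664 (policy, name assumed).
public interface FederationStateStore
    extends FederationMembershipStateStore,
            FederationApplicationStateStore,
            FederationPolicyStore {
}
{code}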






[jira] [Updated] (YARN-5408) Compose Federation membership/application/policy APIs into an uber FederationStateStore API

2016-08-05 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-5408:
-
Summary: Compose Federation membership/application/policy APIs into an uber 
FederationStateStore API  (was: Define overall API for FederationStateStore)

> Compose Federation membership/application/policy APIs into an uber 
> FederationStateStore API
> ---
>
> Key: YARN-5408
> URL: https://issues.apache.org/jira/browse/YARN-5408
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
>
> This is a simple composition of the three APIs defined in YARN-3664, 
> YARN-5307, YARN-3662






[jira] [Commented] (YARN-5457) Refactor DistributedScheduling framework to pull out common functionality

2016-08-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410356#comment-15410356
 ] 

Hadoop QA commented on YARN-5457:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 42s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
52s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s 
{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 30s 
{color} | {color:red} root: The patch generated 21 new + 365 unchanged - 13 
fixed = 386 total (was 378) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
51s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common 
generated 2 new + 159 unchanged - 0 fixed = 161 total (was 159) {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 25s {color} 
| {color:red} hadoop-yarn-api in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 10s 
{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 37m 55s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 41s {color} 
| {color:red} hadoop-yarn-client in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 112m 15s 
{color} | {color:green} hadoop-mapreduce-client-jobclient in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
32s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 228m 17s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
|   | hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
|   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler |
|   | hadoop.yarn.client.api.impl.TestYarnClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 

[jira] [Updated] (YARN-5407) In-memory based implementation of the FederationApplicationStateStore, FederationPolicyStateStore

2016-08-05 Thread Ellen Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ellen Hui updated YARN-5407:

Description: YARN-5307 defines the FederationApplicationStateStore API. 
YARN-3664 defines the FederationPolicyStateStore API. This JIRA tracks an 
in-memory based implementation which is useful for both single-box testing and 
for future unit tests that depend on the state store.  (was: YARN-5307 defines 
the FederationApplicationStateStore API. This JIRA tracks an in-memory based 
implementation which is useful for both single-box testing and for future unit 
tests that depend on the state store.)

> In-memory based implementation of the FederationApplicationStateStore, 
> FederationPolicyStateStore
> -
>
> Key: YARN-5407
> URL: https://issues.apache.org/jira/browse/YARN-5407
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
> Attachments: YARN-5407-YARN-2915.v0.patch
>
>
> YARN-5307 defines the FederationApplicationStateStore API. YARN-3664 defines 
> the FederationPolicyStateStore API. This JIRA tracks an in-memory based 
> implementation which is useful for both single-box testing and for future 
> unit tests that depend on the state store.
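
A minimal sketch of what such an in-memory store could look like (the class and method names are illustrative, and the real YARN-5307 types are replaced with strings for brevity; this is not the attached patch):

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative in-memory application state store: a concurrent map from
// application id to its home sub-cluster, with no persistence at all --
// which is exactly what makes it handy for single-box tests.
public class MemoryFederationApplicationStateStore {
  private final ConcurrentMap<String, String> appToHomeSubCluster =
      new ConcurrentHashMap<>();

  public void addApplicationHome(String appId, String subClusterId) {
    appToHomeSubCluster.putIfAbsent(appId, subClusterId);
  }

  public String getApplicationHome(String appId) {
    return appToHomeSubCluster.get(appId);
  }
}
{code}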






[jira] [Updated] (YARN-5408) Define overall API for FederationStateStore

2016-08-05 Thread Ellen Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ellen Hui updated YARN-5408:

Description: This is a simple composition of the three APIs defined in 
YARN-3664, YARN-5307, YARN-3662  (was: This is a simple composition of the 
three APIs defined in YARN-3664, YARN-5307, )

> Define overall API for FederationStateStore
> ---
>
> Key: YARN-5408
> URL: https://issues.apache.org/jira/browse/YARN-5408
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
>
> This is a simple composition of the three APIs defined in YARN-3664, 
> YARN-5307, YARN-3662






[jira] [Updated] (YARN-5408) Define overall API for FederationStateStore

2016-08-05 Thread Ellen Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ellen Hui updated YARN-5408:

Description: This is a simple composition of the three APIs defined in 
YARN-3664, YARN-5307,   (was: YARN-3664 defines the FederationPolicyStore API. 
This JIRA tracks an in-memory based implementation which is useful for both 
single-box testing and for future unit tests that depend on the state store.)

> Define overall API for FederationStateStore
> ---
>
> Key: YARN-5408
> URL: https://issues.apache.org/jira/browse/YARN-5408
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
>
> This is a simple composition of the three APIs defined in YARN-3664, 
> YARN-5307, 






[jira] [Updated] (YARN-5407) In-memory based implementation of the FederationApplicationStateStore, FederationPolicyStateStore

2016-08-05 Thread Ellen Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ellen Hui updated YARN-5407:

Summary: In-memory based implementation of the 
FederationApplicationStateStore, FederationPolicyStateStore  (was: In-memory 
based implementation of the FederationApplicationStateStore)

> In-memory based implementation of the FederationApplicationStateStore, 
> FederationPolicyStateStore
> -
>
> Key: YARN-5407
> URL: https://issues.apache.org/jira/browse/YARN-5407
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
> Attachments: YARN-5407-YARN-2915.v0.patch
>
>
> YARN-5307 defines the FederationApplicationStateStore API. This JIRA tracks 
> an in-memory based implementation which is useful for both single-box testing 
> and for future unit tests that depend on the state store.






[jira] [Updated] (YARN-5408) Define overall API for FederationStateStore

2016-08-05 Thread Ellen Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ellen Hui updated YARN-5408:

Summary: Define overall API for FederationStateStore  (was: In-memory based 
implementation of the FederationPolicyStore)

> Define overall API for FederationStateStore
> ---
>
> Key: YARN-5408
> URL: https://issues.apache.org/jira/browse/YARN-5408
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
>
> YARN-3664 defines the FederationPolicyStore API. This JIRA tracks an 
> in-memory based implementation which is useful for both single-box testing 
> and for future unit tests that depend on the state store.






[jira] [Commented] (YARN-4902) [Umbrella] Generalized and unified scheduling-strategies in YARN

2016-08-05 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410310#comment-15410310
 ] 

Konstantinos Karanasos commented on YARN-4902:
--

[~leftnoteasy]:

bq. I can understand that your proposal may look different from my guess 
above; we can discuss more once you have a more concrete design for it.
Yes, let's discuss service planning once we add more details to the design 
document -- it will be easier for other people to get involved in the 
discussion too.

bq. I don't care too much about whether we support cardinality via the GUTS 
API or anti-affinity via cardinality syntax. We should choose a more 
generic/extensible API that can support both.
Sounds good, we can continue the discussion in YARN-5478.

> [Umbrella] Generalized and unified scheduling-strategies in YARN
> 
>
> Key: YARN-4902
> URL: https://issues.apache.org/jira/browse/YARN-4902
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Wangda Tan
> Attachments: Generalized and unified scheduling-strategies in YARN 
> -v0.pdf, LRA-scheduling-design.v0.pdf, YARN-5468.prototype.patch
>
>
> Apache Hadoop YARN's ResourceRequest mechanism is the core part of the YARN's 
> scheduling API for applications to use. The ResourceRequest mechanism is a 
> powerful API for applications (specifically ApplicationMasters) to indicate 
> to YARN what size of containers are needed, and where in the cluster etc.
> However a host of new feature requirements are making the API increasingly 
> more and more complex and difficult to understand by users and making it very 
> complicated to implement within the code-base.
> This JIRA aims to generalize and unify all such scheduling-strategies in YARN.






[jira] [Commented] (YARN-5410) Bootstrap Router module

2016-08-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410299#comment-15410299
 ] 

Hadoop QA commented on YARN-5410:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
5s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 3s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
24s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 4s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
9s {color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
hadoop-yarn-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 47s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 24s 
{color} | {color:red} root: The patch generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
hadoop-yarn-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s 
{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 20m 10s {color} 
| {color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s 
{color} | {color:green} hadoop-yarn-server-router in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 2m 29s {color} 
| {color:red} hadoop-yarn-project in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 57s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 |
|   | 

[jira] [Commented] (YARN-4902) [Umbrella] Generalized and unified scheduling-strategies in YARN

2016-08-05 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410294#comment-15410294
 ] 

Wangda Tan commented on YARN-4902:
--

[~kkaranasos],

bq. LRA planning is much more than an implementation detail. Think of it as 
planning multiple applications at once. This is something the scheduler cannot 
do, no matter how it is implemented...
I know the implementation of YARN-1051; I participated in reviewing the 
YARN-1051 JIRAs from the beginning.
I would say we should have a unified API for clients to use instead of 
continuing to add new APIs. Breaking APIs down into different sets is 
especially bad for new-feature incubation; this is one of the major reasons we 
created YARN-4902. Planning more than one application at once should be an 
enhancement inside the scheduler rather than something visible to the end 
user.
And I'm not sure we should create a new LRA planner for that. If you're 
thinking of implementing it using the approach of YARN-1051, you may need to 
reserve a chunk of resources for "LRA scheduling". This approach may add 
latency to scheduling (heavier computation) and reduce throughput (additional 
resources need to be reserved). But it looks more reasonable when doing 
YARN-1051 -- it is straightforward to reserve some resources for the 
reservation system.
I also mentioned partial-global-scheduling vs. global-optimal-scheduling in 
the design doc of YARN-5139: 
https://issues.apache.org/jira/secure/attachment/12822180/YARN-5139-Global-Schedulingd-esign-and-implementation-notes.pdf.
 You can take a look if you're interested.
I can understand that your proposal may look different from my guess above; we 
can discuss more once you have a more concrete design for it.
 
bq. I would say that it is the other way around...
I don't care too much about whether we support cardinality via the GUTS API or 
anti-affinity via cardinality syntax. We should choose a more 
generic/extensible API that can support both.

And I have just created branch YARN-4902 and sub-task YARN-5478; we can 
discuss the Java API definition further in that JIRA.

> [Umbrella] Generalized and unified scheduling-strategies in YARN
> 
>
> Key: YARN-4902
> URL: https://issues.apache.org/jira/browse/YARN-4902
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Wangda Tan
> Attachments: Generalized and unified scheduling-strategies in YARN 
> -v0.pdf, LRA-scheduling-design.v0.pdf, YARN-5468.prototype.patch
>
>
> Apache Hadoop YARN's ResourceRequest mechanism is the core part of the YARN's 
> scheduling API for applications to use. The ResourceRequest mechanism is a 
> powerful API for applications (specifically ApplicationMasters) to indicate 
> to YARN what size of containers are needed, and where in the cluster etc.
> However a host of new feature requirements are making the API increasingly 
> more and more complex and difficult to understand by users and making it very 
> complicated to implement within the code-base.
> This JIRA aims to generalize and unify all such scheduling-strategies in YARN.






[jira] [Commented] (YARN-5327) API changes required to support recurring reservations in the YARN ReservationSystem

2016-08-05 Thread Sean Po (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410280#comment-15410280
 ] 

Sean Po commented on YARN-5327:
---

Thanks [~ajsangeetha], this patch looks good to me, apart from a few comments:

1. We need to add periodicity to the REST API as well.
2. The REST API changes will need a corresponding change to the 
documentation.* You can take a look at YARN-4683 for details. 

*It might make sense to add the documentation for this API once the feature is 
code complete, similar to what was done for YARN-4340.



> API changes required to support recurring reservations in the YARN 
> ReservationSystem
> 
>
> Key: YARN-5327
> URL: https://issues.apache.org/jira/browse/YARN-5327
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Sangeetha Abdu Jyothi
> Attachments: YARN-5327.001.patch, YARN-5327.002.patch, 
> YARN-5327.003.patch
>
>
> YARN-5326 proposes adding native support for recurring reservations in the 
> YARN ReservationSystem. This JIRA is a sub-task to track the changes needed 
> in ApplicationClientProtocol to accomplish it. Please refer to the design doc 
> in the parent JIRA for details.






[jira] [Created] (YARN-5478) [YARN-4902] Define Java API for generalized & unified scheduling-strategies.

2016-08-05 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-5478:


 Summary: [YARN-4902] Define Java API for generalized & unified 
scheduling-strategies.
 Key: YARN-5478
 URL: https://issues.apache.org/jira/browse/YARN-5478
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan


Define a Java API for applications to specify the generic scheduling 
requirements described in the YARN-4902 design doc.






[jira] [Commented] (YARN-4902) [Umbrella] Generalized and unified scheduling-strategies in YARN

2016-08-05 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410270#comment-15410270
 ] 

Konstantinos Karanasos commented on YARN-4902:
--

bq. LRA planning looks like an implementation detail.
LRA planning is much more than an implementation detail. Think of it as 
planning multiple applications at once. This is something the scheduler cannot 
do, no matter how it is implemented.
Please take a look at YARN-1051 to see a similar use case for 
planning/admission control, but in a constraint-free context.
I can give more details as I update the document. In any case, this does not 
block any of the changes required in the scheduler per se to support 
constraints.

bq. For cardinality, could you share a more detailed use case for that?
As you mention, an example would be limiting the number of hbase-masters in a 
node/rack, or even the number of AMs on a node.
You could do it with resource isolation, but network isolation in particular 
is really hard to get right, so until we reach that point I think it would be 
great for applications to be able to express such constraints.

bq. It seems to me that cardinality is a special case of anti-affinity.
I would say that it is the other way around: affinity and anti-affinity are 
special cases of cardinality. If you set a cardinality of 1 for a node, you 
have anti-affinity on that node.
I agree that you can currently express this with your proposal; we are just 
suggesting an alternative that is more succinct, so that we do not need 
different types of constraints, but just a single one.
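
For example, in the illustrative placement-strategy syntax used elsewhere in this thread (the field names are hypothetical, not a committed API), node-level anti-affinity falls out of a max cardinality of 1:

{code}
placement_strategy: {
  placement_set_type: node,
  allocation_tag: hbase_master,
  max-cardinality: 1
}
{code}

A max-cardinality of N > 1 then generalizes this to the cross-application cardinality constraints discussed above.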

> [Umbrella] Generalized and unified scheduling-strategies in YARN
> 
>
> Key: YARN-4902
> URL: https://issues.apache.org/jira/browse/YARN-4902
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Wangda Tan
> Attachments: Generalized and unified scheduling-strategies in YARN 
> -v0.pdf, LRA-scheduling-design.v0.pdf, YARN-5468.prototype.patch
>
>
> Apache Hadoop YARN's ResourceRequest mechanism is the core part of the YARN's 
> scheduling API for applications to use. The ResourceRequest mechanism is a 
> powerful API for applications (specifically ApplicationMasters) to indicate 
> to YARN what size of containers are needed, and where in the cluster etc.
> However a host of new feature requirements are making the API increasingly 
> more and more complex and difficult to understand by users and making it very 
> complicated to implement within the code-base.
> This JIRA aims to generalize and unify all such scheduling-strategies in YARN.






[jira] [Commented] (YARN-5407) In-memory based implementation of the FederationApplicationStateStore

2016-08-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410256#comment-15410256
 ] 

Hadoop QA commented on YARN-5407:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
40s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
51s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common: 
The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 34s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 36s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822390/YARN-5407-YARN-2915.v0.patch
 |
| JIRA Issue | YARN-5407 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7730734b8100 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / a6a43c0 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12662/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12662/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12662/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> In-memory based implementation of the FederationApplicationStateStore
> -
>
> Key: YARN-5407
> 

[jira] [Commented] (YARN-4902) [Umbrella] Generalized and unified scheduling-strategies in YARN

2016-08-05 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410252#comment-15410252
 ] 

Wangda Tan commented on YARN-4902:
--

[~kkaranasos],

Thanks for replying. 

It's good to see that we generally agree on the goal. LRA planning looks like 
an implementation detail; I'm not sure how you plan to do that, so we can 
discuss more once you have a prototype ready.
For cardinality, could you share a more detailed use case for that? For 
example, why limit #hbase-master within a rack, given that different hbase 
instances will be used by different applications? If this is to reduce 
resource contention (like network resources), we may need to consider solving 
it with resource profiles (YARN-3926) plus network isolation.
It seems to me that cardinality is a special case of anti-affinity. Typically 
anti-affinity kicks in when #container >= 1; if we can raise the threshold 
from 1 to N (N > 1), that is cardinality. Using the syntax from our design 
doc, it looks like:
{code}
placement_strategy: {
  NOT {
    placement_set_type: rack,
    allocation_tag: hbase_master,
    maximum-number-container: 10
  }
}
{code}

> [Umbrella] Generalized and unified scheduling-strategies in YARN
> 
>
> Key: YARN-4902
> URL: https://issues.apache.org/jira/browse/YARN-4902
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Wangda Tan
> Attachments: Generalized and unified scheduling-strategies in YARN 
> -v0.pdf, LRA-scheduling-design.v0.pdf, YARN-5468.prototype.patch
>
>
> Apache Hadoop YARN's ResourceRequest mechanism is the core part of the YARN's 
> scheduling API for applications to use. The ResourceRequest mechanism is a 
> powerful API for applications (specifically ApplicationMasters) to indicate 
> to YARN what size of containers are needed, and where in the cluster etc.
> However a host of new feature requirements are making the API increasingly 
> more and more complex and difficult to understand by users and making it very 
> complicated to implement within the code-base.
> This JIRA aims to generalize and unify all such scheduling-strategies in YARN.






[jira] [Updated] (YARN-5410) Bootstrap Router module

2016-08-05 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5410:
---
Attachment: YARN-5410-YARN-2915-v4.patch

> Bootstrap Router module
> ---
>
> Key: YARN-5410
> URL: https://issues.apache.org/jira/browse/YARN-5410
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5410-YARN-2915-v1.patch, 
> YARN-5410-YARN-2915-v2.patch, YARN-5410-YARN-2915-v3.patch, 
> YARN-5410-YARN-2915-v4.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). 
> This JIRA tracks the creation of a new sub-module for the Router.
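
A plausible skeleton for the new sub-module's entry point (illustrative only; the class shape follows the usual YARN service pattern and is an assumption, not taken from the attached patches):

{code}
import org.apache.hadoop.service.CompositeService;

// Illustrative skeleton of a Router service fronting the ResourceManagers.
// Client-facing request-routing sub-services would be added in serviceInit().
public class Router extends CompositeService {
  public Router() {
    super(Router.class.getName());
  }
}
{code}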






[jira] [Updated] (YARN-5407) In-memory based implementation of the FederationApplicationStateStore

2016-08-05 Thread Ellen Hui (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ellen Hui updated YARN-5407:

Attachment: YARN-5407-YARN-2915.v0.patch

> In-memory based implementation of the FederationApplicationStateStore
> -
>
> Key: YARN-5407
> URL: https://issues.apache.org/jira/browse/YARN-5407
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Ellen Hui
> Attachments: YARN-5407-YARN-2915.v0.patch
>
>
> YARN-5307 defines the FederationApplicationStateStore API. This JIRA tracks 
> an in-memory based implementation which is useful for both single-box testing 
> and for future unit tests that depend on the state store.






[jira] [Commented] (YARN-4902) [Umbrella] Generalized and unified scheduling-strategies in YARN

2016-08-05 Thread Konstantinos Karanasos (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410209#comment-15410209
 ] 

Konstantinos Karanasos commented on YARN-4902:
--

Thanks for checking the design doc and the patch, and for the feedback, 
[~leftnoteasy].
Please find below some thoughts regarding the points you raised, plus some 
additional information.

bq. From the requirements perspective, I didn't see new things; please remind 
me if I missed anything
Agreed that our basic requirements are similar, which is good because it means 
we are aligned. Some of the notions we are using might coincide with yours but 
have a different name (e.g., dynamic vs. allocation tags, although the scope of 
our dynamic tags is global and not application-specific like yours), by virtue 
of the fact that we were designing things at the same time. We can agree on a 
common naming, not a problem.
What I would like to stress as being different is mainly the LRA planning, 
some extensions to the constraints (along with a more succinct way of 
expressing them), as well as the ease of expressing inter-application 
constraints -- more details below.

*Constraints*
bq. The cardinality constraint is a placement_set with a maximum_concurrency 
constraint: see (4.3.3) Placement Strategy in my design doc.
If I am not wrong, the maximum_concurrency in your document corresponds to a 
single allocation/resource-request. Our min and max cardinalities are across 
applications -- for instance, to say "don't put more than 5 hbase 
servers (from any possible application) in a rack".

In general, as we showed in our design doc, you can use max and min 
cardinalities to also express affinity and anti-affinity constraints. This way 
we can have only a single type of constraint. What do you think?

bq. Will this patch support anti-affinity / affinity between apps? I uploaded 
my latest POC patch to YARN-1042; it supports affinity/anti-affinity for 
inter/intra apps. We can easily extend it to support intra/inter resource 
requests within the app.
Yes, this is a major use case for us. The current patch can already support it. 
And this is why we want to make more use of the tags and of planning, since 
they would allow us to specify inter-app constraints without needing to know 
the app ID of the other job.

bq. The major logic of this patch depends on node label manager dynamic tag 
changes. First of all, I'm not sure NLM works efficiently when node labels 
change rapidly (we could update labels on a node on every container 
allocate/release). And I'm not sure how you plan to prevent malicious 
applications from adding labels. For example, if a distributed shell 
application claims it is an "hbase master" just for fun, how do we enforce 
cardinality logic like "only put 10 HBase masters in the rack"?
Good points.
Regarding scalability, we have not seen any problems so far (we update tags at 
allocate/release), but we have not run very large-scale experiments -- I will 
update you on that.
For the malicious AM, I am not sure the application would benefit from 
lying. But even if it does, we can use cluster-wide constraints to limit such 
AMs. Still, I agree more thought has to be given to this matter -- it's good 
you brought it up.

*Scheduling*
bq. It might be better to implement complex scheduling logic like 
affinity-between-apps and cardinality in a global scheduling way. (YARN-5139)
We will be more than happy to use any advancement in the scheduler that is 
available!
I fully believe that global scheduling (i.e., application-centric rather than 
node-centric scheduling) is much more appropriate and will give better 
results. We did not use it in our first patch, as it was not available, but we 
are happy to try it out.

*Planning*
bq. I'm not sure what the LRA planner will look like; should it be a separate 
scheduler running in parallel? I didn't see your patch use that approach.
The idea here is to be able to make more holistic placement decisions across 
applications. What if you place your HBase service in a way that does not let 
a subsequent Heron app be placed in the cluster at all?
We envision it to be outside of the scheduler, similar to the reservation 
system (YARN-1051).
Users will also be able to submit multiple applications at once and specify 
constraints among them.
It is not in the initial version of the patch.

*Suggestions*
bq. Could you take a look at the global scheduling patch I attached to 
YARN-5139 to see if it is possible to build the new features added in your 
patch on top of the global scheduling framework? Also, please share your 
overall feedback on the global scheduling framework: efficiency, 
extensibility, etc.
I will check the global scheduler, and as I said above, I'd be happy to use it.

bq. It will be better to design Java API for this ticket, both of our poc 
patches (this one and the 

[jira] [Commented] (YARN-5429) Fix @return related javadoc warnings in yarn-api

2016-08-05 Thread Vrushali C (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410198#comment-15410198
 ] 

Vrushali C commented on YARN-5429:
--

Thanks [~varun_saxena] and [~templedf]. 

> Fix @return related javadoc warnings in yarn-api
> 
>
> Key: YARN-5429
> URL: https://issues.apache.org/jira/browse/YARN-5429
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
> Fix For: 2.9.0
>
> Attachments: YARN-5429.01.patch, YARN-5429.02.patch, 
> YARN-5429.03.patch, YARN-5429.04.patch
>
>
> As part of YARN-4977, filing a subtask to fix a subset of the javadoc 
> warnings in yarn-api.






[jira] [Updated] (YARN-5477) ApplicationId should not be visible to client before NEW_SAVING state

2016-08-05 Thread Yesha Vora (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yesha Vora updated YARN-5477:
-
Description: 
We should not return the applicationId to the client before the application 
enters the NEW_SAVING state.

By design, RM restart/failover is not supported while an application is in the 
NEW state. Thus, it makes sense to return the appId to the client only after 
the application enters the NEW_SAVING state.



  was:
we should not return application to client before entering NEW_SAVING state. 

As per design, RM restart/failover is not supported when application is in NEW 
state. Thus, It makes sense to return appId to client after entering to 
NEW_SAVING state.




> ApplicationId should not be visible to client before NEW_SAVING state
> -
>
> Key: YARN-5477
> URL: https://issues.apache.org/jira/browse/YARN-5477
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Yesha Vora
>Priority: Critical
>
> We should not return the applicationId to the client before the application 
> enters the NEW_SAVING state.
> By design, RM restart/failover is not supported while an application is in 
> the NEW state. Thus, it makes sense to return the appId to the client only 
> after the application enters the NEW_SAVING state.






[jira] [Created] (YARN-5477) ApplicationId should not be visible to client before NEW_SAVING state

2016-08-05 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-5477:


 Summary: ApplicationId should not be visible to client before 
NEW_SAVING state
 Key: YARN-5477
 URL: https://issues.apache.org/jira/browse/YARN-5477
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Yesha Vora
Priority: Critical


We should not return the application to the client before it enters the 
NEW_SAVING state.

By design, RM restart/failover is not supported while an application is in the 
NEW state. Thus, it makes sense to return the appId to the client only after 
the application enters the NEW_SAVING state.








[jira] [Created] (YARN-5476) Nonexistent application reported in ACCEPTED state by YarnClientImpl

2016-08-05 Thread Yesha Vora (JIRA)
Yesha Vora created YARN-5476:


 Summary: Nonexistent application reported in ACCEPTED state by 
YarnClientImpl
 Key: YARN-5476
 URL: https://issues.apache.org/jira/browse/YARN-5476
 Project: Hadoop YARN
  Issue Type: Bug
  Components: yarn
Reporter: Yesha Vora
Priority: Critical


Steps to reproduce: 

* Create a cluster with RM HA enabled.
* Start a YARN application.
* While the application is in the NEW state, perform an RM failover. 

In this case, the client gets an "ApplicationNotFound" exception from YARN, 
and the application goes to the ACCEPTED state and gets stuck. 

At this point, if {{yarn application -status <Application ID>}} is run, it 
reports that the application is in the ACCEPTED state. 
This state is misleading. 
{code}
hrt_qa@xxx:/root> yarn application -status application_1470379565464_0001
16/08/05 17:24:29 INFO impl.TimelineClientImpl: Timeline service address: 
https://xxx:8190/ws/v1/timeline/
16/08/05 17:24:30 INFO client.AHSProxy: Connecting to Application History 
server at xxx/xxx:10200
16/08/05 17:24:31 WARN retry.RetryInvocationHandler: Exception while invoking 
ApplicationClientProtocolPBClientImpl.getApplicationReport over rm1. Not 
retrying because try once and fail.
org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException: Application 
with id 'application_1470379565464_0001' doesn't exist in RM.
at 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:331)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:175)
at 
org.apache.hadoop.yarn.proto.ApplicationClientProtocol$ApplicationClientProtocolService$2.callBlockingMethod(ApplicationClientProtocol.java:417)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.yarn.ipc.RPCUtil.instantiateException(RPCUtil.java:53)
at 
org.apache.hadoop.yarn.ipc.RPCUtil.unwrapAndThrowException(RPCUtil.java:101)
at 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getApplicationReport(ApplicationClientProtocolPBClientImpl.java:194)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:278)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:194)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:176)
at com.sun.proxy.$Proxy18.getApplicationReport(Unknown Source)
at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getApplicationReport(YarnClientImpl.java:436)
at 
org.apache.hadoop.yarn.client.cli.ApplicationCLI.printApplicationReport(ApplicationCLI.java:481)
at 
org.apache.hadoop.yarn.client.cli.ApplicationCLI.run(ApplicationCLI.java:160)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at 
org.apache.hadoop.yarn.client.cli.ApplicationCLI.main(ApplicationCLI.java:83)
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.ApplicationNotFoundException):
 Application with id 'application_1470379565464_0001' doesn't exist in RM.
at 
org.apache.hadoop.yarn.server.resourcemanager.ClientRMService.getApplicationReport(ClientRMService.java:331)
at 
org.apache.hadoop.yarn.api.impl.pb.service.ApplicationClientProtocolPBServiceImpl.getApplicationReport(ApplicationClientProtocolPBServiceImpl.java:175)

[jira] [Commented] (YARN-5410) Bootstrap Router module

2016-08-05 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410142#comment-15410142
 ] 

Subru Krishnan commented on YARN-5410:
--

[~giovanni.fumarola], I just committed all API patches & rebased branch 
YARN-2915 with trunk, so can you try recreating the patch? Thanks.

> Bootstrap Router module
> ---
>
> Key: YARN-5410
> URL: https://issues.apache.org/jira/browse/YARN-5410
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5410-YARN-2915-v1.patch, 
> YARN-5410-YARN-2915-v2.patch, YARN-5410-YARN-2915-v3.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). This 
> JIRA tracks the creation of a new sub-module for the Router.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5457) Refactor DistributedScheduling framework to pull out common functionality

2016-08-05 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5457:
--
Attachment: YARN-5457.001.patch

Attaching an initial patch. [~kkaranasos], let me know what you think.

> Refactor DistributedScheduling framework to pull out common functionality
> -
>
> Key: YARN-5457
> URL: https://issues.apache.org/jira/browse/YARN-5457
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Arun Suresh
>Assignee: Arun Suresh
> Attachments: YARN-5457.001.patch
>
>
> Opening this JIRA to track some refactoring missed in YARN-5113:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5457) Refactor DistributedScheduling framework to pull out common functionality

2016-08-05 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated YARN-5457:
--
Assignee: Arun Suresh  (was: Konstantinos Karanasos)

> Refactor DistributedScheduling framework to pull out common functionality
> -
>
> Key: YARN-5457
> URL: https://issues.apache.org/jira/browse/YARN-5457
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>
> Opening this JIRA to track some refactoring missed in YARN-5113:



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5410) Bootstrap Router module

2016-08-05 Thread Giovanni Matteo Fumarola (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410035#comment-15410035
 ] 

Giovanni Matteo Fumarola commented on YARN-5410:


[~subru] package-info does not need JavaDoc, and findbugs is still failing. Can 
you take a look?

> Bootstrap Router module
> ---
>
> Key: YARN-5410
> URL: https://issues.apache.org/jira/browse/YARN-5410
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5410-YARN-2915-v1.patch, 
> YARN-5410-YARN-2915-v2.patch, YARN-5410-YARN-2915-v3.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client requests to the appropriate ResourceManager(s). This 
> JIRA tracks the creation of a new sub-module for the Router.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5470) Differentiate exactly match with regex in yarn log CLI

2016-08-05 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15410028#comment-15410028
 ] 

Vinod Kumar Vavilapalli commented on YARN-5470:
---

Sorry for pitching in late - seems usual nowadays.

{code}
+pw.println(" -regex  Work with -log_files to find 
matched");
+pw.println(" files by using java regex.");
{code}

Instead of this, why don't we make the options {{log_files}} and 
{{log_files_pattern}} separate, so they are completely independent and the user 
can specify only one of them? {{-regex}} as a standalone boolean option doesn't 
look right.
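
A minimal sketch of the two options modeled as independent and mutually 
exclusive (assuming commons-cli 1.3+ for DefaultParser; the option names follow 
the suggestion above and are illustrative, not a committed API):

{code}
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;
import org.apache.commons.cli.ParseException;

class LogFilesOptionsSketch {
  static CommandLine parse(String[] args) throws ParseException {
    Options opts = new Options();
    // Exact file names only; never interpreted as a regex.
    opts.addOption(new Option("log_files", true, "exact log file names to fetch"));
    // Regex only; never treated as a literal file name.
    opts.addOption(new Option("log_files_pattern", true,
        "java regex matched against log file names"));
    CommandLine cli = new DefaultParser().parse(opts, args);
    if (cli.hasOption("log_files") && cli.hasOption("log_files_pattern")) {
      throw new ParseException(
          "Specify only one of -log_files and -log_files_pattern");
    }
    return cli;
  }
}
{code}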

> Differentiate exactly match with regex in yarn log CLI
> --
>
> Key: YARN-5470
> URL: https://issues.apache.org/jira/browse/YARN-5470
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-5470.1.patch, YARN-5470.2.patch, YARN-5470.3.patch, 
> YARN-5470.3.patch
>
>
> Since YARN-5089, we support regular expressions in the YARN log CLI 
> "-logFiles" option. However, we should differentiate exact match from regex 
> match, as a user could put something like "system.out" here, which has 
> different semantics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4573) TestRMAppTransitions.testAppRunningKill and testAppKilledKilled fail on trunk

2016-08-05 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated YARN-4573:
-
Fix Version/s: (was: 2.9.0)
   2.7.4
   2.6.5
   2.8.0

Thanks, [~bwtakacy]!  I committed this to branch-2.8, branch-2.7, and 
branch-2.6 as well.


> TestRMAppTransitions.testAppRunningKill and testAppKilledKilled fail on trunk
> -
>
> Key: YARN-4573
> URL: https://issues.apache.org/jira/browse/YARN-4573
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager, test
>Reporter: Takashi Ohnishi
>Assignee: Takashi Ohnishi
>  Labels: jenkins
> Fix For: 2.8.0, 2.6.5, 2.7.4
>
> Attachments: YARN-4573.1.patch, YARN-4573.2.patch
>
>
> These tests often fails with 
> {code}
> testAppRunningKill[0](org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions)
>   Time elapsed: 0.042 sec  <<< FAILURE!
> java.lang.AssertionError: application finish time is not greater then 0
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions.assertTimesAtFinish(TestRMAppTransitions.java:321)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions.assertKilled(TestRMAppTransitions.java:338)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions.testAppRunningKill(TestRMAppTransitions.java:760)
> testAppKilledKilled[0](org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions)
>   Time elapsed: 0.04 sec  <<< FAILURE!
> java.lang.AssertionError: application finish time is not greater then 0
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions.assertTimesAtFinish(TestRMAppTransitions.java:321)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.rmapp.TestRMAppTransitions.testAppKilledKilled(TestRMAppTransitions.java:925)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4624) NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI

2016-08-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409980#comment-15409980
 ] 

Hudson commented on YARN-4624:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10227 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10227/])
YARN-4624. NPE in PartitionQueueCapacitiesInfo while accessing Schduler 
(naganarasimha_gr: rev d81b8163b4e5c0466a6af6e1068f512c5fd24a61)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/PartitionQueueCapacitiesInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/QueueCapacitiesInfo.java


> NPE in PartitionQueueCapacitiesInfo while accessing Schduler UI
> ---
>
> Key: YARN-4624
> URL: https://issues.apache.org/jira/browse/YARN-4624
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: SchedulerUIWithOutLabelMapping.png, YARN-2674-002.patch, 
> YARN-4624-003.patch, YARN-4624.4.patch, YARN-4624.patch
>
>
> Scenario:
> ===
> Configure node labels and add them to the cluster
> Start the cluster
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.dao.PartitionQueueCapacitiesInfo.getMaxAMLimitPercentage(PartitionQueueCapacitiesInfo.java:114)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderQueueCapacityInfo(CapacitySchedulerPage.java:163)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.renderLeafQueueInfoWithPartition(CapacitySchedulerPage.java:105)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$LeafQueueInfoBlock.render(CapacitySchedulerPage.java:94)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueueBlock.render(CapacitySchedulerPage.java:293)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock$Block.subView(HtmlBlock.java:43)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet.Hamlet$LI._(Hamlet.java:7702)
>   at 
> org.apache.hadoop.yarn.server.resourcemanager.webapp.CapacitySchedulerPage$QueuesBlock.render(CapacitySchedulerPage.java:447)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4091) Add REST API to retrieve scheduler activity

2016-08-05 Thread Eric Payne (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409976#comment-15409976
 ] 

Eric Payne commented on YARN-4091:
--

bq. Any other suggestions? Sunil G / Eric Payne.
Latest patch LGTM +1. Thanks [~ChenGe] and [~leftnoteasy].

> Add REST API to retrieve scheduler activity
> ---
>
> Key: YARN-4091
> URL: https://issues.apache.org/jira/browse/YARN-4091
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Sunil G
>Assignee: Chen Ge
> Fix For: 3.0.0-alpha2
>
> Attachments: Improvement on debugdiagnostic information - YARN.pdf, 
> SchedulerActivityManager-TestReport v2.pdf, 
> SchedulerActivityManager-TestReport.pdf, YARN-4091-design-doc-v1.pdf, 
> YARN-4091.1.patch, YARN-4091.2.patch, YARN-4091.3.patch, YARN-4091.4.patch, 
> YARN-4091.5.patch, YARN-4091.5.patch, YARN-4091.6.patch, YARN-4091.7.patch, 
> YARN-4091.8.patch, YARN-4091.preliminary.1.patch, app_activities v2.json, 
> app_activities.json, node_activities v2.json, node_activities.json
>
>
> As schedulers are improved with various new capabilities, more of the 
> configurations that tune them start to take actions such as limiting container 
> assignment to an application or introducing a delay before allocating a 
> container. No clear information is passed down from the scheduler to the 
> outside world under these various scenarios. This makes debugging much tougher.
> This ticket is an effort to introduce more defined states at the various points 
> in the scheduler where it skips/rejects container assignment, activates an 
> application, etc. Such information will help users know what is happening in 
> the scheduler.
> Attaching a short proposal for initial discussion. We would like to improve on 
> this as we discuss.
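
A hypothetical usage sketch of the retrieval side (the endpoint path is 
illustrative, inferred from the attached node_activities.json / 
app_activities.json naming, and not confirmed here):

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

class SchedulerActivitiesClientSketch {
  // Fetch one scheduling-activity snapshot from the RM web services as JSON.
  static String fetch(String rmHost) throws Exception {
    URL url = new URL("http://" + rmHost
        + ":8088/ws/v1/cluster/scheduler/activities");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    StringBuilder body = new StringBuilder();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
      for (String line; (line = in.readLine()) != null; ) {
        body.append(line).append('\n');
      }
    }
    return body.toString();
  }
}
{code}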



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5429) Fix @return related javadoc warnings in yarn-api

2016-08-05 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409965#comment-15409965
 ] 

Varun Saxena commented on YARN-5429:


Committed to trunk, branch-2.
Thanks [~vrushalic] for your contribution and [~templedf] for the reviews.

> Fix @return related javadoc warnings in yarn-api
> 
>
> Key: YARN-5429
> URL: https://issues.apache.org/jira/browse/YARN-5429
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
> Fix For: 2.9.0
>
> Attachments: YARN-5429.01.patch, YARN-5429.02.patch, 
> YARN-5429.03.patch, YARN-5429.04.patch
>
>
> As part of YARN-4977, filing a subtask to fix a subset of the javadoc 
> warnings in yarn-api.
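
For context: the warnings being fixed here are the ones javadoc 8 (doclint) 
emits for {{@return}} tags that are missing or lack a description. An 
illustrative fix, on a hypothetical method:

{code}
abstract class JavadocReturnSketch {
  /**
   * Before the fix this method's javadoc had no @return description, which
   * javadoc 8 reports as a warning; adding the description silences it.
   *
   * @return the name of the queue the application was submitted to
   */
  abstract String getQueue();
}
{code}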



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5429) Fix @return related javadoc warnings in yarn-api

2016-08-05 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5429:
---
Fix Version/s: (was: 3.0.0-alpha2)
   2.9.0

> Fix @return related javadoc warnings in yarn-api
> 
>
> Key: YARN-5429
> URL: https://issues.apache.org/jira/browse/YARN-5429
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
> Fix For: 2.9.0
>
> Attachments: YARN-5429.01.patch, YARN-5429.02.patch, 
> YARN-5429.03.patch, YARN-5429.04.patch
>
>
> As part of YARN-4977, filing a subtask to fix a subset of the javadoc 
> warnings in yarn-api.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3664) Federation PolicyStore internal APIs

2016-08-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409931#comment-15409931
 ] 

Hadoop QA commented on YARN-3664:
-

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
2s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
40s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 28s 
{color} | {color:green} hadoop-yarn-server-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 59s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822356/YARN-3664-YARN-2915-v4.patch
 |
| JIRA Issue | YARN-3664 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  cc  |
| uname | Linux fb35e0cec049 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | YARN-2915 / f6c60fb |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-YARN-Build/12659/testReport/ |
| modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12659/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Federation PolicyStore internal APIs
> 
>
> Key: YARN-3664
> URL: https://issues.apache.org/jira/browse/YARN-3664
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan

[jira] [Updated] (YARN-3664) Federation PolicyStore internal APIs

2016-08-05 Thread Subru Krishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-3664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subru Krishnan updated YARN-3664:
-
Attachment: YARN-3664-YARN-2915-v4.patch

Same patch as before, but rebased on YARN-5307. Triggering Yetus before 
committing.

> Federation PolicyStore internal APIs
> 
>
> Key: YARN-3664
> URL: https://issues.apache.org/jira/browse/YARN-3664
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-3664-YARN-2915-v0.patch, 
> YARN-3664-YARN-2915-v1.patch, YARN-3664-YARN-2915-v2.patch, 
> YARN-3664-YARN-2915-v3.patch, YARN-3664-YARN-2915-v4.patch
>
>
> The federation Policy Store contains information about the capacity 
> allocations made by users, their mapping to sub-clusters, and the policies 
> that each of the components (Router, AMRMProxy, RMs) should enforce.
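
As a rough illustration of the shape such an internal API might take 
(hypothetical names and signatures, not the actual YARN-3664 interface):

{code}
// Hypothetical sketch: a policy store keyed by queue, returning the serialized
// policy each component (Router, AMRMProxy, RM) should enforce for that queue.
interface PolicyStoreSketch {
  /** @return the serialized policy configuration to enforce for the queue */
  byte[] getPolicyConfiguration(String queue);

  /** Store or replace the policy configuration for the given queue. */
  void setPolicyConfiguration(String queue, byte[] policyConfig);
}
{code}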



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4888) Changes in scheduler to identify resource-requests explicitly by allocation-id

2016-08-05 Thread Subru Krishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409905#comment-15409905
 ] 

Subru Krishnan commented on YARN-4888:
--

Thanks [~asuresh] and [~leftnoteasy] for reviewing/committing this.

> Changes in scheduler to identify resource-requests explicitly by allocation-id
> --
>
> Key: YARN-4888
> URL: https://issues.apache.org/jira/browse/YARN-4888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Fix For: 2.9.0
>
> Attachments: YARN-4888-WIP.patch, YARN-4888-v0.patch, 
> YARN-4888-v2.patch, YARN-4888-v3.patch, YARN-4888-v4.patch, 
> YARN-4888-v5.patch, YARN-4888-v6.patch, YARN-4888-v7.patch, 
> YARN-4888-v8.patch, YARN-4888.001.patch
>
>
> YARN-4879 puts forward the notion of identifying allocate requests 
> explicitly. This JIRA is to track the changes in RM app scheduling data 
> structures to accomplish it. Please refer to the design doc in the parent 
> JIRA for details.
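
A minimal sketch of what this enables on the client side (the setter follows 
this JIRA family's naming; treat the exact API as illustrative):

{code}
import org.apache.hadoop.yarn.api.records.Priority;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.api.records.ResourceRequest;

class AllocationIdSketch {
  static ResourceRequest newTaggedAsk() {
    // Tag the ask with an allocation id so the scheduler can echo it back on
    // the containers allocated for this specific request.
    ResourceRequest req = ResourceRequest.newInstance(
        Priority.newInstance(1), ResourceRequest.ANY,
        Resource.newInstance(1024, 1), 1);
    req.setAllocationRequestId(42L);
    return req;
  }
}
{code}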



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5410) Bootstrap Router module

2016-08-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409885#comment-15409885
 ] 

Hadoop QA commented on YARN-5410:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 4m 46s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
0s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
25s {color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 5s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
11s {color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
hadoop-yarn-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s 
{color} | {color:green} YARN-2915 passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 40s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 23s 
{color} | {color:red} root: The patch generated 1 new + 0 unchanged - 0 fixed = 
1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 5s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
hadoop-yarn-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-yarn-server-router in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 9s 
{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 55m 9s {color} 
| {color:red} hadoop-yarn-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-yarn-server-router in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 51s {color} 
| {color:red} hadoop-yarn-project in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
30s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 167m 30s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.nodemanager.TestDirectoryCollection |
|   | hadoop.yarn.server.resourcemanager.TestWorkPreservingRMRestart |

[jira] [Commented] (YARN-5470) Differentiate exactly match with regex in yarn log CLI

2016-08-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409849#comment-15409849
 ] 

Hudson commented on YARN-5470:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10226 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10226/])
YARN-5470. Differentiate exactly match with regex in yarn log CLI. (junping_du: 
rev e605d47df05619c6b1c18aca59f709899498da75)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/cli/TestLogsCLI.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/LogCLIHelpers.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/main/java/org/apache/hadoop/yarn/client/cli/LogsCLI.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/logaggregation/ContainerLogsRequest.java


> Differentiate exactly match with regex in yarn log CLI
> --
>
> Key: YARN-5470
> URL: https://issues.apache.org/jira/browse/YARN-5470
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-5470.1.patch, YARN-5470.2.patch, YARN-5470.3.patch, 
> YARN-5470.3.patch
>
>
> Since YARN-5089, we support regular expressions in the YARN log CLI 
> "-logFiles" option. However, we should differentiate exact match from regex 
> match, as a user could put something like "system.out" here, which has 
> different semantics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5470) Differentiate exactly match with regex in yarn log CLI

2016-08-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409835#comment-15409835
 ] 

Junping Du commented on YARN-5470:
--

Just filed YARN-5475 to address the unit test failure in TestAggregatedLogFormat.

> Differentiate exactly match with regex in yarn log CLI
> --
>
> Key: YARN-5470
> URL: https://issues.apache.org/jira/browse/YARN-5470
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Fix For: 2.9.0
>
> Attachments: YARN-5470.1.patch, YARN-5470.2.patch, YARN-5470.3.patch, 
> YARN-5470.3.patch
>
>
> Since YARN-5089, we support regular expressions in the YARN log CLI 
> "-logFiles" option. However, we should differentiate exact match from regex 
> match, as a user could put something like "system.out" here, which has 
> different semantics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5475) Test failed for TestAggregatedLogFormat on trunk

2016-08-05 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated YARN-5475:
-
Description: 
From some jenkins run: 
https://builds.apache.org/job/PreCommit-YARN-Build/12651/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
The error message for the test is:
{noformat}
Tests run: 3, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 1.114 sec <<< 
FAILURE! - in org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat
testReadAcontainerLogs1(org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat)
  Time elapsed: 0.012 sec  <<< ERROR!
java.io.IOException: Unable to create directory : 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestAggregatedLogFormat/testReadAcontainerLogs1/srcFiles/application_1_0001/container_1_0001_01_01/subDir
at 
org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.getOutputStreamWriter(TestAggregatedLogFormat.java:403)
at 
org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.writeSrcFile(TestAggregatedLogFormat.java:382)
at 
org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.testReadAcontainerLog(TestAggregatedLogFormat.java:211)
at 
org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.testReadAcontainerLogs1(TestAggregatedLogFormat.java:185)
{noformat}

  was:
Tests run: 3, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 1.114 sec <<< 
FAILURE! - in org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat
testReadAcontainerLogs1(org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat)
  Time elapsed: 0.012 sec  <<< ERROR!
java.io.IOException: Unable to create directory : 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestAggregatedLogFormat/testReadAcontainerLogs1/srcFiles/application_1_0001/container_1_0001_01_01/subDir
at 
org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.getOutputStreamWriter(TestAggregatedLogFormat.java:403)
at 
org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.writeSrcFile(TestAggregatedLogFormat.java:382)
at 
org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.testReadAcontainerLog(TestAggregatedLogFormat.java:211)
at 
org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.testReadAcontainerLogs1(TestAggregatedLogFormat.java:185)


> Test failed for TestAggregatedLogFormat on trunk
> 
>
> Key: YARN-5475
> URL: https://issues.apache.org/jira/browse/YARN-5475
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Junping Du
>
> From some jenkins run: 
> https://builds.apache.org/job/PreCommit-YARN-Build/12651/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
> The error message for the test is:
> {noformat}
> Tests run: 3, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 1.114 sec <<< 
> FAILURE! - in org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat
> testReadAcontainerLogs1(org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat)
>   Time elapsed: 0.012 sec  <<< ERROR!
> java.io.IOException: Unable to create directory : 
> /testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestAggregatedLogFormat/testReadAcontainerLogs1/srcFiles/application_1_0001/container_1_0001_01_01/subDir
>   at 
> org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.getOutputStreamWriter(TestAggregatedLogFormat.java:403)
>   at 
> org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.writeSrcFile(TestAggregatedLogFormat.java:382)
>   at 
> org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.testReadAcontainerLog(TestAggregatedLogFormat.java:211)
>   at 
> org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.testReadAcontainerLogs1(TestAggregatedLogFormat.java:185)
> {noformat}
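
A frequent cause of this class of failure is a leftover directory (or a file 
where a directory is expected) from a prior run, making {{mkdirs()}} return 
false. A typical hardening for the test's directory setup, sketched with 
hypothetical names:

{code}
import java.io.File;
import java.io.IOException;
import org.apache.hadoop.fs.FileUtil;

class TestDirSetupSketch {
  static File ensureFreshDir(File root, String name) throws IOException {
    File dir = new File(root, name);
    // Clear any leftovers from a previous run before recreating the tree.
    if (dir.exists() && !FileUtil.fullyDelete(dir)) {
      throw new IOException("Unable to delete existing directory " + dir);
    }
    if (!dir.mkdirs()) {
      throw new IOException("Unable to create directory " + dir);
    }
    return dir;
  }
}
{code}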



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5475) Test failed for TestAggregatedLogFormat on trunk

2016-08-05 Thread Junping Du (JIRA)
Junping Du created YARN-5475:


 Summary: Test failed for TestAggregatedLogFormat on trunk
 Key: YARN-5475
 URL: https://issues.apache.org/jira/browse/YARN-5475
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Junping Du


Tests run: 3, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 1.114 sec <<< 
FAILURE! - in org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat
testReadAcontainerLogs1(org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat)
  Time elapsed: 0.012 sec  <<< ERROR!
java.io.IOException: Unable to create directory : 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/TestAggregatedLogFormat/testReadAcontainerLogs1/srcFiles/application_1_0001/container_1_0001_01_01/subDir
at 
org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.getOutputStreamWriter(TestAggregatedLogFormat.java:403)
at 
org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.writeSrcFile(TestAggregatedLogFormat.java:382)
at 
org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.testReadAcontainerLog(TestAggregatedLogFormat.java:211)
at 
org.apache.hadoop.yarn.logaggregation.TestAggregatedLogFormat.testReadAcontainerLogs1(TestAggregatedLogFormat.java:185)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4888) Changes in scheduler to identify resource-requests explicitly by allocation-id

2016-08-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409811#comment-15409811
 ] 

Hudson commented on YARN-4888:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10225 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10225/])
YARN-4888. Changes in scheduler to identify resource-requests explicitly 
(wangda: rev 3f100d76ff5df020dbb8ecd4f5b4f9736a0a8270)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestAppSchedulingInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/RegularContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/MockAM.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fifo/FifoScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/utils/BuilderUtils.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/proto/yarn_protos.proto
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/SchedulerRequestKey.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/TestSchedulingWithAllocationRequestId.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/FSAppAttempt.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/scheduler/OpportunisticContainerAllocator.java


> Changes in scheduler to identify resource-requests explicitly by allocation-id
> --
>
> Key: YARN-4888
> URL: https://issues.apache.org/jira/browse/YARN-4888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Fix For: 2.9.0
>
> Attachments: YARN-4888-WIP.patch, YARN-4888-v0.patch, 
> YARN-4888-v2.patch, YARN-4888-v3.patch, YARN-4888-v4.patch, 
> YARN-4888-v5.patch, YARN-4888-v6.patch, YARN-4888-v7.patch, 
> YARN-4888-v8.patch, YARN-4888.001.patch
>
>
> YARN-4879 puts forward the notion of identifying allocate requests 
> explicitly. This JIRA is to track the changes in RM app scheduling data 
> structures to accomplish it. Please refer to the design doc in the parent 
> JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-08-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409789#comment-15409789
 ] 

Hadoop QA commented on YARN-5382:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 214 unchanged - 1 fixed = 215 total (was 215) 
{color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 36m 50s {color} 
| {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 42s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822329/YARN-5382.13.patch |
| JIRA Issue | YARN-5382 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 39a6bb10d1ec 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d9a354c |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-YARN-Build/12658/artifact/patchprocess/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-YARN-Build/12658/artifact/patchprocess/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-YARN-Build/12658/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |

[jira] [Commented] (YARN-5470) Differentiate exactly match with regex in yarn log CLI

2016-08-05 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409780#comment-15409780
 ] 

Junping Du commented on YARN-5470:
--

The test failure is not related: I tried locally and it fails without this 
patch as well. Will file a separate issue to fix it. v3 patch LGTM. +1. Will 
commit it shortly.

> Differentiate exactly match with regex in yarn log CLI
> --
>
> Key: YARN-5470
> URL: https://issues.apache.org/jira/browse/YARN-5470
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5470.1.patch, YARN-5470.2.patch, YARN-5470.3.patch, 
> YARN-5470.3.patch
>
>
> Since YARN-5089, we support regular expressions in the YARN log CLI 
> "-logFiles" option. However, we should differentiate exact match from regex 
> match, as a user could put something like "system.out" here, which has 
> different semantics.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-4888) Changes in scheduler to identify resource-requests explicitly by allocation-id

2016-08-05 Thread Wangda Tan (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wangda Tan updated YARN-4888:
-
Summary: Changes in scheduler to identify resource-requests explicitly by 
allocation-id  (was: Changes in RM container allocation for identifying 
resource-requests explicitly)

> Changes in scheduler to identify resource-requests explicitly by allocation-id
> --
>
> Key: YARN-4888
> URL: https://issues.apache.org/jira/browse/YARN-4888
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: resourcemanager
>Reporter: Subru Krishnan
>Assignee: Subru Krishnan
> Attachments: YARN-4888-WIP.patch, YARN-4888-v0.patch, 
> YARN-4888-v2.patch, YARN-4888-v3.patch, YARN-4888-v4.patch, 
> YARN-4888-v5.patch, YARN-4888-v6.patch, YARN-4888-v7.patch, 
> YARN-4888-v8.patch, YARN-4888.001.patch
>
>
> YARN-4879 puts forward the notion of identifying allocate requests 
> explicitly. This JIRA is to track the changes in RM app scheduling data 
> structures to accomplish it. Please refer to the design doc in the parent 
> JIRA for details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4091) Add REST API to retrieve scheduler activity

2016-08-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409772#comment-15409772
 ] 

Hudson commented on YARN-4091:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10224 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10224/])
YARN-4091. Add REST API to retrieve scheduler activity. (Chen Ge and (wangda: 
rev e0d131f055ee126052ad4d0f7b0d192e6c730188)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/AbstractContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/activities/ActivityState.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/activities/AllocationActivity.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/AbstractCSQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/activities/ActivitiesManager.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/activities/ActivityNode.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/AppAllocationInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacitySchedulerContext.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/NodeAllocationInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/dao/ActivitiesInfo.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesCapacitySched.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/TestRMWebServicesSchedulerActivities.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/webapp/RMWebServices.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/activities/AllocationState.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/activities/ActivityDiagnosticConstant.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/activities/NodeAllocation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/common/fica/FiCaSchedulerApp.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/allocator/ContainerAllocator.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/activities/AppAllocation.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/activities/ActivitiesLogger.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/LeafQueue.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
* hadoop-yarn-project/hadoop-yarn/dev-support/findbugs-exclude.xml
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/ParentQueue.java

[jira] [Comment Edited] (YARN-4902) [Umbrella] Generalized and unified scheduling-strategies in YARN

2016-08-05 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409750#comment-15409750
 ] 

Wangda Tan edited comment on YARN-4902 at 8/5/16 5:38 PM:
--

Thanks for sharing this, [~kkaranasos]/[~pg1...@imperial.ac.uk],

I took a quick look at the design doc and POC patch.

*For the design doc:*
1) From the requirements perspective, I didn't see anything new; please remind 
me if I missed something.
- The cardinality constraint is a placement_set with a maximum_concurrency 
constraint: see {{(4.3.3) Placement Strategy}} in my design doc.
- The dynamic tags for node properties (hardware configs, etc.) are node 
constraints.
- The dynamic tags for applications: are they the same as allocation tags? 
Whether to use the node label manager is an implementation decision to me.

2) I'm not sure what the LRA planner will look like; should it be a separate 
scheduler running in parallel? I didn't see your patch use that approach.

*For the patch:*
3) It might be better to implement complex scheduling logic like 
affinity-between-apps and cardinality in a global scheduling fashion 
(YARN-5139).

4) Will this patch support anti-affinity / affinity between apps? I uploaded my 
latest POC patch to YARN-1042; it supports affinity/anti-affinity for 
inter/intra apps. We can easily extend it to support intra/inter resource 
requests within the app.

5) The major logic of this patch depends on dynamic tag changes in the node 
label manager. First, I'm not sure the NLM works efficiently when node labels 
change rapidly (we could be updating a node's labels on every container 
allocation / release). And I'm not sure how you plan to prevent a malicious 
application from adding labels. For example, if a distributed shell application 
claims it is an "hbase master" just for fun, how do we enforce cardinality 
logic like "only put 10 HBase masters in the rack"?

*Suggestions*
- Could you take a look at the global scheduling patch I attached to YARN-5139 
to see if it is possible to build the new features added in your patch on top 
of the global scheduling framework? Please also share your overall feedback on 
the global scheduling framework: efficiency, extensibility, etc.
- It would be better to design a Java API for this ticket; both of our POC 
patches (this one and the one I attached to YARN-1042) lack a solid API 
definition. It is very important to define the API first. Could you help with 
the API definition work?


was (Author: leftnoteasy):
Thanks for sharing this, [~kkaranasos]/[~pg1...@imperial.ac.uk],

I took a quick look at the design doc and POC patch.

*For the design doc:*
1) From the requirements perspective, I didn't see anything new; please remind 
me if I missed something.
- The cardinality constraint is a placement_set with a maximum_concurrency 
constraint: see {{(4.3.3) Placement Strategy}} in my design doc.
- The dynamic tags for node properties (hardware configs, etc.) are node 
constraints.
- The dynamic tags for applications: are they the same as allocation tags? 
Whether to use the node label manager is an implementation decision to me.

2) I'm not sure what the LRA planner will look like; should it be a separate 
scheduler running in parallel? I didn't see your patch use that approach.

*For the patch:*
3) It might be better to implement complex scheduling logic like 
affinity-between-apps and cardinality in a global scheduling fashion 
(YARN-5139).

4) Will this patch support anti-affinity / affinity between apps? I uploaded my 
latest POC patch to YARN-1042; it supports affinity/anti-affinity for 
inter/intra apps. We can easily extend it to support intra/inter resource 
requests within the app.

5) The major logic of this patch depends on dynamic tag changes in the node 
label manager. First, I'm not sure the NLM works efficiently when node labels 
change rapidly (we could be updating a node's labels on every container 
allocation / release). And I'm not sure how you plan to prevent a malicious 
application from adding labels. For example, if a distributed shell application 
claims it is an "hbase master" just for fun, how do we enforce cardinality 
logic like "only put 10 HBase masters in the rack"?

*Suggestions*
- Could you take a look at the global scheduling patch I attached to YARN-5139 
to see if it is possible to build the new features added in your patch on top 
of the global scheduling framework? Please also share your overall feedback on 
the global scheduling framework: efficiency, extensibility, etc.
- It would be better to design a Java API for this ticket; both of our POC 
patches (this one and the one I attached to YARN-1042) lack a solid API 
definition. It is very important to define the API first. Could you help with 
the API definition work?

> [Umbrella] Generalized and unified scheduling-strategies in YARN
> 
>
> Key: YARN-4902

[jira] [Commented] (YARN-4902) [Umbrella] Generalized and unified scheduling-strategies in YARN

2016-08-05 Thread Wangda Tan (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409750#comment-15409750
 ] 

Wangda Tan commented on YARN-4902:
--

Thanks for sharing this, [~kkaranasos]/[~pg1...@imperial.ac.uk],

I took a quick look at the design doc and POC patch.

*For the design doc:*
1) From the requirements perspective, I didn't see anything new; please remind 
me if I missed something.
- The cardinality constraint is a placement_set with a maximum_concurrency 
constraint: see {{(4.3.3) Placement Strategy}} in my design doc.
- The dynamic tags for node properties (hardware configs, etc.) are node 
constraints.
- The dynamic tags for applications: are they the same as allocation tags? 
Whether to use the node label manager is an implementation decision to me.

2) I'm not sure what the LRA planner will look like; should it be a separate 
scheduler running in parallel? I didn't see your patch use that approach.

*For the patch:*
3) It might be better to implement complex scheduling logic like 
affinity-between-apps and cardinality in a global scheduling fashion 
(YARN-5139).

4) Will this patch support anti-affinity / affinity between apps? I uploaded my 
latest POC patch to YARN-1042; it supports affinity/anti-affinity for 
inter/intra apps. We can easily extend it to support intra/inter resource 
requests within the app.

5) The major logic of this patch depends on dynamic tag changes in the node 
label manager. First, I'm not sure the NLM works efficiently when node labels 
change rapidly (we could be updating a node's labels on every container 
allocation / release). And I'm not sure how you plan to prevent a malicious 
application from adding labels. For example, if a distributed shell application 
claims it is an "hbase master" just for fun, how do we enforce cardinality 
logic like "only put 10 HBase masters in the rack"?

*Suggestions*
- Could you take a look at the global scheduling patch I attached to YARN-5139 
to see if it is possible to build the new features added in your patch on top 
of the global scheduling framework? Please also share your overall feedback on 
the global scheduling framework: efficiency, extensibility, etc.
- It would be better to design a Java API for this ticket; both of our POC 
patches (this one and the one I attached to YARN-1042) lack a solid API 
definition. It is very important to define the API first. Could you help with 
the API definition work?
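
To make the cardinality example concrete, a purely illustrative sketch; the 
builder and class names below are hypothetical, not an API from either POC 
patch:
{code}
// Hypothetical expression of "no more than 10 containers tagged
// 'hbase-master' within any single rack".
PlacementConstraint tenMastersPerRack = Constraints.cardinality(
    "rack",           // scope in which tag occurrences are counted
    0,                // minimum co-located 'hbase-master' containers
    10,               // maximum co-located 'hbase-master' containers
    "hbase-master");  // allocation tag being constrained
{code}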

> [Umbrella] Generalized and unified scheduling-strategies in YARN
> 
>
> Key: YARN-4902
> URL: https://issues.apache.org/jira/browse/YARN-4902
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Vinod Kumar Vavilapalli
>Assignee: Wangda Tan
> Attachments: Generalized and unified scheduling-strategies in YARN 
> -v0.pdf, LRA-scheduling-design.v0.pdf, YARN-5468.prototype.patch
>
>
> Apache Hadoop YARN's ResourceRequest mechanism is the core part of the YARN's 
> scheduling API for applications to use. The ResourceRequest mechanism is a 
> powerful API for applications (specifically ApplicationMasters) to indicate 
> to YARN what size of containers are needed, and where in the cluster etc.
> However, a host of new feature requirements are making the API increasingly 
> complex and difficult for users to understand, and very complicated to 
> implement within the code-base.
> This JIRA aims to generalize and unify all such scheduling-strategies in YARN.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5470) Differentiate exactly match with regex in yarn log CLI

2016-08-05 Thread Xuan Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409716#comment-15409716
 ] 

Xuan Gong commented on YARN-5470:
-

The test case failures are not related.

> Differentiate exactly match with regex in yarn log CLI
> --
>
> Key: YARN-5470
> URL: https://issues.apache.org/jira/browse/YARN-5470
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Xuan Gong
>Assignee: Xuan Gong
> Attachments: YARN-5470.1.patch, YARN-5470.2.patch, YARN-5470.3.patch, 
> YARN-5470.3.patch
>
>
> Since YARN-5089, we support regular expressions in the YARN log CLI 
> "-logFiles" option. However, we should differentiate exact match from regex 
> match, as a user could put something like "system.out" here, which has 
> different semantics under the two.
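
To see why the two differ, a minimal self-contained illustration (plain JDK 
regex, independent of the CLI code):
{code}
import java.util.regex.Pattern;

public class LogFileMatch {
  public static void main(String[] args) {
    // As a regex, the '.' in "system.out" is a wildcard, so the pattern
    // also matches "systemXout"; an exact match must compare literally.
    System.out.println(Pattern.matches("system.out", "systemXout")); // true
    System.out.println("system.out".equals("systemXout"));           // false
  }
}
{code}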



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5382) RM does not audit log kill request for active applications

2016-08-05 Thread Vrushali C (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vrushali C updated YARN-5382:
-
Attachment: YARN-5382.13.patch

Thanks [~jianhe], appreciate your time and review on this. 
Uploading patch 13 for trunk. 

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, 
> YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch, 
> YARN-5382-branch-2.7.09.patch, YARN-5382-branch-2.7.10.patch, 
> YARN-5382-branch-2.7.11.patch, YARN-5382-branch-2.7.12.patch, 
> YARN-5382.06.patch, YARN-5382.07.patch, YARN-5382.08.patch, 
> YARN-5382.09.patch, YARN-5382.10.patch, YARN-5382.11.patch, 
> YARN-5382.12.patch, YARN-5382.13.patch
>
>
> ClientRMService will audit a kill request, but only if it either fails to 
> issue the kill or the kill is sent to an already finished application. It 
> does not create a log entry when the application is active, which is 
> arguably the most important case to audit.
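
A minimal sketch of the missing call (assuming the existing {{RMAuditLogger}} 
helpers and the locals already present in {{forceKillApplication}}; not 
necessarily the committed patch):
{code}
// After the kill event is dispatched for a still-active application,
// record the request explicitly instead of only auditing failures.
RMAuditLogger.logSuccess(callerUGI.getShortUserName(),
    AuditConstants.KILL_APP_REQUEST, "ClientRMService", applicationId);
{code}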



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-4091) Add REST API to retrieve scheduler activity

2016-08-05 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-4091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409700#comment-15409700
 ] 

Sunil G commented on YARN-4091:
---

Patch looks good to me too. Thanks.

> Add REST API to retrieve scheduler activity
> ---
>
> Key: YARN-4091
> URL: https://issues.apache.org/jira/browse/YARN-4091
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: capacity scheduler, resourcemanager
>Affects Versions: 2.7.0
>Reporter: Sunil G
>Assignee: Chen Ge
> Attachments: Improvement on debugdiagnostic information - YARN.pdf, 
> SchedulerActivityManager-TestReport v2.pdf, 
> SchedulerActivityManager-TestReport.pdf, YARN-4091-design-doc-v1.pdf, 
> YARN-4091.1.patch, YARN-4091.2.patch, YARN-4091.3.patch, YARN-4091.4.patch, 
> YARN-4091.5.patch, YARN-4091.5.patch, YARN-4091.6.patch, YARN-4091.7.patch, 
> YARN-4091.8.patch, YARN-4091.preliminary.1.patch, app_activities v2.json, 
> app_activities.json, node_activities v2.json, node_activities.json
>
>
> As schedulers are improved with various new capabilities, more 
> configurations that tune the schedulers start to take actions such as 
> limiting container assignment to an application, or introducing a delay 
> before allocating a container, etc. No clear information is passed down from 
> the scheduler to the outside world under these various scenarios, which 
> makes debugging much harder.
> This ticket is an effort to introduce more defined states at the various 
> points where the scheduler skips/rejects a container assignment, activates 
> an application, etc. Such information will help users know what is happening 
> in the scheduler.
> Attaching a short proposal for initial discussion. We would like to improve 
> on this as we discuss.
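
A sketch of how such an endpoint might be queried once exposed (the path below 
is an assumption based on the {{RMWebServices}} changes; adjust to the actual 
route added by the patch):
{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ActivitiesQuery {
  public static void main(String[] args) throws Exception {
    // Assumed endpoint path on the RM web services port.
    URL url = new URL("http://rm-host:8088/ws/v1/cluster/scheduler/activities");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream()))) {
      in.lines().forEach(System.out::println); // dump the JSON activity record
    }
  }
}
{code}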



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5394) Remove bind-mount /etc/passwd to Docker Container

2016-08-05 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409683#comment-15409683
 ] 

Daniel Templeton commented on YARN-5394:


Yep, +1.

> Remove bind-mount /etc/passwd to Docker Container
> -
>
> Key: YARN-5394
> URL: https://issues.apache.org/jira/browse/YARN-5394
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
> Attachments: YARN-5394.002.patch, YARN-5394.003.patch
>
>
> The current LCE (DockerLinuxContainerRuntime) bind-mounts /etc/passwd into 
> the container, and it seems to use the wrong target file name 
> "/etc/password" for the container:
> {panel}
> .addMountLocation("/etc/passwd", "/etc/password:ro");
> {panel}
> The biggest issue with bind-mounting /etc/passwd is that it overrides the 
> users defined in the Docker image, which is not expected. Removing it won't 
> affect existing use cases.
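
A sketch of the implied change using the {{DockerRunCommand}} builder from the 
quoted snippet (the surrounding calls and variable names are illustrative 
assumptions):
{code}
DockerRunCommand runCommand =
    new DockerRunCommand(containerIdStr, runAsUser, imageName);
runCommand.detachOnRun()
    // .addMountLocation("/etc/passwd", "/etc/password:ro")  -- dropped, so
    // the image's own /etc/passwd defines the container's users
    .addMountLocation(containerWorkDir.toString(), containerWorkDir.toString());
{code}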



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5333) Some recovered apps are put into default queue when RM HA

2016-08-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409650#comment-15409650
 ] 

Hudson commented on YARN-5333:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10223 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10223/])
YARN-5333. Some recovered apps are put into default queue when RM HA. 
(rohithsharmaks: rev d9a354c2f39274b2810144d1ae133201e44e3bfc)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/TestFairScheduler.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/AdminService.java


> Some recovered apps are put into default queue when RM HA
> -
>
> Key: YARN-5333
> URL: https://issues.apache.org/jira/browse/YARN-5333
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jun Gong
>Assignee: Jun Gong
> Fix For: 2.9.0
>
> Attachments: YARN-5333.01.patch, YARN-5333.02.patch, 
> YARN-5333.03.patch, YARN-5333.04.patch, YARN-5333.05.patch, 
> YARN-5333.06.patch, YARN-5333.07.patch, YARN-5333.08.patch, 
> YARN-5333.09.patch, YARN-5333.10.patch
>
>
> Enable RM HA and use FairScheduler, with 
> {{yarn.scheduler.fair.allow-undeclared-pools}} set to false and 
> {{yarn.scheduler.fair.user-as-default-queue}} set to false.
> Reproduce steps:
> 1. Start two RMs.
> 2. After the RMs are running, change the file 
> {{etc/hadoop/fair-scheduler.xml}} on both RMs to add some queues.
> 3. Submit some apps to the newly added queues.
> 4. Stop the active RM; the standby RM will then transition to active and 
> recover the apps.
> However, the new active RM will put the recovered apps into the default 
> queue because it might not have loaded the new {{fair-scheduler.xml}} yet. 
> We need to call {{initScheduler}} before starting active services, or move 
> {{refreshAll()}} in front of {{rm.transitionToActive()}}. *It seems this is 
> also important for other schedulers*.
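
A minimal sketch of the reordering described above (method names as mentioned 
in the description; not the literal committed diff):
{code}
// In AdminService#transitionToActive (sketch): refresh configuration,
// including fair-scheduler.xml, *before* activating services, so that app
// recovery sees the newly added queues instead of falling back to default.
refreshAll();              // reload scheduler allocation file, ACLs, etc.
rm.transitionToActive();   // then start active services and recover apps
{code}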



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5333) Some recovered apps are put into default queue when RM HA

2016-08-05 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409608#comment-15409608
 ] 

Rohith Sharma K S commented on YARN-5333:
-

Thanks Sunil and Jian, I will commit it shortly.

> Some recovered apps are put into default queue when RM HA
> -
>
> Key: YARN-5333
> URL: https://issues.apache.org/jira/browse/YARN-5333
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-5333.01.patch, YARN-5333.02.patch, 
> YARN-5333.03.patch, YARN-5333.04.patch, YARN-5333.05.patch, 
> YARN-5333.06.patch, YARN-5333.07.patch, YARN-5333.08.patch, 
> YARN-5333.09.patch, YARN-5333.10.patch
>
>
> Enable RM HA and use FairScheduler, with 
> {{yarn.scheduler.fair.allow-undeclared-pools}} set to false and 
> {{yarn.scheduler.fair.user-as-default-queue}} set to false.
> Reproduce steps:
> 1. Start two RMs.
> 2. After the RMs are running, change the file 
> {{etc/hadoop/fair-scheduler.xml}} on both RMs to add some queues.
> 3. Submit some apps to the newly added queues.
> 4. Stop the active RM; the standby RM will then transition to active and 
> recover the apps.
> However, the new active RM will put the recovered apps into the default 
> queue because it might not have loaded the new {{fair-scheduler.xml}} yet. 
> We need to call {{initScheduler}} before starting active services, or move 
> {{refreshAll()}} in front of {{rm.transitionToActive()}}. *It seems this is 
> also important for other schedulers*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5333) Some recovered apps are put into default queue when RM HA

2016-08-05 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409604#comment-15409604
 ] 

Sunil G commented on YARN-5333:
---

In that case, we could keep the existing test case itself. +1 from my side.

> Some recovered apps are put into default queue when RM HA
> -
>
> Key: YARN-5333
> URL: https://issues.apache.org/jira/browse/YARN-5333
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-5333.01.patch, YARN-5333.02.patch, 
> YARN-5333.03.patch, YARN-5333.04.patch, YARN-5333.05.patch, 
> YARN-5333.06.patch, YARN-5333.07.patch, YARN-5333.08.patch, 
> YARN-5333.09.patch, YARN-5333.10.patch
>
>
> Enable RM HA and use FairScheduler, with 
> {{yarn.scheduler.fair.allow-undeclared-pools}} set to false and 
> {{yarn.scheduler.fair.user-as-default-queue}} set to false.
> Reproduce steps:
> 1. Start two RMs.
> 2. After the RMs are running, change the file 
> {{etc/hadoop/fair-scheduler.xml}} on both RMs to add some queues.
> 3. Submit some apps to the newly added queues.
> 4. Stop the active RM; the standby RM will then transition to active and 
> recover the apps.
> However, the new active RM will put the recovered apps into the default 
> queue because it might not have loaded the new {{fair-scheduler.xml}} yet. 
> We need to call {{initScheduler}} before starting active services, or move 
> {{refreshAll()}} in front of {{rm.transitionToActive()}}. *It seems this is 
> also important for other schedulers*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5428) Allow for specifying the docker client configuration directory

2016-08-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409602#comment-15409602
 ] 

Allen Wittenauer commented on YARN-5428:


There's a bunch of different settings in play here. Some are more appropriate 
for the user to provide (credentials) and others are more appropriate for the 
admin to provide (e.g., proxy). This is especially true in a split network 
design. It really sounds like there needs to be a merge operation before the 
configuration gets sent to the cluster to actually be used.

Where's the design doc for all this work? It really feels like stuff is just 
getting committed without any long-term goal or design in place that has been 
shared.

> Allow for specifying the docker client configuration directory
> --
>
> Key: YARN-5428
> URL: https://issues.apache.org/jira/browse/YARN-5428
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
> Attachments: YARN-5428.001.patch, YARN-5428.002.patch, 
> YARN-5428.003.patch, YARN-5428.004.patch
>
>
> The docker client allows for specifying a configuration directory that 
> contains the docker client's configuration. It is common to store "docker 
> login" credentials in this config to avoid the need to docker login on each 
> cluster member.
> By default the docker client config is $HOME/.docker/config.json on Linux. 
> However, this does not work with the current container executor user 
> switching, and it may also be desirable to centralize this configuration 
> beyond a single user's home directory.
> Note that the command line arg is for the configuration directory, NOT the 
> configuration file.
> This change will be needed to allow YARN to automatically pull images at 
> localization time or within the container executor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5474) Typo mistake in AMRMClient#getRegisteredTimeineClient API

2016-08-05 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409599#comment-15409599
 ] 

Rohith Sharma K S commented on YARN-5474:
-

sure, pls go ahead

> Typo mistake in AMRMClient#getRegisteredTimeineClient API
> -
>
> Key: YARN-5474
> URL: https://issues.apache.org/jira/browse/YARN-5474
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Naganarasimha G R
>Priority: Trivial
>  Labels: newbie
>
> Just found a typo in this API name. It can be fixed since ATS is not 
> released in any version.
> {code}
>   /**
>* Get registered timeline client.
>* @return the registered timeline client
>*/
>   public TimelineClient getRegisteredTimeineClient() {
> return this.timelineClient;
>   }
> {code}
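
Presumably the fix is just the spelling of the method name; a sketch of the 
corrected signature (safe to rename, per the "not released" note above):
{code}
/**
 * Get registered timeline client.
 * @return the registered timeline client
 */
public TimelineClient getRegisteredTimelineClient() {
  return this.timelineClient;
}
{code}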



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-08-05 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409598#comment-15409598
 ] 

Rohith Sharma K S commented on YARN-5287:
-

+1 LGTM

> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Minor
> Attachments: YARN-5287-tmp.patch, YARN-5287.003.patch, 
> YARN-5287.004.patch, YARN-5287.005.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories (i.e., /hadoop/yarn/local/usercache/... by default) if the 
> cluster has been configured with a restrictive umask, e.g. umask 077. Jobs 
> fail with the following reason:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750
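
A minimal sketch of the usual remedy (not necessarily the committed patch): 
chmod the directory explicitly after creating it, since a plain mkdir is 
subject to the restrictive umask:
{code}
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

// Set the required 750 explicitly instead of relying on the process
// umask (which may be 077 on a hardened cluster).
FileContext lfs = FileContext.getLocalFSFileContext();
Path appDir = new Path("/hadoop/yarn/local/usercache/user/appcache/app_1");
lfs.mkdir(appDir, new FsPermission((short) 0750), true);
lfs.setPermission(appDir, new FsPermission((short) 0750));
{code}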



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5459) Add support for docker rm

2016-08-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409593#comment-15409593
 ] 

Allen Wittenauer commented on YARN-5459:


What's preventing a user from deleting a container that doesn't belong to them?

> Add support for docker rm
> -
>
> Key: YARN-5459
> URL: https://issues.apache.org/jira/browse/YARN-5459
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Minor
> Fix For: 2.9.0
>
> Attachments: YARN-5459.001.patch
>
>
> Add support for the docker rm command to be used for cleaning up exited and 
> failed containers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5410) Bootstrap Router module

2016-08-05 Thread Giovanni Matteo Fumarola (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated YARN-5410:
---
Attachment: YARN-5410-YARN-2915-v3.patch

> Bootstrap Router module
> ---
>
> Key: YARN-5410
> URL: https://issues.apache.org/jira/browse/YARN-5410
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: nodemanager, resourcemanager
>Reporter: Subru Krishnan
>Assignee: Giovanni Matteo Fumarola
> Attachments: YARN-5410-YARN-2915-v1.patch, 
> YARN-5410-YARN-2915-v2.patch, YARN-5410-YARN-2915-v3.patch
>
>
> As detailed in the proposal in the umbrella JIRA, we are introducing a new 
> component that routes client request to appropriate ResourceManager(s). This 
> JIRA tracks the creation of a new sub-module for the Router.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-3611) Support Docker Containers In LinuxContainerExecutor

2016-08-05 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-3611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409592#comment-15409592
 ] 

Allen Wittenauer commented on YARN-3611:


Why is this not being done in a branch?  

> Support Docker Containers In LinuxContainerExecutor
> ---
>
> Key: YARN-3611
> URL: https://issues.apache.org/jira/browse/YARN-3611
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: yarn
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
>
> Support Docker Containers In LinuxContainerExecutor
> LinuxContainerExecutor provides useful functionality today with respect to 
> localization, cgroups based resource management and isolation for CPU, 
> network, disk etc. as well as security with a well-defined mechanism to 
> execute privileged operations using the container-executor utility.  Bringing 
> docker support to LinuxContainerExecutor lets us use all of this 
> functionality when running docker containers under YARN, while not requiring 
> users and admins to configure and use a different ContainerExecutor. 
> There are several aspects here that need to be worked through:
> * Mechanism(s) to let clients request docker-specific functionality - we 
> could initially implement this via environment variables without impacting 
> the client API (see the sketch after this list).
> * Security - both docker daemon as well as application
> * Docker image localization
> * Running a docker container via container-executor as a specified user
> * “Isolate” the docker container in terms of CPU/network/disk/etc
> * Communicating with and/or signaling the running container (ensure correct 
> pid handling)
> * Figure out workarounds for certain performance-sensitive scenarios like 
> HDFS short-circuit reads 
> * All of these need to be achieved without changing the current behavior of 
> LinuxContainerExecutor
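
A sketch of that environment-variable mechanism (the variable names below are 
illustrative assumptions, not a settled API):
{code}
// Client side: request the docker runtime purely through the container
// launch environment, leaving the client API untouched.
Map<String, String> env = new HashMap<>();
env.put("YARN_CONTAINER_RUNTIME_TYPE", "docker");            // assumed name
env.put("YARN_CONTAINER_RUNTIME_DOCKER_IMAGE", "centos:7");  // assumed name
ContainerLaunchContext ctx = ContainerLaunchContext.newInstance(
    null, env, null, null, null, null);
{code}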



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (YARN-5333) Some recovered apps are put into default queue when RM HA

2016-08-05 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated YARN-5333:
--
Comment: was deleted

(was: I'm fine with that, thx)

> Some recovered apps are put into default queue when RM HA
> -
>
> Key: YARN-5333
> URL: https://issues.apache.org/jira/browse/YARN-5333
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-5333.01.patch, YARN-5333.02.patch, 
> YARN-5333.03.patch, YARN-5333.04.patch, YARN-5333.05.patch, 
> YARN-5333.06.patch, YARN-5333.07.patch, YARN-5333.08.patch, 
> YARN-5333.09.patch, YARN-5333.10.patch
>
>
> Enable RM HA and use FairScheduler, with 
> {{yarn.scheduler.fair.allow-undeclared-pools}} set to false and 
> {{yarn.scheduler.fair.user-as-default-queue}} set to false.
> Reproduce steps:
> 1. Start two RMs.
> 2. After the RMs are running, change the file 
> {{etc/hadoop/fair-scheduler.xml}} on both RMs to add some queues.
> 3. Submit some apps to the newly added queues.
> 4. Stop the active RM; the standby RM will then transition to active and 
> recover the apps.
> However, the new active RM will put the recovered apps into the default 
> queue because it might not have loaded the new {{fair-scheduler.xml}} yet. 
> We need to call {{initScheduler}} before starting active services, or move 
> {{refreshAll()}} in front of {{rm.transitionToActive()}}. *It seems this is 
> also important for other schedulers*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5333) Some recovered apps are put into default queue when RM HA

2016-08-05 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409576#comment-15409576
 ] 

Jian He commented on YARN-5333:
---

I'm fine with that, thx

> Some recovered apps are put into default queue when RM HA
> -
>
> Key: YARN-5333
> URL: https://issues.apache.org/jira/browse/YARN-5333
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-5333.01.patch, YARN-5333.02.patch, 
> YARN-5333.03.patch, YARN-5333.04.patch, YARN-5333.05.patch, 
> YARN-5333.06.patch, YARN-5333.07.patch, YARN-5333.08.patch, 
> YARN-5333.09.patch, YARN-5333.10.patch
>
>
> Enable RM HA and use FairScheduler, with 
> {{yarn.scheduler.fair.allow-undeclared-pools}} set to false and 
> {{yarn.scheduler.fair.user-as-default-queue}} set to false.
> Reproduce steps:
> 1. Start two RMs.
> 2. After the RMs are running, change the file 
> {{etc/hadoop/fair-scheduler.xml}} on both RMs to add some queues.
> 3. Submit some apps to the newly added queues.
> 4. Stop the active RM; the standby RM will then transition to active and 
> recover the apps.
> However, the new active RM will put the recovered apps into the default 
> queue because it might not have loaded the new {{fair-scheduler.xml}} yet. 
> We need to call {{initScheduler}} before starting active services, or move 
> {{refreshAll()}} in front of {{rm.transitionToActive()}}. *It seems this is 
> also important for other schedulers*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5333) Some recovered apps are put into default queue when RM HA

2016-08-05 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409577#comment-15409577
 ] 

Jian He commented on YARN-5333:
---

I'm fine with that, thx

> Some recovered apps are put into default queue when RM HA
> -
>
> Key: YARN-5333
> URL: https://issues.apache.org/jira/browse/YARN-5333
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-5333.01.patch, YARN-5333.02.patch, 
> YARN-5333.03.patch, YARN-5333.04.patch, YARN-5333.05.patch, 
> YARN-5333.06.patch, YARN-5333.07.patch, YARN-5333.08.patch, 
> YARN-5333.09.patch, YARN-5333.10.patch
>
>
> Enable RM HA and use FairScheduler, with 
> {{yarn.scheduler.fair.allow-undeclared-pools}} set to false and 
> {{yarn.scheduler.fair.user-as-default-queue}} set to false.
> Reproduce steps:
> 1. Start two RMs.
> 2. After the RMs are running, change the file 
> {{etc/hadoop/fair-scheduler.xml}} on both RMs to add some queues.
> 3. Submit some apps to the newly added queues.
> 4. Stop the active RM; the standby RM will then transition to active and 
> recover the apps.
> However, the new active RM will put the recovered apps into the default 
> queue because it might not have loaded the new {{fair-scheduler.xml}} yet. 
> We need to call {{initScheduler}} before starting active services, or move 
> {{refreshAll()}} in front of {{rm.transitionToActive()}}. *It seems this is 
> also important for other schedulers*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5287) LinuxContainerExecutor fails to set proper permission

2016-08-05 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409559#comment-15409559
 ] 

Naganarasimha G R commented on YARN-5287:
-

Thanks for the review, [~vvasudev]. Will commit this patch shortly!

> LinuxContainerExecutor fails to set proper permission
> -
>
> Key: YARN-5287
> URL: https://issues.apache.org/jira/browse/YARN-5287
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: nodemanager
>Affects Versions: 2.7.2
>Reporter: Ying Zhang
>Assignee: Ying Zhang
>Priority: Minor
> Attachments: YARN-5287-tmp.patch, YARN-5287.003.patch, 
> YARN-5287.004.patch, YARN-5287.005.patch
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> LinuxContainerExecutor fails to set the proper permissions on the local 
> directories (i.e., /hadoop/yarn/local/usercache/... by default) if the 
> cluster has been configured with a restrictive umask, e.g. umask 077. Jobs 
> fail with the following reason:
> Path /hadoop/yarn/local/usercache/ambari-qa/appcache/application_ has 
> permission 700 but needs permission 750



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5474) Typo mistake in AMRMClient#getRegisteredTimeineClient API

2016-08-05 Thread Naganarasimha G R (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409555#comment-15409555
 ] 

Naganarasimha G R commented on YARN-5474:
-

Simple fix; if you are not working on it, I will attach the patch shortly!

> Typo mistake in AMRMClient#getRegisteredTimeineClient API
> -
>
> Key: YARN-5474
> URL: https://issues.apache.org/jira/browse/YARN-5474
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Naganarasimha G R
>Priority: Trivial
>  Labels: newbie
>
> Just found a typo in this API name. It can be fixed since ATS is not 
> released in any version.
> {code}
>   /**
>* Get registered timeline client.
>* @return the registered timeline client
>*/
>   public TimelineClient getRegisteredTimeineClient() {
> return this.timelineClient;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Assigned] (YARN-5474) Typo mistake in AMRMClient#getRegisteredTimeineClient API

2016-08-05 Thread Naganarasimha G R (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naganarasimha G R reassigned YARN-5474:
---

Assignee: Naganarasimha G R

> Typo mistake in AMRMClient#getRegisteredTimeineClient API
> -
>
> Key: YARN-5474
> URL: https://issues.apache.org/jira/browse/YARN-5474
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Rohith Sharma K S
>Assignee: Naganarasimha G R
>Priority: Trivial
>  Labels: newbie
>
> Just found a typo in this API name. It can be fixed since ATS is not 
> released in any version.
> {code}
>   /**
>* Get registered timeline client.
>* @return the registered timeline client
>*/
>   public TimelineClient getRegisteredTimeineClient() {
> return this.timelineClient;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Created] (YARN-5474) Typo mistake in AMRMClient#getRegisteredTimeineClient API

2016-08-05 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-5474:
---

 Summary: Typo mistake in AMRMClient#getRegisteredTimeineClient API
 Key: YARN-5474
 URL: https://issues.apache.org/jira/browse/YARN-5474
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Rohith Sharma K S
Priority: Trivial


Just found a typo in this API name. It can be fixed since ATS is not 
released in any version.
{code}
  /**
   * Get registered timeline client.
   * @return the registered timeline client
   */
  public TimelineClient getRegisteredTimeineClient() {
return this.timelineClient;
  }
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5429) Fix @return related javadoc warnings in yarn-api

2016-08-05 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409336#comment-15409336
 ] 

Hudson commented on YARN-5429:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #10222 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10222/])
YARN-5429. Fix @return related javadoc warnings in yarn-api (Vrushali C) 
(varunsaxena: rev 4a26221021ec228a1726fd4905693473cd525796)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/FinishApplicationMasterResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/HAUtil.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/RegisterApplicationMasterResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetApplicationsRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/KillApplicationResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/ReservationListRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StartContainersResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/AllocateResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/ApplicationConstants.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/IncreaseContainersResourceResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/records/LocalResource.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/StopContainersResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetClusterNodesRequest.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetContainerStatusesResponse.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/api/protocolrecords/GetDelegationTokenResponse.java


> Fix @return related javadoc warnings in yarn-api
> 
>
> Key: YARN-5429
> URL: https://issues.apache.org/jira/browse/YARN-5429
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5429.01.patch, YARN-5429.02.patch, 
> YARN-5429.03.patch, YARN-5429.04.patch
>
>
> As part of YARN-4977, filing a subtask to fix a subset of the javadoc 
> warnings in yarn-api.
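
For context, a typical instance of such a warning and its fix (the method 
below is illustrative, not necessarily one touched by the patch): javadoc 8 
flags a bare {{@return}} tag, so the fix is adding the missing description.
{code}
/**
 * Get the list of newly updated nodes.
 * @return the {@link NodeReport}s for nodes updated since the previous
 *         allocate call (previously a bare, description-less tag)
 */
public abstract List<NodeReport> getUpdatedNodes();
{code}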



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5429) Fix @return related javadoc warnings in yarn-api

2016-08-05 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated YARN-5429:
---
Fix Version/s: (was: 2.9.0)
   3.0.0-alpha2

> Fix @return related javadoc warnings in yarn-api
> 
>
> Key: YARN-5429
> URL: https://issues.apache.org/jira/browse/YARN-5429
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Vrushali C
>Assignee: Vrushali C
> Fix For: 3.0.0-alpha2
>
> Attachments: YARN-5429.01.patch, YARN-5429.02.patch, 
> YARN-5429.03.patch, YARN-5429.04.patch
>
>
> As part of YARN-4977, filing a subtask to fix a subset of the javadoc 
> warnings in yarn-api.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5333) Some recovered apps are put into default queue when RM HA

2016-08-05 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409307#comment-15409307
 ] 

Rohith Sharma K S commented on YARN-5333:
-

Overall the patch looks clean and good now! 
One thing: I personally feel in favor of writing the test in code common to 
HA. If folks feel it is fine, I am fine committing it. cc: [~jianhe]

> Some recovered apps are put into default queue when RM HA
> -
>
> Key: YARN-5333
> URL: https://issues.apache.org/jira/browse/YARN-5333
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-5333.01.patch, YARN-5333.02.patch, 
> YARN-5333.03.patch, YARN-5333.04.patch, YARN-5333.05.patch, 
> YARN-5333.06.patch, YARN-5333.07.patch, YARN-5333.08.patch, 
> YARN-5333.09.patch, YARN-5333.10.patch
>
>
> Enable RM HA and use FairScheduler, with 
> {{yarn.scheduler.fair.allow-undeclared-pools}} set to false and 
> {{yarn.scheduler.fair.user-as-default-queue}} set to false.
> Reproduce steps:
> 1. Start two RMs.
> 2. After the RMs are running, change the file 
> {{etc/hadoop/fair-scheduler.xml}} on both RMs to add some queues.
> 3. Submit some apps to the newly added queues.
> 4. Stop the active RM; the standby RM will then transition to active and 
> recover the apps.
> However, the new active RM will put the recovered apps into the default 
> queue because it might not have loaded the new {{fair-scheduler.xml}} yet. 
> We need to call {{initScheduler}} before starting active services, or move 
> {{refreshAll()}} in front of {{rm.transitionToActive()}}. *It seems this is 
> also important for other schedulers*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5333) Some recovered apps are put into default queue when RM HA

2016-08-05 Thread Jun Gong (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409275#comment-15409275
 ] 

Jun Gong commented on YARN-5333:


Test case errors are not related; they are addressed in YARN-5157 and YARN-5057.

> Some recovered apps are put into default queue when RM HA
> -
>
> Key: YARN-5333
> URL: https://issues.apache.org/jira/browse/YARN-5333
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Jun Gong
>Assignee: Jun Gong
> Attachments: YARN-5333.01.patch, YARN-5333.02.patch, 
> YARN-5333.03.patch, YARN-5333.04.patch, YARN-5333.05.patch, 
> YARN-5333.06.patch, YARN-5333.07.patch, YARN-5333.08.patch, 
> YARN-5333.09.patch, YARN-5333.10.patch
>
>
> Enable RM HA and use FairScheduler, with 
> {{yarn.scheduler.fair.allow-undeclared-pools}} set to false and 
> {{yarn.scheduler.fair.user-as-default-queue}} set to false.
> Reproduce steps:
> 1. Start two RMs.
> 2. After the RMs are running, change the file 
> {{etc/hadoop/fair-scheduler.xml}} on both RMs to add some queues.
> 3. Submit some apps to the newly added queues.
> 4. Stop the active RM; the standby RM will then transition to active and 
> recover the apps.
> However, the new active RM will put the recovered apps into the default 
> queue because it might not have loaded the new {{fair-scheduler.xml}} yet. 
> We need to call {{initScheduler}} before starting active services, or move 
> {{refreshAll()}} in front of {{rm.transitionToActive()}}. *It seems this is 
> also important for other schedulers*.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-1011) [Umbrella] Schedule containers based on utilization of currently allocated containers

2016-08-05 Thread Anshul Pundir (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-1011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409217#comment-15409217
 ] 

Anshul Pundir commented on YARN-1011:
-

Hi [~kasha],

This feature is of quite a bit of interest to the Hadoop team at my company. 
I'm wondering if I can collaborate with you on this and help get it in?

-Anshul

> [Umbrella] Schedule containers based on utilization of currently allocated 
> containers
> -
>
> Key: YARN-1011
> URL: https://issues.apache.org/jira/browse/YARN-1011
> Project: Hadoop YARN
>  Issue Type: New Feature
>Reporter: Arun C Murthy
>Assignee: Karthik Kambatla
> Attachments: patch-for-yarn-1011.patch, yarn-1011-design-v0.pdf, 
> yarn-1011-design-v1.pdf, yarn-1011-design-v2.pdf
>
>
> Currently the RM allocates containers and assumes the allocated resources 
> are fully utilized.
> The RM can, and should, get to a point where it measures the utilization of 
> allocated containers and, if appropriate, allocates more (speculative?) 
> containers.
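
A purely illustrative sketch of the idea (the node accessors are hypothetical; 
{{Resources}} is the real YARN utility class): compare what a node has 
promised against what its containers actually use, and admit extra work 
against the headroom.
{code}
// Hypothetical utilization-aware check inside a scheduler node heartbeat.
Resource headroom = Resources.subtract(
    node.getAllocatedResource(),    // what the RM has handed out
    node.getUtilizedResource());    // what containers actually consume
if (Resources.fitsIn(request.getCapability(), headroom)) {
  // allocate an additional (speculative/opportunistic) container
}
{code}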



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5334) [YARN-3368] Introduce REFRESH button in various UI pages

2016-08-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409213#comment-15409213
 ] 

Hadoop QA commented on YARN-5334:
-

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 5s {color} 
| {color:red} Docker failed to build yetus/hadoop:6d3a5f5. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822268/YARN-5334-YARN-3368-0002.patch
 |
| JIRA Issue | YARN-5334 |
| Console output | 
https://builds.apache.org/job/PreCommit-YARN-Build/12656/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [YARN-3368] Introduce REFRESH button in various UI pages
> 
>
> Key: YARN-5334
> URL: https://issues.apache.org/jira/browse/YARN-5334
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5334-YARN-3368-0001.patch, 
> YARN-5334-YARN-3368-0002.patch
>
>
> It would be better to have a common Refresh button on all pages to get the 
> latest information in all tables, such as apps/nodes/queues, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Updated] (YARN-5334) [YARN-3368] Introduce REFRESH button in various UI pages

2016-08-05 Thread Sreenath Somarajapuram (JIRA)

 [ 
https://issues.apache.org/jira/browse/YARN-5334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreenath Somarajapuram updated YARN-5334:
-
Attachment: YARN-5334-YARN-3368-0002.patch

[~sunilg]
- Attaching a fresh patch with a blind unload of all loaded entities; 
hopefully it makes the refresh functionality foolproof.

Additional changes to minimize code duplication:
- Created an abstract route class for adding common route functionalities.
- Created a breadcrumb-bar component, with breadcrumb and refresh button.

> [YARN-3368] Introduce REFRESH button in various UI pages
> 
>
> Key: YARN-5334
> URL: https://issues.apache.org/jira/browse/YARN-5334
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: webapp
>Reporter: Sunil G
>Assignee: Sreenath Somarajapuram
> Attachments: YARN-5334-YARN-3368-0001.patch, 
> YARN-5334-YARN-3368-0002.patch
>
>
> It would be better to have a common Refresh button on all pages to get the 
> latest information in all tables, such as apps/nodes/queues, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5394) Remove bind-mount /etc/passwd to Docker Container

2016-08-05 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409161#comment-15409161
 ] 

Varun Vasudev commented on YARN-5394:
-

+1. I'll commit this on Monday if no one objects.

> Remove bind-mount /etc/passwd to Docker Container
> -
>
> Key: YARN-5394
> URL: https://issues.apache.org/jira/browse/YARN-5394
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Zhankun Tang
>Assignee: Zhankun Tang
> Attachments: YARN-5394.002.patch, YARN-5394.003.patch
>
>
> The current LCE (DockerLinuxContainerRuntime) bind-mounts /etc/passwd into 
> the container, and it seems to use the wrong target file name 
> "/etc/password" for the container:
> {panel}
> .addMountLocation("/etc/passwd", "/etc/password:ro");
> {panel}
> The biggest issue with bind-mounting /etc/passwd is that it overrides the 
> users defined in the Docker image, which is not expected. Removing it won't 
> affect existing use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org



[jira] [Commented] (YARN-5382) RM does not audit log kill request for active applications

2016-08-05 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/YARN-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15409011#comment-15409011
 ] 

Jian He commented on YARN-5382:
---

[~vrushalic], it looks like you reformatted the whole RMAppImpl java file and 
introduced a lot of format changes; would you revert those?

> RM does not audit log kill request for active applications
> --
>
> Key: YARN-5382
> URL: https://issues.apache.org/jira/browse/YARN-5382
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: resourcemanager
>Affects Versions: 2.7.2
>Reporter: Jason Lowe
>Assignee: Vrushali C
> Attachments: YARN-5382-branch-2.7.01.patch, 
> YARN-5382-branch-2.7.02.patch, YARN-5382-branch-2.7.03.patch, 
> YARN-5382-branch-2.7.04.patch, YARN-5382-branch-2.7.05.patch, 
> YARN-5382-branch-2.7.09.patch, YARN-5382-branch-2.7.10.patch, 
> YARN-5382-branch-2.7.11.patch, YARN-5382-branch-2.7.12.patch, 
> YARN-5382.06.patch, YARN-5382.07.patch, YARN-5382.08.patch, 
> YARN-5382.09.patch, YARN-5382.10.patch, YARN-5382.11.patch, YARN-5382.12.patch
>
>
> ClientRMService will audit a kill request, but only if it either fails to 
> issue the kill or the kill is sent to an already finished application. It 
> does not create a log entry when the application is active, which is 
> arguably the most important case to audit.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: yarn-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-issues-h...@hadoop.apache.org